{"text": "July 2022 Newsletter\n\n\nMIRI has put out three major new posts:\nAGI Ruin: A List of Lethalities. Eliezer Yudkowsky lists reasons AGI appears likely to cause an existential catastrophe, and reasons why he thinks the current research community—MIRI included—isn't succeeding at preventing this from happening\nA central AI alignment problem: capabilities generalization, and the sharp left turn. Nate Soares describes a core obstacle to aligning AGI systems: \n[C]apabilities generalize further than alignment (once capabilities start to generalize real well (which is a thing I predict will happen)). And this, by default, ruins your ability to direct the AGI (that has slipped down the capabilities well), and breaks whatever constraints you were hoping would keep it corrigible.\nOn Nate's model, very little work is currently going into this problem. He advocates for putting far more effort into addressing this challenge in particular, and making it a major focus of future work.\nSix Dimensions of Operational Adequacy in AGI Projects. Eliezer describes six criteria an AGI project likely needs to satisfy in order to have a realistic chance at preventing catastrophe at the time AGI is developed: trustworthy command, research closure, strong opsec, common good commitment, alignment mindset, and requisite resource levels.\nOther MIRI updates\n\nI (Rob Bensinger) wrote a post discussing the inordinately slow spread of good AGI conversations in ML.\nI want to signal-boost two of my forum comments: on AGI Ruin, a discussion of common mindset issues in thinking about AGI alignment; and on Six Dimensions, a comment on pivotal acts and \"strawberry-grade\" alignment.\nAlso, a quick note from me, in case this is non-obvious: MIRI leadership thinks that humanity never building AGI would mean the loss of nearly all of the future's value. If this were a live option, it would be an unacceptably bad one.\nNate discusses MIRI's past writing on recursive self-improvement (with good discussion in the comments).\nLet's See You Write That Corrigibility Tag: Eliezer posts a challenge to write a list of \"the sort of principles you'd build into a Bounded Thing meant to carry out some single task or task-class and not destroy the world by doing it\".\nFrom Eliezer: MIRI announces new \"Death With Dignity\" strategy. Although released on April Fools' Day (whence the silly title), the post body is an entirely non-joking account of Eliezer's current models, including his currently-high p(doom) and his recommendations on conditionalization and naïve consequentialism.\n\nNews and links\n\nPaul Christiano (link) and Zvi Mowshowitz (link) share their takes on the AGI Ruin post.\nGoogle's new large language model, Minerva, achieves 50.3% performance on the MATH dataset (problems at the level of high school math competitions), a dramatic improvement on the previous state of the art of 6.9%.\nJacob Steinhardt reports generally poor forecaster performance on predicting AI progress, with capabilities work moving faster than expected and robustness slower than expected. Outcomes for both the MATH and Massive Multitask Language Understanding datasets \"exceeded the 95th percentile prediction\".\nIn the wake of April/May/June results like Minerva, Google's PaLM, OpenAI's DALL-E, and DeepMind's Chinchilla and Gato, Metaculus' \"Date of Artificial General Intelligence\" forecast has dropped from 2057 to 2039. 
(I'll mention that Eliezer and Nate's timelines were already pretty short, and I'm not aware of any MIRI updates toward shorter timelines this year. I'll also note that I don't personally put much weight on Metaculus' AGI timeline predictions, since many of them are inconsistent and this is a difficult and weird domain to predict.)\nConjecture is a new London-based AI alignment startup with a focus on short-timeline scenarios, founded by EleutherAI alumni. The organization is currently hiring engineers and researchers, and is \"particularly interested in hiring devops and infrastructure engineers with supercomputing experience\".\n\n\nThe post July 2022 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "July 2022 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=1", "id": "2b3c2ab93beb471c23860075c03a554a"} {"text": "A central AI alignment problem: capabilities generalization, and the sharp left turn\n\n(This post was factored out of a larger post that I (Nate Soares) wrote, with help from Rob Bensinger, who also rearranged some pieces and added some text to smooth things out. I'm not terribly happy with it, but am posting it anyway (or, well, having Rob post it on my behalf while I travel) on the theory that it's better than nothing.)\n\nI expect navigating the acute risk period to be tricky for our civilization, for a number of reasons. Success looks to me to require clearing a variety of technical, sociopolitical, and moral hurdles, and while in principle sufficient mastery of solutions to the technical problems might substitute for solutions to the sociopolitical and other problems, it nevertheless looks to me like we need a lot of things to go right.\nSome sub-problems look harder to me than others. For instance, people are still regularly surprised when I tell them that I think the hard bits are much more technical than moral: it looks to me like figuring out how to aim an AGI at all is harder than figuring out where to aim it.[1]\nWithin the list of technical obstacles, there are some that strike me as more central than others, like \"figure out how to aim optimization\". And a big reason why I'm currently fairly pessimistic about humanity's odds is that it seems to me like almost nobody is focusing on the technical challenges that seem most central and unavoidable to me.\nMany people wrongly believe that I'm pessimistic because I think the alignment problem is extraordinarily difficult on a purely technical level. That's flatly false, and is pretty high up there on my list of least favorite misconceptions of my views.[2]\nI think the problem is a normal problem of mastering some scientific field, as humanity has done many times before. Maybe it's somewhat trickier, on account of (e.g.) 
intelligence being more complicated than, say, physics; maybe it's somewhat easier on account of how we have more introspective access to a working mind than we have to the low-level physical fields; but on the whole, I doubt it's all that qualitatively different than the sorts of summits humanity has surmounted before.\nIt's made trickier by the fact that we probably have to attain mastery of general intelligence before we spend a bunch of time working with general intelligences (on account of how we seem likely to kill ourselves by accident within a few years, once we have AGIs on hand, if no pivotal act occurs), but that alone is not enough to undermine my hope.\nWhat undermines my hope is that nobody seems to be working on the hard bits, and I don't currently expect most people to become convinced that they need to solve those hard bits until it's too late.\nBelow, I'll attempt to sketch out what I mean by \"the hard bits\" of the alignment problem. Although these look hard, I'm a believer in the capacity of humanity to solve technical problems at this level of difficulty when we put our minds to it. My concern is that I currently don't think the field is trying to solve this problem. My hope in writing this post is to better point at the problem, with a follow-on hope that this causes new researchers entering the field to attack what seem to me to be the central challenges head-on.\n \nDiscussion of a problem\nOn my model, one of the most central technical challenges of alignment—and one that every viable alignment plan will probably need to grapple with—is the issue that capabilities generalize better than alignment.\nMy guess for how AI progress goes is that at some point, some team gets an AI that starts generalizing sufficiently well, sufficiently far outside of its training distribution, that it can gain mastery of fields like physics, bioengineering, and psychology, to a high enough degree that it more-or-less singlehandedly threatens the entire world. Probably without needing explicit training for its most skilled feats, any more than humans needed many generations of killing off the least-successful rocket engineers to refine our brains towards rocket-engineering before humanity managed to achieve a moon landing.\nAnd in the same stroke that its capabilities leap forward, its alignment properties are revealed to be shallow, and to fail to generalize. The central analogy here is that optimizing apes for inclusive genetic fitness (IGF) doesn't make the resulting humans optimize mentally for IGF. Like, sure, the apes are eating because they have a hunger instinct and having sex because it feels good—but it's not like they could be eating/fornicating due to explicit reasoning about how those activities lead to more IGF. They can't yet perform the sort of abstract reasoning that would correctly justify those actions in terms of IGF. And then, when they start to generalize well in the way of humans, they predictably don't suddenly start eating/fornicating because of abstract reasoning about IGF, even though they now could. Instead, they invent condoms, and fight you if you try to remove their enjoyment of good food (telling them to just calculate IGF manually). 
The alignment properties you lauded before the capabilities started to generalize, predictably fail to generalize with the capabilities.\n\nSome people I say this to respond with arguments like: \"Surely, before a smaller team could get an AGI that can master subjects like biotech and engineering well enough to kill all humans, some other, larger entity such as a state actor will have a somewhat worse AI that can handle biotech and engineering somewhat less well, but in a way that prevents any one AGI from running away with the whole future?\"\nI respond with arguments like, \"In the one real example of intelligence being developed we have to look at, continuous application of natural selection in fact found Homo sapiens sapiens, and the capability-gain curves of the ecosystem for various measurables were in fact sharply kinked by this new species (e.g., using machines, we sharply outperform other animals on well-established metrics such as \"airspeed\", \"altitude\", and \"cargo carrying capacity\").\"\nTheir response in turn is generally some variant of \"well, natural selection wasn't optimizing very intelligently\" or \"maybe humans weren't all that sharply above evolutionary trends\" or \"maybe the power that let humans beat the rest of the ecosystem was simply the invention of culture, and nothing embedded in our own already-existing culture can beat us\" or suchlike.\nRather than arguing further here, I'll just say that failing to believe the hard problem exists is one surefire way to avoid tackling it.\nSo, flatly summarizing my point instead of arguing for it: it looks to me like there will at some point be some sort of \"sharp left turn\", as systems start to work really well in domains really far beyond the environments of their training—domains that allow for significant reshaping of the world, in the way that humans reshape the world and chimps don't. And that's where (according to me) things start to get crazy. In particular, I think that once AI capabilities start to generalize in this particular way, it's predictably the case that the alignment of the system will fail to generalize with it.[3]\nThis is slightly upstream of a couple other challenges I consider quite core and difficult to avoid, including:\n\nDirecting a capable AGI towards an objective of your choosing.\nEnsuring that the AGI is low-impact, conservative, shutdownable, and otherwise corrigible.\n\nThese two problems appear in the strawberry problem, which Eliezer's been pointing at for quite some time: the problem of getting an AI to place two identical (down to the cellular but not molecular level) strawberries on a plate, and then do nothing else. The demand of cellular-level copying forces the AI to be capable; the fact that we can get it to duplicate a strawberry instead of doing some other thing demonstrates our ability to direct it; the fact that it does nothing else indicates that it's corrigible (or really well aligned to a delicate human intuitive notion of inaction).\nHow is the \"capabilities generalize further than alignment\" problem upstream of these problems? Suppose that the fictional team OpenMind is training up a variety of AI systems, before one of them takes that sharp left turn. Suppose they've put the AI in lots of different video-game and simulated environments, and they've had good luck training it to pursue an objective that the operators described in English. 
\"I don't know what those MIRI folks were talking about; these systems are easy to direct; simple training suffices\", they say. At the same time, they apply various training methods, some simple and some clever, to cause the system to allow itself to be removed from various games by certain \"operator-designated\" characters in those games, in the name of shutdownability. And they use various techniques to prevent it from stripmining in Minecraft, in the name of low-impact. And they train it on a variety of moral dilemmas, and find that it can be trained to give correct answers to moral questions (such as \"in thus-and-such a circumstance, should you poison the operator's opponent?\") just as well as it can be trained to give correct answers to any other sort of question. \"Well,\" they say, \"this alignment thing sure was easy. I guess we lucked out.\"\nThen, the system takes that sharp left turn,[4][5] and, predictably, the capabilities quickly improve outside of its training distribution, while the alignment falls apart.\nThe techniques OpenMind used to train it away from the error where it convinces itself that bad situations are unlikely? Those generalize fine. The techniques you used to train it to allow the operators to shut it down? Those fall apart, and the AGI starts wanting to avoid shutdown, including wanting to deceive you if it's useful to do so.\nWhy does alignment fail while capabilities generalize, at least by default and in predictable practice? In large part, because good capabilities form something like an attractor well. (That's one of the reasons to expect intelligent systems to eventually make that sharp left turn if you push them far enough, and it's why natural selection managed to stumble into general intelligence with no understanding, foresight, or steering.)\nMany different training scenarios are teaching your AI the same instrumental lessons, about how to think in accurate and useful ways. Furthermore, those lessons are underwritten by a simple logical structure, much like the simple laws of arithmetic that abstractly underwrite a wide variety of empirical arithmetical facts about what happens when you add four people's bags of apples together on a table and then divide the contents among two people.\nBut that attractor well? It's got a free parameter. And that parameter is what the AGI is optimizing for. And there's no analogously-strong attractor well pulling the AGI's objectives towards your preferred objectives.\nThe hard left turn? That's your system sliding into the capabilities well. (You don't need to fall all that far to do impressive stuff; humans are better at an enormous variety of relevant skills than chimps, but they aren't all that lawful in an absolute sense.)\nThere's no analogous alignment well to slide into.\nOn the contrary, sliding down the capabilities well is liable to break a bunch of your existing alignment properties.[6]\nWhy? Because things in the capabilities well have instrumental incentives that cut against your alignment patches. 
Just like how your previous arithmetic errors (such as the pebble sorters on the wrong side of the Great War of 1957) get steamrolled by the development of arithmetic, so too will your attempts to make the AGI low-impact and shutdownable ultimately (by default, and in the absence of technical solutions to core alignment problems) get steamrolled by a system that pits those reflexes / intuitions / much-more-alien-behavioral-patterns against the convergent instrumental incentive to survive the day.\nPerhaps this is not convincing; perhaps to convince you we'd need to go deeper into the weeds of the various counterarguments. (Like acknowledging that humans, who can foresee these difficulties and adjust their training procedures accordingly, have a better chance than natural selection did, while then discussing why current proposals do not seem to me to be hopeful.) But hopefully you can at least, in reading this document, develop a basic understanding of my position.\nStating it again, in summary: my position is that capabilities generalize further than alignment (once capabilities start to generalize real well (which is a thing I predict will happen)). And this, by default, ruins your ability to direct the AGI (that has slipped down the capabilities well), and breaks whatever constraints you were hoping would keep it corrigible. And addressing the problem looks like finding some way to either keep your system aligned through that sharp left turn, or render it aligned afterwards.\nIn an upcoming post, I'll say more about how it looks to me like ~nobody is working on this particular hard problem, by briefly reviewing a variety of current alignment research proposals. In short, I think that the field's current range of approaches nearly all assume this problem away, or direct their attention elsewhere.\n \n\n \n\n^\n\nFurthermore, figuring out where to aim it looks to me like more of a technical problem than a moral problem. Attempting to manually specify the nature of goodness is a doomed endeavor, of course, but that's fine, because we can instead specify processes for figuring out (the coherent extrapolation of) what humans value. Which still looks prohibitively difficult as a goal to give humanity's first AGI (which I expect to be deployed under significant time pressure), mind you, and I further recommend aiming humanity's first AGI systems at simple limited goals that end the acute risk period and then cede stewardship of the future to some process that can reliably do the \"aim minds towards the right thing\" thing. So today's alignment problems are a few steps removed from tricky moral questions, on my models.\n\n\n^\n\nWhile we're at it: I think trying to get provable safety guarantees about our AGI systems is silly, and I'm pretty happy to follow Eliezer in calling an AGI \"safe\" if it has a <50% chance of killing >1B people. Also, I think there's a very large chance of AGI killing us, and I thoroughly disclaim the argument that even if the probability is tiny then we should work on it anyway because the stakes are high.\n\n\n^\n\nNote that this is consistent with findings like \"large language models perform just as well on moral dilemmas as they perform on non-moral ones\"; to find this reassuring is to misunderstand the problem. Chimps have an easier time than squirrels following and learning from human cues. 
Yet this fact doesn't particularly mean that enhanced chimps are more likely than enhanced squirrels to remove their hunger drives, once they understand inclusive genetic fitness and are able to eat purely for reasons of fitness maximization. Pre-left-turn AIs will get better at various 'alignment' metrics, in ways that I expect to build a false sense of security, without addressing the lurking difficulties.\n\n\n^\n\n\"What do you mean 'it takes a sharp left turn'? Are you talking about recursive self-improvement? I thought you said somewhere else that you don't think recursive self-improvement is necessarily going to play a central role before the extinction of humanity?\" I'm not talking about recursive self-improvement. That's one way to take a sharp left turn, and it could happen, but note that humans have neither the understanding nor control over their own minds to recursively self-improve, and we outstrip the rest of the animals pretty handily. I'm talking about something more like \"intelligence that is general enough to be dangerous\", the sort of thing that humans have and chimps don't.\n\n\n^\n\n\"Hold on, isn't this unfalsifiable? Aren't you saying that you're going to continue believing that alignment is hard, even as we get evidence that it's easy?\" Well, I contend that \"GPT can learn to answer moral questions just as well as it can learn to answer other questions\" is not much evidence either way about the difficulty of alignment. I'm not saying we'll get evidence that I'll ignore; I'm naming in advance some things that I wouldn't consider negative evidence (partially in hopes that I can refer back to this post when people crow later and request an update). But, yes, my model does have the inconvenient property that people who are skeptical now, are liable to remain skeptical until it's too late, because most of the evidence I expect to give us advance warning about the nature of the problem is evidence that we've already seen. I assure you that I do not consider this property to be convenient.\nAs for things that could convince me otherwise: technical understanding of intelligence could undermine my \"sharp left turn\" model. I could also imagine observing some ephemeral hopefully-I'll-know-it-when-I-see-it capabilities thresholds, without any sharp left turns, that might update me. (Short of \"full superintelligence without a sharp left turn\", which would obviously convince me but comes too late in the game to shift my attention.)\n\n\n^\n\nTo use my overly-detailed evocative example from earlier: Humans aren't tempted to rewire our own brains so that we stop liking good meals for the sake of good meals, and start eating only insofar as we know we have to eat to reproduce (or, rather, maximize inclusive genetic fitness) (after upgrading the rest of our minds such that that sort of calculation doesn't drag down the rest of the fitness maximization). The cleverer humans are chomping at the bit to have their beliefs be more accurate, but they're not chomping at the bit to replace all these mere-shallow-correlates of inclusive genetic fitness with explicit maximization. 
So too with other minds, at least by default: that which makes them generally intelligent, does not make them motivated by your objectives.\n\n\n\n\nThe post A central AI alignment problem: capabilities generalization, and the sharp left turn appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "A central AI alignment problem: capabilities generalization, and the sharp left turn", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=1", "id": "f45650ad73a67bc462329befaf4d7304"} {"text": "AGI Ruin: A List of Lethalities\n\n\nPreamble:\n(If you're already familiar with all basics and don't want any preamble, skip ahead to Section B for technical difficulties of alignment proper.)\nI have several times failed to write up a well-organized list of reasons why AGI will kill you.  People come in with different ideas about why AGI would be survivable, and want to hear different obviously key points addressed first.  Some fraction of those people are loudly upset with me if the obviously most important points aren't addressed immediately, and I address different points first instead.\nHaving failed to solve this problem in any good way, I now give up and solve it poorly with a poorly organized list of individual rants.  I'm not particularly happy with this list; the alternative was publishing nothing, and publishing this seems marginally more dignified.\nThree points about the general subject matter of discussion here, numbered so as not to conflict with the list of lethalities:\n-3.  I'm assuming you are already familiar with some basics, and already know what 'orthogonality' and 'instrumental convergence' are and why they're true.  People occasionally claim to me that I need to stop fighting old wars here, because, those people claim to me, those wars have already been won within the important-according-to-them parts of the current audience.  I suppose it's at least true that none of the current major EA funders seem to be visibly in denial about orthogonality or instrumental convergence as such; so, fine.  If you don't know what 'orthogonality' or 'instrumental convergence' are, or don't see for yourself why they're true, you need a different introduction than this one.\n-2.  When I say that alignment is lethally difficult, I am not talking about ideal or perfect goals of 'provable' alignment, nor total alignment of superintelligences on exact human values, nor getting AIs to produce satisfactory arguments about moral dilemmas which sorta-reasonable humans disagree about, nor attaining an absolute certainty of an AI not killing everyone.  When I say that alignment is difficult, I mean that in practice, using the techniques we actually have, \"please don't disassemble literally everyone with probability roughly 1\" is an overly large ask that we are not on course to get.  So far as I'm concerned, if you can get a powerful AGI that carries out some pivotal superhuman engineering task, with a less than fifty percent chance of killing more than one billion people, I'll take it.  Even smaller chances of killing even fewer people would be a nice luxury, but if you can get as incredibly far as \"less than roughly certain to kill everybody\", then you can probably get down to under a 5% chance with only slightly more effort.  Practically all of the difficulty is in getting to \"less than certainty of killing literally everyone\".  
Trolley problems are not an interesting subproblem in all of this; if there are any survivors, you solved alignment.  At this point, I no longer care how it works, I don't care how you got there, I am cause-agnostic about whatever methodology you used, all I am looking at is prospective results, all I want is that we have justifiable cause to believe of a pivotally useful AGI 'this will not kill literally everyone'.  Anybody telling you I'm asking for stricter 'alignment' than this has failed at reading comprehension.  The big ask from AGI alignment, the basic challenge I am saying is too difficult, is to obtain by any strategy whatsoever a significant chance of there being any survivors.\n-1.  None of this is about anything being impossible in principle.  The metaphor I usually use is that if a textbook from one hundred years in the future fell into our hands, containing all of the simple ideas that actually work robustly in practice, we could probably build an aligned superintelligence in six months.  For people schooled in machine learning, I use as my metaphor the difference between ReLU activations and sigmoid activations.  Sigmoid activations are complicated and fragile, and do a terrible job of transmitting gradients through many layers; ReLUs are incredibly simple (for the unfamiliar, the activation function is literally max(x, 0)) and work much better.  Most neural networks for the first decades of the field used sigmoids; the idea of ReLUs wasn't discovered, validated, and popularized until decades later.  What's lethal is that we do not have the Textbook From The Future telling us all the simple solutions that actually in real life just work and are robust; we're going to be doing everything with metaphorical sigmoids on the first critical try.  No difficulty discussed here about AGI alignment is claimed by me to be impossible – to merely human science and engineering, let alone in principle – if we had 100 years to solve it using unlimited retries, the way that science usually has an unbounded time budget and unlimited retries.  This list of lethalities is about things we are not on course to solve in practice in time on the first critical try; none of it is meant to make a much stronger claim about things that are impossible in principle.\nThat said:\nHere, from my perspective, are some different true things that could be said, to contradict various false things that various different people seem to believe, about why AGI would be survivable on anything remotely resembling the current pathway, or any other pathway we can easily jump to.\n \nSection A:\nThis is a very lethal problem, it has to be solved one way or another, it has to be solved at a minimum strength and difficulty level instead of various easier modes that some dream about, we do not have any visible option of 'everyone' retreating to only solve safe weak problems instead, and failing on the first really dangerous try is fatal.\n \n1.  Alpha Zero blew past all accumulated human knowledge about Go after a day or so of self-play, with no reliance on human playbooks or sample games.  Anyone relying on \"well, it'll get up to human capability at Go, but then have a hard time getting past that because it won't be able to learn from humans any more\" would have relied on vacuum.  AGI will not be upper-bounded by human ability or human learning speed.  
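(Editor's aside, not part of the original post: a minimal numerical sketch of the sigmoid/ReLU metaphor in point -1 above. The fixed operating point x = 0.5 and the bare chain-rule product are illustrative simplifications of mine, not anything from the source.)

```python
# Sketch: how a gradient signal fares when passed through many stacked
# activations. A sigmoid's local derivative never exceeds 0.25, so the product
# of derivatives shrinks geometrically with depth; a ReLU's derivative is
# exactly 1 wherever the unit is active, so the signal passes through intact.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)          # bounded above by 0.25

def relu_grad(x):
    return 1.0 if x > 0 else 0.0  # 1 on the active side, 0 otherwise

def backprop_factor(local_grad, depth, x=0.5):
    """Product of `depth` local activation derivatives at a fixed operating
    point: the factor by which a backpropagated gradient gets scaled."""
    factor = 1.0
    for _ in range(depth):
        factor *= local_grad(x)
    return factor

for depth in (5, 20, 50):
    print(depth,
          backprop_factor(sigmoid_grad, depth),   # shrinks toward zero
          backprop_factor(relu_grad, depth))      # stays exactly 1.0
```

This only illustrates the vanishing-gradient contrast itself; the point in the text is the historical one, that the simple robust idea sat undiscovered for decades while everyone worked with the fragile alternative.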
Things much smarter than human would be able to learn from less evidence than humans require to have ideas driven into their brains; there are theoretical upper bounds here, but those upper bounds seem very high. (Eg, each bit of information that couldn't already be fully predicted can eliminate at most half the probability mass of all hypotheses under consideration.)  It is not naturally (by default, barring intervention) the case that everything takes place on a timescale that makes it easy for us to react.\n\n2.  A cognitive system with sufficiently high cognitive powers, given any medium-bandwidth channel of causal influence, will not find it difficult to bootstrap to overpowering capabilities independent of human infrastructure.  The concrete example I usually use here is nanotech, because there's been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point.  My lower-bound model of \"how a sufficiently powerful intelligence would kill everyone, if it didn't want to not do that\" is that it gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they're dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery.  (Back when I was first deploying this visualization, the wise-sounding critics said \"Ah, but how do you know even a superintelligence could solve the protein folding problem, if it didn't already have planet-sized supercomputers?\" but one hears less of this after the advent of AlphaFold 2, for some odd reason.)  The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth's atmosphere, get into human bloodstreams and hide, strike on a timer.  Losing a conflict with a high-powered cognitive system looks at least as deadly as \"everybody on the face of the Earth suddenly falls over dead within the same second\".  (I am using awkward constructions like 'high cognitive power' because standard English terms like 'smart' or 'intelligent' appear to me to function largely as status synonyms.  'Superintelligence' sounds to most people like 'something above the top of the status hierarchy that went to double college', and they don't understand why that would be all that dangerous?  Earthlings have no word and indeed no standard native concept that means 'actually useful cognitive power'.  A large amount of failure to panic sufficiently, seems to me to stem from a lack of appreciation for the incredible potential lethality of this thing that Earthlings as a culture have not named.)\n3.  We need to get alignment right on the 'first critical try' at operating at a 'dangerous' level of intelligence, where unaligned operation at a dangerous level of intelligence kills everybody on Earth and then we don't get to try again.  
This includes, for example: (a) something smart enough to build a nanosystem which has been explicitly authorized to build a nanosystem; or (b) something smart enough to build a nanosystem and also smart enough to gain unauthorized access to the Internet and pay a human to put together the ingredients for a nanosystem; or (c) something smart enough to get unauthorized access to the Internet and build something smarter than itself on the number of machines it can hack; or (d) something smart enough to treat humans as manipulable machinery and which has any authorized or unauthorized two-way causal channel with humans; or (e) something smart enough to improve itself enough to do (b) or (d); etcetera.  We can gather all sorts of information beforehand from less powerful systems that will not kill us if we screw up operating them; but once we are running more powerful systems, we can no longer update on sufficiently catastrophic errors.  This is where practically all of the real lethality comes from, that we have to get things right on the first sufficiently-critical try.  If we had unlimited retries – if every time an AGI destroyed all the galaxies we got to go back in time four years and try again – we would in a hundred years figure out which bright ideas actually worked.  Human beings can figure out pretty difficult things over time, when they get lots of tries; when a failed guess kills literally everyone, that is harder.  That we have to get a bunch of key stuff right on the first try is where most of the lethality really and ultimately comes from; likewise the fact that no authority is here to tell us a list of what exactly is 'key' and will kill us if we get it wrong.  (One remarks that most people are so absolutely and flatly unprepared by their 'scientific' educations to challenge pre-paradigmatic puzzles with no scholarly authoritative supervision, that they do not even realize how much harder that is, or how incredibly lethal it is to demand getting that right on the first critical try.)\n4.  We can't just \"decide not to build AGI\" because GPUs are everywhere, and knowledge of algorithms is constantly being improved and published; 2 years after the leading actor has the capability to destroy the world, 5 other actors will have the capability to destroy the world.  The given lethal challenge is to solve within a time limit, driven by the dynamic in which, over time, increasingly weak actors with a smaller and smaller fraction of total computing power, become able to build AGI and destroy the world.  Powerful actors all refraining in unison from doing the suicidal thing just delays this time limit – it does not lift it, unless computer hardware and computer software progress are both brought to complete severe halts across the whole Earth.  The current state of this cooperation to have every big actor refrain from doing the stupid thing, is that at present some large actors with a lot of researchers and computing power are led by people who vocally disdain all talk of AGI safety (eg Facebook AI Research).  Note that needing to solve AGI alignment only within a time limit, but with unlimited safe retries for rapid experimentation on the full-powered system; or only on the first critical try, but with an unlimited time bound; would both be terrifically humanity-threatening challenges by historical standards individually.\n5.  
We can't just build a very weak system, which is less dangerous because it is so weak, and declare victory; because later there will be more actors that have the capability to build a stronger system and one of them will do so.  I've also in the past called this the 'safe-but-useless' tradeoff, or 'safe-vs-useful'.  People keep on going \"why don't we only use AIs to do X, that seems safe\" and the answer is almost always either \"doing X in fact takes very powerful cognition that is not passively safe\" or, even more commonly, \"because restricting yourself to doing X will not prevent Facebook AI Research from destroying the world six months later\".  If all you need is an object that doesn't do dangerous things, you could try a sponge; a sponge is very passively safe.  Building a sponge, however, does not prevent Facebook AI Research from destroying the world six months later when they catch up to the leading actor.\n6.  We need to align the performance of some large task, a 'pivotal act' that prevents other people from building an unaligned AGI that destroys the world.  While the number of actors with AGI is few or one, they must execute some \"pivotal act\", strong enough to flip the gameboard, using an AGI powerful enough to do that.  It's not enough to be able to align a weak system – we need to align a system that can do some single very large thing.  The example I usually give is \"burn all GPUs\".  This is not what I think you'd actually want to do with a powerful AGI – the nanomachines would need to operate in an incredibly complicated open environment to hunt down all the GPUs, and that would be needlessly difficult to align.  However, all known pivotal acts are currently outside the Overton Window, and I expect them to stay there.  So I picked an example where if anybody says \"how dare you propose burning all GPUs?\" I can say \"Oh, well, I don't actually advocate doing that; it's just a mild overestimate for the rough power level of what you'd have to do, and the rough level of machine cognition required to do that, in order to prevent somebody else from destroying the world in six months or three years.\"  (If it wasn't a mild overestimate, then 'burn all GPUs' would actually be the minimal pivotal task and hence correct answer, and I wouldn't be able to give that denial.)  Many clever-sounding proposals for alignment fall apart as soon as you ask \"How could you use this to align a system that you could use to shut down all the GPUs in the world?\" because it's then clear that the system can't do something that powerful, or, if it can do that, the system wouldn't be easy to align.  A GPU-burner is also a system powerful enough to, and purportedly authorized to, build nanotechnology, so it requires operating in a dangerous domain at a dangerous level of intelligence and capability; and this goes along with any non-fantasy attempt to name a way an AGI could change the world such that a half-dozen other would-be AGI-builders won't destroy the world 6 months later.\n7.  The reason why nobody in this community has successfully named a 'pivotal weak act' where you do something weak enough with an AGI to be passively safe, but powerful enough to prevent any other AGI from destroying the world a year later – and yet also we can't just go do that right now and need to wait on AI – is that nothing like that exists.  There's no reason why it should exist.  There is not some elaborate clever reason why it exists but nobody can see it.  
It takes a lot of power to do something to the current world that prevents any other AGI from coming into existence; nothing which can do that is passively safe in virtue of its weakness.  If you can't solve the problem right now (which you can't, because you're opposed to other actors who don't want to be solved and those actors are on roughly the same level as you) then you are resorting to some cognitive system that can do things you could not figure out how to do yourself, that you were not close to figuring out because you are not close to being able to, for example, burn all GPUs.  Burning all GPUs would actually stop Facebook AI Research from destroying the world six months later; weaksauce Overton-abiding stuff about 'improving public epistemology by setting GPT-4 loose on Twitter to provide scientifically literate arguments about everything' will be cool but will not actually prevent Facebook AI Research from destroying the world six months later, or some eager open-source collaborative from destroying the world a year later if you manage to stop FAIR specifically.  There are no pivotal weak acts.\n8.  The best and easiest-found-by-optimization algorithms for solving problems we want an AI to solve, readily generalize to problems we'd rather the AI not solve; you can't build a system that only has the capability to drive red cars and not blue cars, because all red-car-driving algorithms generalize to the capability to drive blue cars.\n9.  The builders of a safe system, by hypothesis on such a thing being possible, would need to operate their system in a regime where it has the capability to kill everybody or make itself even more dangerous, but has been successfully designed to not do that.  Running AGIs doing something pivotal are not passively safe, they're the equivalent of nuclear cores that require actively maintained design properties to not go supercritical and melt down.\n \nSection B:\nOkay, but as we all know, modern machine learning is like a genie where you just give it a wish, right?  Expressed as some mysterious thing called a 'loss function', but which is basically just equivalent to an English wish phrasing, right?  And then if you pour in enough computing power you get your wish, right?  So why not train a giant stack of transformer layers on a dataset of agents doing nice things and not bad things, throw in the word 'corrigibility' somewhere, crank up that computing power, and get out an aligned AGI?\n \nSection B.1:  The distributional leap.\n10.  You can't train alignment by running lethally dangerous cognitions, observing whether the outputs kill or deceive or corrupt the operators, assigning a loss, and doing supervised learning.  On anything like the standard ML paradigm, you would need to somehow generalize optimization-for-alignment you did in safe conditions, across a big distributional shift to dangerous conditions.  (Some generalization of this seems like it would have to be true even outside that paradigm; you wouldn't be working on a live unaligned superintelligence to align it.)  This alone is a point that is sufficient to kill a lot of naive proposals from people who never did or could concretely sketch out any specific scenario of what training they'd do, in order to align what output – which is why, of course, they never concretely sketch anything like that.  
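(Editor's aside, not part of the original list: a toy sketch of the distributional-shift worry in point 10, which continues just below. The setup, feature names, and numbers are my own illustrative assumptions. A classifier learns to lean on a spurious feature that happens to track the label in the "safe" training regime; when that correlation changes at deployment, the learned behavior collapses even though the model itself is unchanged.)

```python
# Toy distributional shift: a property that holds on the training distribution
# (high accuracy, driven by a spurious feature) fails to generalize once the
# deployment distribution no longer matches it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, spurious_agreement):
    """Labels depend weakly on a genuine feature; a second, spurious feature
    agrees with the label with probability `spurious_agreement`."""
    y = rng.integers(0, 2, n)
    genuine = y + rng.normal(0.0, 2.0, n)            # weak real signal
    agree = rng.random(n) < spurious_agreement
    spurious = np.where(agree, y, 1 - y) + rng.normal(0.0, 0.1, n)
    return np.column_stack([genuine, spurious]), y

X_train, y_train = make_data(5000, spurious_agreement=0.95)  # "safe" regime
X_shift, y_shift = make_data(5000, spurious_agreement=0.05)  # shifted regime

clf = LogisticRegression().fit(X_train, y_train)
print("train-distribution accuracy:  ", clf.score(X_train, y_train))  # ~0.95
print("shifted-distribution accuracy:", clf.score(X_shift, y_shift))  # far below chance
```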
Powerful AGIs doing dangerous things that will kill you if misaligned, must have an alignment property that generalized far out-of-distribution from safer building/training operations that didn't kill you.  This is where a huge amount of lethality comes from on anything remotely resembling the present paradigm.  Unaligned operation at a dangerous level of intelligence*capability will kill you; so, if you're starting with an unaligned system and labeling outputs in order to get it to learn alignment, the training regime or building regime must be operating at some lower level of intelligence*capability that is passively safe, where its currently-unaligned operation does not pose any threat.  (Note that anything substantially smarter than you poses a threat given any realistic level of capability.  Eg, \"being able to produce outputs that humans look at\" is probably sufficient for a generally much-smarter-than-human AGI to navigate its way out of the causal systems that are humans, especially in the real world where somebody trained the system on terabytes of Internet text, rather than somehow keeping it ignorant of the latent causes of its source code and training environments.)\n11.  If cognitive machinery doesn't generalize far out of the distribution where you did tons of training, it can't solve problems on the order of 'build nanotechnology' where it would be too expensive to run a million training runs of failing to build nanotechnology.  There is no pivotal act this weak; there's no known case where you can entrain a safe level of ability on a safe environment where you can cheaply do millions of runs, and deploy that capability to save the world and prevent the next AGI project up from destroying the world two years later.  Pivotal weak acts like this aren't known, and not for want of people looking for them.  So, again, you end up needing alignment to generalize way out of the training distribution – not just because the training environment needs to be safe, but because the training environment probably also needs to be cheaper than evaluating some real-world domain in which the AGI needs to do some huge act.  You don't get 1000 failed tries at burning all GPUs – because people will notice, even leaving out the consequences of capabilities success and alignment failure.\n12.  Operating at a highly intelligent level is a drastic shift in distribution from operating at a less intelligent level, opening up new external options, and probably opening up even more new internal choices and modes.  Problems that materialize at high intelligence and danger levels may fail to show up at safe lower levels of intelligence, or may recur after being suppressed by a first patch.\n13.  Many alignment problems of superintelligence will not naturally appear at pre-dangerous, passively-safe levels of capability.  Consider the internal behavior 'change your outer behavior to deliberately look more aligned and deceive the programmers, operators, and possibly any loss functions optimizing over you'.  This problem is one that will appear at the superintelligent level; if, being otherwise ignorant, we guess that it is among the median such problems in terms of how early it naturally appears in earlier systems, then around half of the alignment problems of superintelligence will first naturally materialize after that one first starts to appear.  
Given correct foresight of which problems will naturally materialize later, one could try to deliberately materialize such problems earlier, and get in some observations of them.  This helps to the extent (a) that we actually correctly forecast all of the problems that will appear later, or some superset of those; (b) that we succeed in preemptively materializing a superset of problems that will appear later; and (c) that we can actually solve, in the earlier laboratory that is out-of-distribution for us relative to the real problems, those alignment problems that would be lethal if we mishandle them when they materialize later.  Anticipating all of the really dangerous ones, and then successfully materializing them, in the correct form for early solutions to generalize over to later solutions, sounds possibly kinda hard.\n14.  Some problems, like 'the AGI has an option that (looks to it like) it could successfully kill and replace the programmers to fully optimize over its environment', seem like their natural order of appearance could be that they first appear only in fully dangerous domains.  Really actually having a clear option to brain-level-persuade the operators or escape onto the Internet, build nanotech, and destroy all of humanity – in a way where you're fully clear that you know the relevant facts, and estimate only a not-worth-it low probability of learning something which changes your preferred strategy if you bide your time another month while further growing in capability – is an option that first gets evaluated for real at the point where an AGI fully expects it can defeat its creators.  We can try to manifest an echo of that apparent scenario in earlier toy domains.  Trying to train by gradient descent against that behavior, in that toy domain, is something I'd expect to produce not-particularly-coherent local patches to thought processes, which would break with near-certainty inside a superintelligence generalizing far outside the training distribution and thinking very different thoughts.  Also, programmers and operators themselves, who are used to operating in not-fully-dangerous domains, are operating out-of-distribution when they enter into dangerous ones; our methodologies may at that time break.\n15.  Fast capability gains seem likely, and may break lots of previous alignment-required invariants simultaneously.  Given otherwise insufficient foresight by the operators, I'd expect a lot of those problems to appear approximately simultaneously after a sharp capability gain.  See, again, the case of human intelligence.  We didn't break alignment with the 'inclusive reproductive fitness' outer loss function, immediately after the introduction of farming – something like 40,000 years into a 50,000 year Cro-Magnon takeoff, as was itself running very quickly relative to the outer optimization loop of natural selection.  Instead, we got a lot of technology more advanced than was in the ancestral environment, including contraception, in one very fast burst relative to the speed of the outer optimization loop, late in the general intelligence game.  We started reflecting on ourselves a lot more, started being programmed a lot more by cultural evolution, and lots and lots of assumptions underlying our alignment in the ancestral training environment broke simultaneously.  (People will perhaps rationalize reasons why this abstract description doesn't carry over to gradient descent; eg, \"gradient descent has less of an information bottleneck\".  
My model of this variety of reader has an inside view, which they will label an outside view, that assigns great relevance to some other data points that are not observed cases of an outer optimization loop producing an inner general intelligence, and assigns little importance to our one data point actually featuring the phenomenon in question.  When an outer optimization loop actually produced general intelligence, it broke alignment after it turned general, and did so relatively late in the game of that general intelligence accumulating capability and knowledge, almost immediately before it turned 'lethally' dangerous relative to the outer optimization loop of natural selection.  Consider skepticism, if someone is ignoring this one warning, especially if they are not presenting equally lethal and dangerous things that they say will go wrong instead.)\n \nSection B.2:  Central difficulties of outer and inner alignment.\n16.  Even if you train really hard on an exact loss function, that doesn't thereby create an explicit internal representation of the loss function inside an AI that then continues to pursue that exact loss function in distribution-shifted environments.  Humans don't explicitly pursue inclusive genetic fitness; outer optimization even on a very exact, very simple loss function doesn't produce inner optimization in that direction.  This happens in practice in real life, it is what happened in the only case we know about, and it seems to me that there are deep theoretical reasons to expect it to happen again: the first semi-outer-aligned solutions found, in the search ordering of a real-world bounded optimization process, are not inner-aligned solutions.  This is sufficient on its own, even ignoring many other items on this list, to trash entire categories of naive alignment proposals which assume that if you optimize a bunch on a loss function calculated using some simple concept, you get perfect inner alignment on that concept.\n17.  More generally, a superproblem of 'outer optimization doesn't produce inner alignment' is that on the current optimization paradigm there is no general idea of how to get particular inner properties into a system, or verify that they're there, rather than just observable outer ones you can run a loss function over.  This is a problem when you're trying to generalize out of the original training distribution, because, eg, the outer behaviors you see could have been produced by an inner-misaligned system that is deliberately producing outer behaviors that will fool you.  We don't know how to get any bits of information into the inner system rather than the outer behaviors, in any systematic or general way, on the current optimization paradigm.\n18.  There's no reliable Cartesian-sensory ground truth (reliable loss-function-calculator) about whether an output is 'aligned', because some outputs destroy (or fool) the human operators and produce a different environmental causal chain behind the externally-registered loss function.  That is, if you show an agent a reward signal that's currently being generated by humans, the signal is not in general a reliable perfect ground truth about how aligned an action was, because another way of producing a high reward signal is to deceive, corrupt, or replace the human operators with a different causal system which generates that reward signal.  
When you show an agent an environmental reward signal, you are not showing it something that is a reliable ground truth about whether the system did the thing you wanted it to do; even if it ends up perfectly inner-aligned on that reward signal, or learning some concept that exactly corresponds to 'wanting states of the environment which result in a high reward signal being sent', an AGI strongly optimizing on that signal will kill you, because the sensory reward signal was not a ground truth about alignment (as seen by the operators).\n19.  More generally, there is no known way to use the paradigm of loss functions, sensory inputs, and/or reward inputs, to optimize anything within a cognitive system to point at particular things within the environment – to point to latent events and objects and properties in the environment, rather than relatively shallow functions of the sense data and reward.  This isn't to say that nothing in the system's goal (whatever goal accidentally ends up being inner-optimized over) could ever point to anything in the environment by accident.  Humans ended up pointing to their environments at least partially, though we've got lots of internally oriented motivational pointers as well.  But insofar as the current paradigm works at all, the on-paper design properties say that it only works for aligning on known direct functions of sense data and reward functions.  All of these kill you if optimized-over by a sufficiently powerful intelligence, because they imply strategies like 'kill everyone in the world using nanotech to strike before they know they're in a battle, and have control of your reward button forever after'.  It just isn't true that we know a function on webcam input such that every world with that webcam showing the right things is safe for us creatures outside the webcam.  This general problem is a fact about the territory, not the map; it's a fact about the actual environment, not the particular optimizer, that lethal-to-us possibilities exist in some possible environments underlying every given sense input.\n20.  Human operators are fallible, breakable, and manipulable.  Human raters make systematic errors – regular, compactly describable, predictable errors.  To faithfully learn a function from 'human feedback' is to learn (from our external standpoint) an unfaithful description of human preferences, with errors that are not random (from the outside standpoint of what we'd hoped to transfer).  If you perfectly learn and perfectly maximize the referent of rewards assigned by human operators, that kills them.  It's a fact about the territory, not the map – about the environment, not the optimizer – that the best predictive explanation for human answers is one that predicts the systematic errors in our responses, and therefore is a psychological concept that correctly predicts the higher scores that would be assigned to human-error-producing cases.\n21.  There's something like a single answer, or a single bucket of answers, for questions like 'What's the environment really like?' and 'How do I figure out the environment?' and 'Which of my possible outputs interact with reality in a way that causes reality to have certain properties?', where a simple outer optimization loop will straightforwardly shove optimizees into this bucket.  When you have a wrong belief, reality hits back at your wrong predictions.  
When you have a broken belief-updater, reality hits back at your broken predictive mechanism via predictive losses, and a gradient descent update fixes the problem in a simple way that can easily cohere with all the other predictive stuff.  In contrast, when it comes to a choice of utility function, there are unbounded degrees of freedom and multiple reflectively coherent fixpoints.  Reality doesn't 'hit back' against things that are locally aligned with the loss function on a particular range of test cases, but globally misaligned on a wider range of test cases.  This is the very abstract story about why hominids, once they finally started to generalize, generalized their capabilities to Moon landings, but their inner optimization no longer adhered very well to the outer-optimization goal of 'relative inclusive reproductive fitness' – even though they were in their ancestral environment optimized very strictly around this one thing and nothing else.  This abstract dynamic is something you'd expect to be true about outer optimization loops on the order of both 'natural selection' and 'gradient descent'.  The central result:  Capabilities generalize further than alignment once capabilities start to generalize far.\n22.  There's a relatively simple core structure that explains why complicated cognitive machines work; which is why such a thing as general intelligence exists and not just a lot of unrelated special-purpose solutions; which is why capabilities generalize after outer optimization infuses them into something that has been optimized enough to become a powerful inner optimizer.  The fact that this core structure is simple and relates generically to low-entropy high-structure environments is why humans can walk on the Moon.  There is no analogous truth about there being a simple core of alignment, especially not one that is even easier for gradient descent to find than it would have been for natural selection to just find 'want inclusive reproductive fitness' as a well-generalizing solution within ancestral humans.  Therefore, capabilities generalize further out-of-distribution than alignment, once they start to generalize at all.\n23.  Corrigibility is anti-natural to consequentialist reasoning; \"you can't bring the coffee if you're dead\" for almost every kind of coffee.  We (MIRI) tried and failed to find a coherent formula for an agent that would let itself be shut down (without that agent actively trying to get shut down).  Furthermore, many anti-corrigible lines of reasoning like this may only first appear at high levels of intelligence.\n24.  There are two fundamentally different approaches you can potentially take to alignment, which are unsolvable for two different sets of reasons; therefore, by becoming confused and ambiguating between the two approaches, you can confuse yourself about whether alignment is necessarily difficult.  The first approach is to build a CEV-style Sovereign which wants exactly what we extrapolated-want and is therefore safe to let optimize all the future galaxies without it accepting any human input trying to stop it.  The second course is to build corrigible AGI which doesn't want exactly what we want, and yet somehow fails to kill us and take over the galaxies despite that being a convergent incentive there.\n\nThe first thing generally, or CEV specifically, is unworkable because the complexity of what needs to be aligned or meta-aligned for our Real Actual Values is far out of reach for our FIRST TRY at AGI.  
Yes I mean specifically that the dataset, meta-learning algorithm, and what needs to be learned, are far out of reach for our first try.  It's not just non-hand-codable; it is unteachable on-the-first-try because the thing you are trying to teach is too weird and complicated.\nThe second thing looks unworkable (less so than CEV, but still lethally unworkable) because corrigibility runs actively counter to instrumentally convergent behaviors within a core of general intelligence (the capability that generalizes far out of its original distribution).  You're not trying to make it have an opinion on something the core was previously neutral on.  You're trying to take a system implicitly trained on lots of arithmetic problems until its machinery started to reflect the common coherent core of arithmetic, and get it to say that as a special case 222 + 222 = 555.  You can maybe train something to do this in a particular training distribution, but it's incredibly likely to break when you present it with new math problems far outside that training distribution, on a system which successfully generalizes capabilities that far at all.\n\n \nSection B.3:  Central difficulties of sufficiently good and useful transparency / interpretability.\n25.  We've got no idea what's actually going on inside the giant inscrutable matrices and tensors of floating-point numbers.  Drawing interesting graphs of where a transformer layer is focusing attention doesn't help if the question that needs answering is \"So was it planning how to kill us or not?\"\n26.  Even if we did know what was going on inside the giant inscrutable matrices while the AGI was still too weak to kill us, this would just result in us dying with more dignity, if DeepMind refused to run that system and let Facebook AI Research destroy the world two years later.  Knowing that a medium-strength system of inscrutable matrices is planning to kill us, does not thereby let us build a high-strength system of inscrutable matrices that isn't planning to kill us.\n27.  When you explicitly optimize against a detector of unaligned thoughts, you're partially optimizing for more aligned thoughts, and partially optimizing for unaligned thoughts that are harder to detect.  Optimizing against an interpreted thought optimizes against interpretability.\n28.  The AGI is smarter than us in whatever domain we're trying to operate it inside, so we cannot mentally check all the possibilities it examines, and we cannot see all the consequences of its outputs using our own mental talent.  A powerful AI searches parts of the option space we don't, and we can't foresee all its options.\n29.  The outputs of an AGI go through a huge, not-fully-known-to-us domain (the real world) before they have their real consequences.  Human beings cannot inspect an AGI's output to determine whether the consequences will be good.\n30.  Any pivotal act that is not something we can go do right now, will take advantage of the AGI figuring out things about the world we don't know so that it can make plans we wouldn't be able to make ourselves.  It knows, at the least, the fact we didn't previously know, that some action sequence results in the world we want.  Then humans will not be competent to use their own knowledge of the world to figure out all the results of that action sequence.  
An AI whose action sequence you can fully understand all the effects of, before it executes, is much weaker than humans in that domain; you couldn't make the same guarantee about an unaligned human as smart as yourself and trying to fool you.  There is no pivotal output of an AGI that is humanly checkable and can be used to safely save the world but only after checking it; this is another form of pivotal weak act which does not exist.\n31.  A strategically aware intelligence can choose its visible outputs to have the consequence of deceiving you, including about such matters as whether the intelligence has acquired strategic awareness; you can't rely on behavioral inspection to determine facts about an AI which that AI might want to deceive you about.  (Including how smart it is, or whether it's acquired strategic awareness.)\n32.  Human thought partially exposes only a partially scrutable outer surface layer.  Words only trace our real thoughts.  Words are not an AGI-complete data representation in its native style.  The underparts of human thought are not exposed for direct imitation learning and can't be put in any dataset.  This makes it hard and probably impossible to train a powerful system entirely on imitation of human words or other human-legible contents, which are only impoverished subsystems of human thoughts; unless that system is powerful enough to contain inner intelligences figuring out the humans, and at that point it is no longer really working as imitative human thought.\n33.  The AI does not think like you do, the AI doesn't have thoughts built up from the same concepts you use, it is utterly alien on a staggering scale.  Nobody knows what the hell GPT-3 is thinking, not only because the matrices are opaque, but because the stuff within that opaque container is, very likely, incredibly alien – nothing that would translate well into comprehensible human thinking, even if we could see past the giant wall of floating-point numbers to what lay behind.\n \nSection B.4:  Miscellaneous unworkable schemes.\n34.  Coordination schemes between superintelligences are not things that humans can participate in (eg because humans can't reason reliably about the code of superintelligences); a \"multipolar\" system of 20 superintelligences with different utility functions, plus humanity, has a natural and obvious equilibrium which looks like \"the 20 superintelligences cooperate with each other but not with humanity\".\n35.  Schemes for playing \"different\" AIs off against each other stop working if those AIs advance to the point of being able to coordinate via reasoning about (probability distributions over) each others' code.  Any system of sufficiently intelligent agents can probably behave as a single agent, even if you imagine you're playing them against each other.  Eg, if you set an AGI that is secretly a paperclip maximizer, to check the output of a nanosystems designer that is secretly a staples maximizer, then even if the nanosystems designer is not able to deduce what the paperclip maximizer really wants (namely paperclips), it could still logically commit to share half the universe with any agent checking its designs if those designs were allowed through, if the checker-agent can verify the suggester-system's logical commitment and hence logically depend on it (which excludes human-level intelligences).  
Or, if you prefer simplified catastrophes without any logical decision theory, the suggester could bury in its nanosystem design the code for a new superintelligence that will visibly (to a superhuman checker) divide the universe between the nanosystem designer and the design-checker.\n36.  What makes an air conditioner 'magic' from the perspective of say the thirteenth century, is that even if you correctly show them the design of the air conditioner in advance, they won't be able to understand from seeing that design why the air comes out cold; the design is exploiting regularities of the environment, rules of the world, laws of physics, that they don't know about.  The domain of human thought and human brains is very poorly understood by us, and exhibits phenomena like optical illusions, hypnosis, psychosis, mania, or simple afterimages produced by strong stimuli in one place leaving neural effects in another place.  Maybe a superintelligence couldn't defeat a human in a very simple realm like logical tic-tac-toe; if you're fighting it in an incredibly complicated domain you understand poorly, like human minds, you should expect to be defeated by 'magic' in the sense that even if you saw its strategy you would not understand why that strategy worked.  AI-boxing can only work on relatively weak AGIs; the human operators are not secure systems.\n \nSection C:\nOkay, those are some significant problems, but lots of progress is being made on solving them, right?  There's a whole field calling itself \"AI Safety\" and many major organizations are expressing Very Grave Concern about how \"safe\" and \"ethical\" they are?\n \n37.  There's a pattern that's played out quite often, over all the times the Earth has spun around the Sun, in which some bright-eyed young scientist, young engineer, young entrepreneur, proceeds in full bright-eyed optimism to challenge some problem that turns out to be really quite difficult.  Very often the cynical old veterans of the field try to warn them about this, and the bright-eyed youngsters don't listen, because, like, who wants to hear about all that stuff, they want to go solve the problem!  Then this person gets beaten about the head with a slipper by reality as they find out that their brilliant speculative theory is wrong, it's actually really hard to build the thing because it keeps breaking, and society isn't as eager to adopt their clever innovation as they might've hoped, in a process which eventually produces a new cynical old veteran.  Which, if not literally optimal, is I suppose a nice life cycle to nod along to in a nature-show sort of way.  Sometimes you do something for the first time and there are no cynical old veterans to warn anyone and people can be really optimistic about how it will go; eg the initial Dartmouth Summer Research Project on Artificial Intelligence in 1956:  \"An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.\"  This is less of a viable survival plan for your planet if the first major failure of the bright-eyed youngsters kills literally everyone before they can predictably get beaten about the head with the news that there were all sorts of unforeseen difficulties and reasons why things were hard.  
You don't get any cynical old veterans, in this case, because everybody on Earth is dead.  Once you start to suspect you're in that situation, you have to do the Bayesian thing and update now to the view you will predictably update to later: realize you're in a situation of being that bright-eyed person who is going to encounter Unexpected Difficulties later and end up a cynical old veteran – or would be, except for the part where you'll be dead along with everyone else.  And become that cynical old veteran right away, before reality whaps you upside the head in the form of everybody dying and you not getting to learn.  Everyone else seems to feel that, so long as reality hasn't whapped them upside the head yet and smacked them down with the actual difficulties, they're free to go on living out the standard life-cycle and play out their role in the script and go on being bright-eyed youngsters; there's no cynical old veterans to warn them otherwise, after all, and there's no proof that everything won't go beautifully easy and fine, given their bright-eyed total ignorance of what those later difficulties could be.\n38.  It does not appear to me that the field of 'AI safety' is currently being remotely productive on tackling its enormous lethal problems.  These problems are in fact out of reach; the contemporary field of AI safety has been selected to contain people who go to work in that field anyways.  Almost all of them are there to tackle problems on which they can appear to succeed and publish a paper claiming success; if they can do that and get funded, why would they embark on a much more unpleasant project of trying something harder that they'll fail at, just so the human species can die with marginally more dignity?  This field is not making real progress and does not have a recognition function to distinguish real progress if it took place.  You could pump a billion dollars into it and it would produce mostly noise to drown out what little progress was being made elsewhere.\n39.  I figured this stuff out using the null string as input, and frankly, I have a hard time myself feeling hopeful about getting real alignment work out of somebody who previously sat around waiting for somebody else to input a persuasive argument into them.  This ability to \"notice lethal difficulties without Eliezer Yudkowsky arguing you into noticing them\" currently is an opaque piece of cognitive machinery to me, I do not know how to train it into others.  It probably relates to 'security mindset', and a mental motion where you refuse to play out scripts, and being able to operate in a field that's in a state of chaos.\n40.  \"Geniuses\" with nice legible accomplishments in fields with tight feedback loops where it's easy to determine which results are good or bad right away, and so validate that this person is a genius, are (a) people who might not be able to do equally great work away from tight feedback loops, (b) people who chose a field where their genius would be nicely legible even if that maybe wasn't the place where humanity most needed a genius, and (c) probably don't have the mysterious gears simply because they're rare.  You cannot just pay $5 million apiece to a bunch of legible geniuses from other fields and expect to get great alignment work out of them.  
They probably do not know where the real difficulties are, they probably do not understand what needs to be done, they cannot tell the difference between good and bad work, and the funders also can't tell without me standing over their shoulders evaluating everything, which I do not have the physical stamina to do.  I concede that real high-powered talents, especially if they're still in their 20s, genuinely interested, and have done their reading, are people who, yeah, fine, have higher probabilities of making core contributions than a random bloke off the street. But I'd have more hope – not significant hope, but more hope – in separating the concerns of (a) credibly promising to pay big money retrospectively for good work to anyone who produces it, and (b) venturing prospective payments to somebody who is predicted to maybe produce good work later.\n41.  Reading this document cannot make somebody a core alignment researcher.  That requires, not the ability to read this document and nod along with it, but the ability to spontaneously write it from scratch without anybody else prompting you; that is what makes somebody a peer of its author.  It's guaranteed that some of my analysis is mistaken, though not necessarily in a hopeful direction.  The ability to do new basic work noticing and fixing those flaws is the same ability as the ability to write this document before I published it, which nobody apparently did, despite my having had other things to do than write this up for the last five years or so.  Some of that silence may, possibly, optimistically, be due to nobody else in this field having the ability to write things comprehensibly – such that somebody out there had the knowledge to write all of this themselves, if they could only have written it up, but they couldn't write, so didn't try.  I'm not particularly hopeful of this turning out to be true in real life, but I suppose it's one possible place for a \"positive model violation\" (miracle).  The fact that, twenty-one years into my entering this death game, seven years into other EAs noticing the death game, and two years into even normies starting to notice the death game, it is still Eliezer Yudkowsky writing up this list, says that humanity still has only one gamepiece that can do that.  I knew I did not actually have the physical stamina to be a star researcher, I tried really really hard to replace myself before my health deteriorated further, and yet here I am writing this.  That's not what surviving worlds look like.\n42.  There's no plan.  Surviving worlds, by this point, and in fact several decades earlier, have a plan for how to survive.  It is a written plan.  The plan is not secret.  In this non-surviving world, there are no candidate plans that do not immediately fall to Eliezer instantly pointing at the giant visible gaping holes in that plan.  Or if you don't know who Eliezer is, you don't even realize you need a plan, because, like, how would a human being possibly realize that without Eliezer yelling at them?  It's not like people will yell at themselves about prospective alignment difficulties, they don't have an internal voice of caution.  So most organizations don't have plans, because I haven't taken the time to personally yell at them.  'Maybe we should have a plan' is deeper alignment mindset than they possess without me standing constantly on their shoulder as their personal angel pleading them into… continued noncompliance, in fact.  
Relatively few are aware even that they should, to look better, produce a pretend plan that can fool EAs too 'modest' to trust their own judgments about seemingly gaping holes in what serious-looking people apparently believe.\n43.  This situation you see when you look around you is not what a surviving world looks like.  The worlds of humanity that survive have plans.  They are not leaving to one tired guy with health problems the entire responsibility of pointing out real and lethal problems proactively.  Key people are taking internal and real responsibility for finding flaws in their own plans, instead of considering it their job to propose solutions and somebody else's job to prove those solutions wrong.  That world started trying to solve their important lethal problems earlier than this.  Half the people going into string theory shifted into AI alignment instead and made real progress there.  When people suggest a planetarily-lethal problem that might materialize later – there's a lot of people suggesting those, in the worlds destined to live, and they don't have a special status in the field, it's just what normal geniuses there do – they're met with either solution plans or a reason why that shouldn't happen, not an uncomfortable shrug and 'How can you be sure that will happen' / 'There's no way you could be sure of that now, we'll have to wait on experimental evidence.'\nA lot of those better worlds will die anyways.  It's a genuinely difficult problem, to solve something like that on your first try.  But they'll die with more dignity than this.\n\nThe post AGI Ruin: A List of Lethalities appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "AGI Ruin: A List of Lethalities", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=1", "id": "01d4f6b6bed142d23f64def98fc220f3"} {"text": "Six Dimensions of Operational Adequacy in AGI Projects\n\n\n\n\nEditor's note:  The following is a lightly edited copy of a document written by Eliezer Yudkowsky in November 2017. Since this is a snapshot of Eliezer's thinking at a specific time, we've sprinkled reminders throughout that this is from 2017.\nA background note:\nIt's often the case that people are slow to abandon obsolete playbooks in response to a novel challenge. And AGI is certainly a very novel challenge.\nItalian general Luigi Cadorna offers a memorable historical example. In the Isonzo Offensive of World War I, Cadorna lost hundreds of thousands of men in futile frontal assaults against enemy trenches defended by barbed wire and machine guns.  As morale plummeted and desertions became epidemic, Cadorna began executing his own soldiers en masse, in an attempt to cure the rest of their \"cowardice.\" The offensive continued for 2.5 years.\nCadorna made many mistakes, but foremost among them was his refusal to recognize that this war was fundamentally unlike those that had come before.  Modern weaponry had forced a paradigm shift, and Cadorna's instincts were not merely miscalibrated—they were systematically broken.  
No number of small, incremental updates within his obsolete framework would be sufficient to meet the new challenge.\nOther examples of this type of mistake include the initial response of the record industry to iTunes and streaming; or, more seriously, the response of most Western governments to COVID-19.\n \n\n \nAs usual, the real challenge of reference class forecasting is figuring out which reference class the thing you're trying to model belongs to.\nFor most problems, rethinking your approach from the ground up is wasteful and unnecessary, because most problems have a similar causal structure to a large number of past cases. When the problem isn't commensurate with existing strategies, as in the case of AGI, you need a new playbook.\n\n\n \n\n \nI've sometimes been known to complain, or in a polite way scream in utter terror, that \"there is no good guy group in AGI\", i.e., if a researcher on this Earth currently wishes to contribute to the common good, there are literally zero projects they can join and no project close to being joinable.  In its present version, this document is an informal response to an AI researcher who asked me to list out the qualities of such a \"good project\".\nIn summary, a \"good project\" needs:\n\nTrustworthy command:  A trustworthy chain of command with respect to both legal and pragmatic control of the intellectual property (IP) of such a project; a running AGI being included as \"IP\" in this sense.\nResearch closure:  The organizational ability to close and/or silo IP to within a trustworthy section and prevent its release by sheer default.\nStrong opsec:  Operational security adequate to prevent the proliferation of code (or other information sufficient to recreate code within e.g. 1 year) due to e.g. Russian intelligence agencies grabbing the code.\nCommon good commitment:  The project's command and its people must have a credible commitment to both short-term and long-term goodness.  Short-term goodness comprises the immediate welfare of present-day Earth; long-term goodness is the achievement of transhumanist astronomical goods.\nAlignment mindset:  Somebody on the project needs deep enough security mindset plus understanding of AI cognition that they can originate new, deep measures to ensure AGI alignment; and they must be in a position of technical control or otherwise have effectively unlimited political capital.  Everybody on the project needs to understand and expect that aligning an AGI will be terrifically difficult and terribly dangerous.\nRequisite resource levels:  The project must have adequate resources to compete at the frontier of AGI development, including whatever mix of computational resources, intellectual labor, and closed insights are required to produce a 1+ year lead over less cautious competing projects.\n\nI was asked what would constitute \"minimal, adequate, and good\" performance on each of these dimensions.  I tend to divide things sharply into \"not adequate\" and \"adequate\" but will try to answer in the spirit of the question nonetheless.\n\n \nTrustworthy command\nToken:  Not having pragmatic and legal power in the hands of people who are opposed to the very idea of trying to align AGI, or who want an AGI in every household, or who are otherwise allergic to the easy parts of AGI strategy.\nE.g.: Larry Page begins with the correct view that cosmopolitan values are good, speciesism is bad, it would be wrong to mistreat sentient beings just because they're implemented in silicon instead of carbon, and so on. 
But he then proceeds to reject the idea that goals and capabilities are orthogonal, that instrumental strategies are convergent, and that value is complex and fragile. As a consequence, he expects AGI to automatically be friendly, and is liable to object to any effort to align AI as an attempt to keep AI \"chained up\".\nOr, e.g.: As of December 2015, Elon Musk not only wasn't on board with closure, but apparently wanted to open-source superhumanly capable AI.\nElon Musk is not in his own person a majority of OpenAI's Board, but if he can pragmatically sway a majority of that Board then this measure is not being fulfilled even to a token degree.\n(Update: Elon Musk stepped down from the OpenAI Board in February 2018.)\nImproving:  There's a legal contract which says that the Board doesn't control the IP and that the alignment-aware research silo does.\nAdequate:  The entire command structure including all members of the finally governing Board are fully aware of the difficulty and danger of alignment.  The Board will not object if the technical leadership have disk-erasure measures ready in case the Board suddenly decides to try to open-source the AI anyway.\nExcellent:  Somehow no local authority poses a risk of stepping in and undoing any safety measures, etc.  I have no idea what incremental steps could be taken in this direction that would not make things worse.  If e.g. the government of Iceland suddenly understood how serious things had gotten and granted sanction and security to a project, that would fit this description, but I think that trying to arrange anything like this would probably make things worse globally because of the mindset it promoted.\n \nClosure\nToken:  It's generally understood organizationally that some people want to keep code, architecture, and some ideas a 'secret' from outsiders, and everyone on the project is okay with this even if they disagree.  In principle people aren't being pressed to publish their interesting discoveries if they are obviously capabilities-laden; in practice, somebody always says \"but someone else will probably publish a similar idea 6 months later\" and acts suspicious of the hubris involved in thinking otherwise, but it remains possible to get away with not publishing at moderate personal cost.\nImproving:  A subset of people on the project understand why some code, architecture, lessons learned, et cetera must be kept from reaching the general ML community if success is to have a probability significantly greater than zero (because tradeoffs between alignment and capabilities make the challenge unwinnable if there isn't a project with a reasonable-length lead time).  These people have formed a closed silo within the project, with the sanction and acceptance of the project leadership.  It's socially okay to be conservative about what counts as potentially capabilities-laden thinking, and it's understood that worrying about this is not a boastful act of pride or a trick to get out of needing to write papers.\nAdequate:  Everyone on the project understands and agrees with closure.  Information is siloed whenever not everyone on the project needs to know it.\n \n\n\nReminder: This is a 2017 document.\n\n\n \n\nOpsec\nToken:  Random people are not allowed to wander through the building.\nImproving:  Your little brother cannot steal the IP.  Stuff is encrypted.  Siloed project members sign NDAs.\nAdequate:  Major governments cannot silently and unnoticeably steal the IP without a nonroutine effort.  
All project members undergo government-security-clearance-style screening.  AGI code is not running on AWS, but in an airgapped server room.  There are cleared security guards in the server room.\nExcellent:  Military-grade or national-security-grade security.  (It's hard to see how attempts to get this could avoid being counterproductive, considering the difficulty of obtaining trustworthy command and common good commitment with respect to any entity that can deploy such force, and the effect that trying would have on general mindsets.)\n \nCommon good commitment\nToken:  Project members and the chain of command are not openly talking about how dictatorship is great so long as they get to be the dictator.  The project is not directly answerable to Trump or Putin.  They say vague handwavy things about how of course one ought to promote democracy and apple pie (applause) and that everyone ought to get some share of the pot o' gold (applause).\nImproving:  Project members and their chain of command have come out explicitly in favor of being nice to people and eventually building a nice intergalactic civilization.  They would release a cancer cure if they had it, their state of deployment permitting, and they don't seem likely to oppose incremental steps toward a postbiological future and the eventual realization of most of the real value at stake.\nAdequate:  Project members and their chain of command have an explicit commitment to something like coherent extrapolated volition as a long-run goal, AGI tech permitting, and otherwise the careful preservation of values and sentient rights through any pathway of intelligence enhancement.  In the short run, they would not do everything that seems to them like a good idea, and would first prioritize not destroying humanity or wounding its spirit with their own hands.  (E.g., if Google or Facebook consistently thought like this, they would have become concerned a lot earlier about social media degrading cognition.)  Real actual moral humility with policy consequences is a thing.\n \nAlignment mindset\nToken:  At least some people in command sort of vaguely understand that AIs don't just automatically do whatever the alpha male in charge of the organization wants to have happen.  They've hired some people who are at least pretending to work on that in a technical way, not just \"ethicists\" to talk about trolley problems and which monkeys should get the tasty banana.\nImproving:  The technical work output by the \"safety\" group is neither obvious nor wrong.  People in command have ordinary paranoia about AIs.  They expect alignment to be somewhat difficult and to take some extra effort.  They understand that not everything they might like to do, with the first AGI ever built, is equally safe to attempt.\nAdequate:  The project has realized that building an AGI is mostly about aligning it.  Someone with full security mindset and deep understanding of AGI cognition as cognition has proven themselves able to originate new deep alignment measures, and is acting as technical lead with effectively unlimited political capital within the organization to make sure the job actually gets done.  Everyone expects alignment to be terrifically hard and terribly dangerous and full of invisible bullets whose shadow you have to see before the bullet comes close enough to hit you.  They understand that alignment severely constrains architecture and that capability often trades off against transparency.  
The organization is targeting the minimal AGI doing the least dangerous cognitive work that is required to prevent the next AGI project from destroying the world.  The alignment assumptions have been reduced into non-goal-valent statements, have been clearly written down, and are being monitored for their actual truth.\nAlignment mindset is fundamentally difficult to obtain for a project because Graham's Design Paradox applies.  People with only ordinary paranoia may not be able to distinguish the next step up in depth of cognition, and happy innocents cannot distinguish useful paranoia from suits making empty statements about risk and safety.  They also tend not to realize what they're missing.  This means that there is a horrifically strong default that when you persuade one more research-rich person or organization or government to start a new project, that project will have inadequate alignment mindset unless something extra-ordinary happens.  I'll be frank and say relative to the present world I think this essentially has to go through trusting me or Nate Soares to actually work, although see below about Paul Christiano.  The lack of clear person-independent instructions for how somebody low in this dimension can improve along this dimension is why the difficulty of this dimension is the real killer.\nIf you insisted on trying this the impossible way, I'd advise that you start by talking to a brilliant computer security researcher rather than a brilliant machine learning researcher.\n \nResources\nToken:  The project has a combination of funding, good researchers, and computing power which makes it credible as a beacon to which interested philanthropists can add more funding and other good researchers interested in aligned AGI can join.  E.g., OpenAI would qualify as this if it were adequate on the other 5 dimensions.\nImproving:  The project has size and quality researchers on the level of say Facebook's AI lab, and can credibly compete among the almost-but-not-quite biggest players.  When they focus their attention on an unusual goal, they can get it done 1+ years ahead of the general field so long as Demis doesn't decide to do it first.  I expect e.g. the NSA would have this level of \"resources\" if they started playing now but didn't grow any further.\nAdequate:  The project can get things done with a 2-year lead time on anyone else, and it's not obvious that competitors could catch up even if they focused attention there.  DeepMind has a great mass of superior people and unshared tools, and is the obvious candidate for achieving adequacy on this dimension; though they would still need adequacy on other dimensions, and more closure in order to conserve and build up advantages.  As I understand it, an adequate resource advantage is explicitly what Demis was trying to achieve, before Elon blew it up, started an openness fad and an arms race, and probably got us all killed.  Anyone else trying to be adequate on this dimension would need to pull ahead of DeepMind, merge with DeepMind, or talk Demis into closing more research and putting less effort into unalignable AGI paths.\nExcellent:  There's a single major project which a substantial section of the research community understands to be The Good Project that good people join, with competition to it deemed unwise and unbeneficial to the public good.  This Good Project is at least adequate along all the other dimensions.  Its major competitors lack either equivalent funding or equivalent talent and insight.  
Relative to the present world it would be extremely difficult to make any project like this exist with adequately trustworthy command and alignment mindset, and failed attempts to make it exist run the risk of creating still worse competitors developing unaligned AGI.\nUnrealistic:  There is a single global Manhattan Project which is somehow not answerable to non-common-good command such as Trump or Putin or the United Nations Security Council.  It has orders of magnitude more computing power and smart-researcher-labor than anyone else.  Something keeps other AGI projects from arising and trying to race with the giant project.  The project can freely choose transparency in all transparency-capability tradeoffs and take an extra 10+ years to ensure alignment.  The project is at least adequate along all other dimensions.  This is how our distant, surviving cousins are doing it in their Everett branches that diverged centuries earlier towards more competent civilizational equilibria.  You cannot possibly cause such a project to exist with adequately trustworthy command, alignment mindset, and common-good commitment, and you should therefore not try to make it exist, first because you will simply create a still more dire competitor developing unaligned AGI, and second because if such an AGI could be aligned it would be a hell of an s-risk given the probable command structure.  People who are slipping sideways in reality fantasize about being able to do this.\n\n \n\n\nReminder: This is a 2017 document.\n\n\n \nFurther Remarks\nA project with \"adequate\" closure and a project with \"improving\" closure will, if joined, aggregate into a project with \"improving\" (aka: inadequate) closure where the closed section is a silo within an open organization.  Similar remarks apply along other dimensions.  The aggregate of a project with NDAs, and a project with deeper employee screening, is a combined project with some unscreened people in the building and hence \"improving\" opsec.\n\"Adequacy\" on the dimensions of closure and opsec is based around my mainline-probability scenario where you unavoidably need to spend at least 1 year in a regime where the AGI is not yet alignable on a minimal act that ensures nobody else will destroy the world shortly thereafter, but during that year it's possible to remove a bunch of safeties from the code, shift transparency-capability tradeoffs to favor capability instead, ramp up to full throttle, and immediately destroy the world.\nDuring this time period, leakage of the code to the wider world automatically results in the world being turned into paperclips.  Leakage of the code to multiple major actors such as commercial espionage groups or state intelligence agencies seems to me to stand an extremely good chance of destroying the world because at least one such state actor's command will not reprise the alignment debate correctly and each of them will fear the others.\nI would also expect that, if key ideas and architectural lessons-learned were to leak from an insufficiently closed project that would otherwise have actually developed alignable AGI, it would be possible to use 10% as much labor to implement a non-alignable world-destroying AGI in a shorter timeframe.  
The project must be closed tightly or everything ends up as paperclips.\n\"Adequacy\" on common good commitment is based on my model wherein the first task-directed AGI continues to operate in a regime far below that of a real superintelligence, where many tradeoffs have been made for transparency over capability and this greatly constrains self-modification.\nThis task-directed AGI is not able to defend against true superintelligent attack.  It cannot monitor other AGI projects in an unobtrusive way that grants those other AGI projects a lot of independent freedom to do task-AGI-ish things so long as they don't create an unrestricted superintelligence.  The designers of the first task-directed AGI are barely able to operate it in a regime where the AGI doesn't create an unaligned superintelligence inside itself or its environment.  Safe operation of the original AGI requires a continuing major effort at supervision.  The level of safety monitoring of other AGI projects required would be so great that, if the original operators deemed it good that more things be done with AGI powers, it would be far simpler and safer to do them as additional tasks running on the original task-directed AGI.  Therefore:  Everything to do with invocation of superhuman specialized general intelligence, like superhuman science and engineering, continues to have a single effective veto point.\nThis is also true in less extreme scenarios where AGI powers can proliferate, but must be very tightly monitored, because no aligned AGI can defend against an unconstrained superintelligence if one is deliberately or accidentally created by taking off too many safeties. Either way, there is a central veto authority that continues to actively monitor and has the power to prevent anyone else from doing anything potentially world-destroying with AGI.\nThis in turn means that any use of AGI powers along the lines of uploading humans, trying to do human intelligence enhancement, or building a cleaner and more stable AGI to run a CEV, would be subject to the explicit veto of the command structure operating the first task-directed AGI.  If this command structure does not favor something like CEV, or vetoes transhumanist outcomes from a transparent CEV, or doesn't allow intelligence enhancement, et cetera, then all future astronomical value can be permanently lost and even s-risks may apply.\nA universe in which 99.9% of the sapient beings have no civil rights because way back on Earth somebody decided or voted that emulations weren't real people, is a universe plausibly much worse than paperclips.  (I would see as self-defeating any argument from democratic legitimacy that ends with almost all sapient beings not being able to vote.)\nIf DeepMind closed to the silo level, put on adequate opsec, somehow gained alignment mindset within the silo, and allowed trustworthy command of that silo, then in my guesstimation it might be possible to save the Earth (we would start to leave the floor of the logistic success curve).\nOpenAI seems to me to be further behind than DeepMind along multiple dimensions.  OAI is doing significantly better \"safety\" research, but it is all still inapplicable to serious AGI, AFAIK, even if it's not fake / obvious.  I do not think that either OpenAI or DeepMind are out of the basement on the logistic success curve for the alignment-mindset dimension.  
It's not clear to me from where I sit that the miracle required to grant OpenAI a chance at alignment success is easier than the miracle required to grant DeepMind a chance at alignment success.  If Greg Brockman or other decisionmakers at OpenAI are not totally insensible, neither is Demis Hassabis.  Both OAI and DeepMind have significant metric distance to cross on Common Good Commitment; this dimension is relatively easier to max out, but it's not maxed out just by having commanders vaguely nodding along or publishing a mission statement about moral humility, nor by a fragile political balance with some morally humble commanders and some morally nonhumble ones.  If I had a ton of money and I wanted to get a serious contender for saving the Earth out of OpenAI, I'd probably start by taking however many OpenAI researchers could pass screening and refounding a separate organization out of them, then using that as the foundation for further recruiting.\nI have never seen anyone except Paul Christiano try what I would consider to be deep macro alignment work.  E.g. if you look at Paul's AGI scheme there is a global alignment story with assumptions that can be broken down, and the idea of exact human imitation is a deep one rather than a shallow defense–although I don't think the assumptions have been broken down far enough; but nobody else knows they even ought to be trying to do anything like that.  I also think Paul's AGI scheme is orders-of-magnitude too costly and has chicken-and-egg alignment problems.  But I wouldn't totally rule out a project with Paul in technical command, because I would hold out hope that Paul could follow along with someone else's deep security analysis and understand it in-paradigm even if it wasn't his own paradigm; that Paul would suggest useful improvements and hold the global macro picture to a standard of completeness; and that Paul would take seriously how bad it would be to violate an alignment assumption even if it wasn't an assumption within his native paradigm.  Nobody else except myself and Paul is currently in the arena of comparison.  If we were both working on the same project it would still have unnervingly few people like that.  I think we should try to get more people like this from the pool of brilliant young computer security researchers, not just the pool of machine learning researchers.  Maybe that'll fail just as badly, but I want to see it tried.\nI doubt that it is possible to produce a written scheme for alignment, or any other kind of fixed advice, that can be handed off to a brilliant programmer with ordinary paranoia and allow them to actually succeed.  Some of the deep ideas are going to turn out to be wrong, inapplicable, or just plain missing.  
Somebody is going to have to notice the unfixable deep problems in advance of an actual blowup, and come up with new deep ideas and not just patches, as the project goes on.\n \n \n\n\nReminder: This is a 2017 document.\n\n\n \n\nThe post Six Dimensions of Operational Adequacy in AGI Projects appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Six Dimensions of Operational Adequacy in AGI Projects", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=1", "id": "b291bff4cdb08bbf8651812f7172ac2a"} {"text": "Shah and Yudkowsky on alignment failures\n\n\n \nThis is the final discussion log in the Late 2021 MIRI Conversations sequence, featuring Rohin Shah and Eliezer Yudkowsky, with additional comments from Rob Bensinger, Nate Soares, Richard Ngo, and Jaan Tallinn.\nThe discussion begins with summaries and comments on Richard and Eliezer's debate. Rohin's summary has since been revised and published in the Alignment Newsletter.\nAfter this log, we'll be concluding this sequence with an AMA, where we invite you to comment with questions about AI alignment, cognition, forecasting, etc. Eliezer, Richard, Paul Christiano, Nate, and Rohin will all be participating.\n \nColor key:\n\n\n\n\n Chat by Rohin and Eliezer \n Other chat \n Emails \n Follow-ups \n\n\n\n\n \n19. Follow-ups to the Ngo/Yudkowsky conversation\n \n19.1. Quotes from the public discussion\n \n\n[Bensinger][9:22]\nInteresting extracts from the public discussion of Ngo and Yudkowsky on AI capability gains:\nEliezer:\n\nI think some of your confusion may be that you're putting \"probability theory\" and \"Newtonian gravity\" into the same bucket.  You've been raised to believe that powerful theories ought to meet certain standards, like successful bold advance experimental predictions, such as Newtonian gravity made about the existence of Neptune (quite a while after the theory was first put forth, though).  \"Probability theory\" also sounds like a powerful theory, and the people around you believe it, so you think you ought to be able to produce a powerful advance prediction it made; but it is for some reason hard to come up with an example like the discovery of Neptune, so you cast about a bit and think of the central limit theorem.  That theorem is widely used and praised, so it's \"powerful\", and it wasn't invented before probability theory, so it's \"advance\", right?  So we can go on putting probability theory in the same bucket as Newtonian gravity?\nThey're actually just very different kinds of ideas, ontologically speaking, and the standards to which we hold them are properly different ones.  
It seems like the sort of thing that would take a subsequence I don't have time to write, expanding beyond the underlying obvious ontological difference between validities and empirical-truths, to cover the way in which \"How do we trust this, when\" differs between \"I have the following new empirical theory about the underlying model of gravity\" and \"I think that the logical notion of 'arithmetic' is a good tool to use to organize our current understanding of this little-observed phenomenon, and it appears within making the following empirical predictions…\"  But at least step one could be saying, \"Wait, do these two kinds of ideas actually go into the same bucket at all?\"\nIn particular it seems to me that you want properly to be asking \"How do we know this empirical thing ends up looking like it's close to the abstraction?\" and not \"Can you show me that this abstraction is a very powerful one?\"  Like, imagine that instead of asking Newton about planetary movements and how we know that the particular bits of calculus he used were empirically true about the planets in particular, you instead started asking Newton for proof that calculus is a very powerful piece of mathematics worthy to predict the planets themselves – but in a way where you wanted to see some highly valuable material object that calculus had produced, like earlier praiseworthy achievements in alchemy.  I think this would reflect confusion and a wrongly directed inquiry; you would have lost sight of the particular reasoning steps that made ontological sense, in the course of trying to figure out whether calculus was praiseworthy under the standards of praiseworthiness that you'd been previously raised to believe in as universal standards about all ideas.\n\nRichard:\n\nI agree that \"powerful\" is probably not the best term here, so I'll stop using it going forward (note, though, that I didn't use it in my previous comment, which I endorse more than my claims in the original debate).\nBut before I ask \"How do we know this empirical thing ends up looking like it's close to the abstraction?\", I need to ask \"Does the abstraction even make sense?\" Because you have the abstraction in your head, and I don't, and so whenever you tell me that X is a (non-advance) prediction of your theory of consequentialism, I end up in a pretty similar epistemic state as if George Soros tells me that X is a prediction of the theory of reflexivity, or if a complexity theorist tells me that X is a prediction of the theory of self-organisation. The problem in those two cases is less that the abstraction is a bad fit for this specific domain, and more that the abstraction is not sufficiently well-defined (outside very special cases) to even be the type of thing that can robustly make predictions.\nPerhaps another way of saying it is that they're not crisp/robust/coherent concepts (although I'm open to other terms, I don't think these ones are particularly good). And it would be useful for me to have evidence that the abstraction of consequentialism you're using is a crisper concept than Soros' theory of reflexivity or the theory of self-organisation. 
If you could explain the full abstraction to me, that'd be the most reliable way – but given the difficulties of doing so, my backup plan was to ask for impressive advance predictions, which are the type of evidence that I don't think Soros could come up with.\nI also think that, when you talk about me being raised to hold certain standards of praiseworthiness, you're still ascribing too much modesty epistemology to me. I mainly care about novel predictions or applications insofar as they help me distinguish crisp abstractions from evocative metaphors. To me it's the same type of rationality technique as asking people to make bets, to help distinguish post-hoc confabulations from actual predictions.\nOf course there's a social component to both, but that's not what I'm primarily interested in. And of course there's a strand of naive science-worship which thinks you have to follow the Rules in order to get anywhere, but I'd thank you to assume I'm at least making a more interesting error than that.\nLastly, on probability theory and Newtonian mechanics: I agree that you shouldn't question how much sense it makes to use calculus in the way that you described, but that's because the application of calculus to mechanics is so clearly-defined that it'd be very hard for the type of confusion I talked about above to sneak in. I'd put evolutionary theory halfway between them: it's partly a novel abstraction, and partly a novel empirical truth. And in this case I do think you have to be very careful in applying the core abstraction of evolution to things like cultural evolution, because it's easy to do so in a confused way.\n\n\n\n\n \n19.2. Rohin Shah's summary and thoughts\n \n\n[Shah][7:06]  (Nov. 6 email)\nNewsletter summaries attached, would appreciate it if Eliezer and Richard checked that I wasn't misrepresenting them. (Conversation is a lot harder to accurately summarize than blog posts or papers.)\n \nBest,\nRohin\n \nPlanned summary for the Alignment Newsletter:\n \nEliezer is known for being pessimistic about our chances of averting AI catastrophe. His main argument is roughly as follows:\n\n\n\n\n\n[Yudkowsky][9:56]  (Nov. 6 email reply)\n\n[…] Eliezer is known for being pessimistic about our chances of averting AI catastrophe. His main argument\n\nI request that people stop describing things as my \"main argument\" unless I've described them that way myself.  These are answers that I customized for Richard Ngo's questions.  Different questions would get differently emphasized replies.  \"His argument in the dialogue with Richard Ngo\" would be fine.\n\n\n\n[Shah][1:53]  (Nov. 8 email reply)\n\nI request that people stop describing things as my \"main argument\" unless I've described them that way myself.\n\nFair enough. It still does seem pretty relevant to know the purpose of the argument, and I would like to state something along those lines in the summary. 
For example, perhaps it is:\n\nOne of several relatively-independent lines of argument that suggest we're doomed; cutting this argument would make almost no difference to the overall take\nYour main argument, but with weird Richard-specific emphases that you wouldn't have necessarily included if making this argument more generally; if someone refuted the core of the argument to your satisfaction it would make a big difference to your overall take\nNot actually an argument you think much about at all, but somehow became the topic of discussion\nSomething in between these options\nSomething else entirely\n\nIf you can't really say, then I guess I'll just say \"His argument in this particular dialogue\".\nI'd also like to know what the main argument is (if there is a main argument rather than lots of independent lines of evidence or something else entirely); it helps me orient to the discussion, and I suspect would be useful for newsletter readers as well.\n\n\n\n\n[Shah][7:06]  (Nov. 6 email)\n1. We are very likely going to keep improving AI capabilities until we reach AGI, at which point either the world is destroyed, or we use the AI system to take some pivotal act before some careless actor destroys the world.\n2. In either case, the AI system must be producing high-impact, world-rewriting plans; such plans are \"consequentialist\" in that the simplest way to get them (and thus, the one we will first build) is if you are forecasting what might happen, thinking about the expected consequences, considering possible obstacles, searching for routes around the obstacles, etc. If you don't do this sort of reasoning, your plan goes off the rails very quickly; it is highly unlikely to lead to high impact. In particular, long lists of shallow heuristics (as with current deep learning systems) are unlikely to be enough to produce high-impact plans.\n3. We're producing AI systems by selecting for systems that can do impressive stuff, which will eventually produce AI systems that can accomplish high-impact plans using a general underlying \"consequentialist\"-style reasoning process (because that's the only way to keep doing more impressive stuff). However, this selection process does not constrain the goals towards which those plans are aimed. In addition, most goals seem to have convergent instrumental subgoals like survival and power-seeking that would lead to extinction. This suggests that, unless we find a way to constrain the goals towards which plans are aimed, we should expect an existential catastrophe.\n4. None of the methods people have suggested for avoiding this outcome seem like they actually avert this story.\n\n\n\n\n[Yudkowsky][9:56]  (Nov. 6 email reply)\n\n[…] This suggests that, unless we find a way to constrain the goals towards which plans are aimed, we should expect an existential catastrophe.\n\nI would not say we face catastrophe \"unless we find a way to constrain the goals towards which plans are aimed\".  This is, first of all, not my ontology, second, I don't go around randomly slicing away huge sections of the solution space.  Workable:  \"This suggests that we should expect an existential catastrophe by default.\" \n\n\n\n[Shah][1:53]  (Nov. 8 email reply)\n\nI would not say we face catastrophe \"unless we find a way to constrain the goals towards which plans are aimed\".\n\nShould I also change \"However, this selection process does not constrain the goals towards which those plans are aimed\", and if so what to? 
(Something along these lines seems crucial to the argument, but if this isn't your native ontology, then presumably you have some other thing you'd say here.)\n\n\n\n\n[Shah][7:06]  (Nov. 6 email)\nRichard responds to this with a few distinct points:\n1. It might be possible to build narrow AI systems that humans use to save the world, for example, by making AI systems that do better alignment research. Such AI systems do not seem to require the property of making long-term plans in the real world in point (3) above, and so could plausibly be safe. We might say that narrow AI systems could save the world but can't destroy it, because humans will put plans into action for the former but not the latter.\n2. It might be possible to build general AI systems that only state plans for achieving a goal of interest that we specify, without executing that plan.\n3. It seems possible to create consequentialist systems with constraints upon their reasoning that lead to reduced risk.\n4. It also seems possible to create systems that make effective plans, but towards ends that are not about outcomes in the real world, but instead are about properties of the plans — think for example of corrigibility (AN #35) or deference to a human user.\n5. (Richard is also more bullish on coordinating not to use powerful and/or risky AI systems, though the debate did not discuss this much.)\n \nEliezer's responses:\n1. This is plausible, but seems unlikely; narrow not-very-consequentialist AI (aka \"long lists of shallow heuristics\") will probably not scale to the point of doing alignment research better than humans.\n\n\n\n\n[Yudkowsky][9:56]  (Nov. 6 email reply)\n\n[…] This is plausible, but seems unlikely; narrow not-very-consequentialist AI (aka \"long lists of shallow heuristics\") will probably not scale to the point of doing alignment research better than humans.\n\nNo, your summarized-Richard-1 is just not plausible.  \"AI systems that do better alignment research\" are dangerous in virtue of the lethally powerful work they are doing, not because of some particular narrow way of doing that work.  If you can do it by gradient descent then that means gradient descent got to the point of doing lethally dangerous work.  Asking for safely weak systems that do world-savingly strong tasks is almost everywhere a case of asking for nonwet water, and asking for AI that does alignment research is an extreme case in point.\n\n\n\n[Shah][1:53]  (Nov. 8 email reply)\n\nNo, your summarized-Richard-1 is just not plausible. \"AI systems that do better alignment research\" are dangerous in virtue of the lethally powerful work they are doing, not because of some particular narrow way of doing that work.\n\nHow about \"AI systems that help with alignment research to a sufficient degree that it actually makes a difference are almost certainly already dangerous.\"?\n(Fwiw, I used the word \"plausible\" because of this sentence from the doc: \"Definitely, is among the more plausible advance-specified miracles we could get.\", though I guess the point was that it is still a miracle, it just also is more likely than other miracles.)\n\n\n\n\n[Ngo][9:59]  (Nov. 6 email reply)\nThanks Rohin! Your efforts are much appreciated.\nEliezer: when you say \"No, your summarized-Richard-1 is just not plausible\", do you mean the argument is implausible, or it's not a good summary of my position (which you also think is implausible)?\nFor my part the main thing I'd like to modify is the term \"narrow AI\". 
In general I'm talking about all systems that are not of literally world-destroying intelligence+agency. E.g. including oracle AGIs which I wouldn't call \"narrow\".\nMore generally, I don't think all AGIs are capable of destroying the world. E.g. humans are GIs. So it might be better to characterise Eliezer as talking about some level of general intelligence which leads to destruction, and me as talking about the things that can be done with systems that are less general or less agentic than that.\n\nWe might say that narrow AI systems could save the world but can't destroy it, because humans will put plans into action for the former but not the latter.\n\nI don't endorse this, I think plenty of humans would be willing to use narrow AI systems to do things that could destroy the world.\n\nsystems that make effective plans, but towards ends that are not about outcomes in the real world, but instead are about properties of the plans\n\nI'd change this to say \"systems with the primary aim of producing plans with certain properties (that aren't just about outcomes in the world)\" \n\n\n\n\n[Yudkowsky][10:18]  (Nov. 6 email reply)\n\nEliezer: when you say \"No, your summarized-Richard-1 is just not plausible\", do you mean the argument is implausible, or it's not a good summary of my position (which you also think is implausible)?\n\nI wouldn't have presumed to state on your behalf whether it's a good summary of your position!  I mean that the stated position is implausible, whether or not it was a good summary of your position.\n\n\n\n[Shah][7:06]  (Nov. 6 email)\n2. This might be an improvement, but not a big one. It is the plan itself that is risky; if the AI system made a plan for a goal that wasn't the one we actually meant, and we don't understand that plan, that plan can still cause extinction. It is the misaligned optimization that produced the plan that is dangerous, even if there was no \"agent\" that specifically wanted the goal that the plan was optimized for.\n3 and 4. It is certainly possible to do such things; the space of minds that could be designed is very large. However, it is difficult to do such things, as they tend to make consequentialist reasoning weaker, and on our current trajectory the first AGI that we build will probably not look like that.\n\n\n\n\n[Yudkowsky][9:56]  (Nov. 6 email reply)\n\n2. This might be an improvement, but not a big one. It is the plan itself that is risky; if the AI system made a plan for a goal that wasn't the one we actually meant, and we don't understand that plan, that plan can still cause extinction. It is the misaligned optimization that produced the plan that is dangerous, even if there was no \"agent\" that specifically wanted the goal that the plan was optimized for.\n\nNo, it's not a significant improvement if the \"non-executed plans\" from the system are meant to do things in human hands powerful enough to save the world.  They could of course be so weak as to make their human execution have no inhumanly big consequences, but this is just making the AI strategically isomorphic to a rock.  The notion of there being \"no 'agent' that specifically wanted the goal\" seems confused to me as well; this is not something I'd ever say as a restatement of one of my own opinions.  I'd shrug and tell someone to taboo the word 'agent' and would try to talk without using the word if they'd gotten hung up on that point.\n\n\n\n[Shah][7:06]  (Nov. 
6 email)\nPlanned opinion:\n \nI first want to note my violent agreement with the notion that a major scary thing is \"consequentialist reasoning\", and that high-impact plans require such reasoning, and that we will end up building AI systems that produce high-impact plans. Nonetheless, I am still optimistic about AI safety relative to Eliezer, which I suspect comes down to three main disagreements:\n1. There are many approaches that don't solve the problem, but do increase the level of intelligence required before the problem leads to extinction. Examples include Richard's points 1-4 above. For example, if we build a system that states plans without executing them, then for the plans to cause extinction they need to be complicated enough that the humans executing those plans don't realize that they are leading to an outcome that was not what they wanted. It seems non-trivially probable to me that such approaches are sufficient to prevent extinction up to the level of AI intelligence needed before we can execute a pivotal act.\n2. The consequentialist reasoning is only scary to the extent that it is \"aimed\" at a bad goal. It seems non-trivially probable to me that it will be \"aimed\" at a goal sufficiently good to not lead to existential catastrophe, without putting in much alignment effort.\n3. I do expect some coordination to not do the most risky things.\nI wish the debate had focused more on the claim that narrow AI can't e.g. do better alignment research, as it seems like a major crux. (For example, I think that sort of intuition drives my disagreement #1.) I expect AI progress looks a lot like \"the heuristics get less and less shallow in a gradual / smooth / continuous manner\" which eventually leads to the sorts of plans Eliezer calls \"consequentialist\", whereas I think Eliezer expects a sharper qualitative change between \"lots of heuristics\" and that-which-implements-consequentialist-planning.\n\n\n\n \n20. November 6 conversation\n \n20.1. Concrete plans, and AI-mediated transparency\n \n\n[Yudkowsky][13:22]\nSo I have a general thesis about a failure mode here which is that, the moment you try to sketch any concrete plan or events which correspond to the abstract descriptions, it is much more obviously wrong, and that is why the descriptions stay so abstract in the mouths of everybody who sounds more optimistic than I am.\nThis may, perhaps, be confounded by the phenomenon where I am one of the last living descendants of the lineage that ever knew how to say anything concrete at all.  Richard Feynman – or so I would now say in retrospect – is noticing concreteness dying out of the world, and being worried about that, at the point where he goes to a college and hears a professor talking about \"essential objects\" in class, and Feynman asks \"Is a brick an essential object?\" – meaning to work up to the notion of the inside of a brick, which can't be observed because breaking a brick in half just gives you two new exterior surfaces – and everybody in the classroom has a different notion of what it would mean for a brick to be an essential object. \nRichard Feynman knew to try plugging in bricks as a special case, but the people in the classroom didn't, and I think the mental motion has died out of the world even further since Feynman wrote about it.  The loss has spread to STEM as well.  
Though if you don't read old books and papers and contrast them to new books and papers, you wouldn't see it, and maybe most of the people who'll eventually read this will have no idea what I'm talking about because they've never seen it any other way…\nI have a thesis about how optimism over AGI works.  It goes like this: People use really abstract descriptions and never imagine anything sufficiently concrete, and this lets the abstract properties waver around ambiguously and inconsistently to give the desired final conclusions of the argument.  So MIRI is the only voice that gives concrete examples and also by far the most pessimistic voice; if you go around fully specifying things, you can see that what gives you a good property in one place gives you a bad property someplace else, you see that you can't get all the properties you want simultaneously.  Talk about a superintelligence building nanomachinery, talk concretely about megabytes of instructions going to small manipulators that repeat to lay trillions of atoms in place, and this shows you a lot of useful visible power paired with such unpleasantly visible properties as \"no human could possibly check what all those instructions were supposed to do\".\nAbstract descriptions, on the other hand, can waver as much as they need to between what's desirable in one dimension and undesirable in another.  Talk about \"an AGI that just helps humans instead of replacing them\" and never say exactly what this AGI is supposed to do, and this can be so much more optimistic so long as it never becomes too unfortunately concrete.\nWhen somebody asks you \"how powerful is it?\" you can momentarily imagine – without writing it down – that the AGI is helping people by giving them the full recipes for protein factories that build second-stage nanotech and the instructions to feed those factories, and reply, \"Oh, super powerful! More than powerful enough to flip the gameboard!\" Then when somebody asks how safe it is, you can momentarily imagine that it's just giving a human mathematician a hint about proving a theorem, and say, \"Oh, super duper safe, for sure, it's just helping people!\" \nOr maybe you don't even go through the stage of momentarily imagining the nanotech and the hint, maybe you just navigate straight in the realm of abstractions from the impossibly vague wordage of \"just help humans\" to the reassuring and also extremely vague \"help them lots, super powerful, very safe tho\".\n\n[…] I wish the debate had focused more on the claim that narrow AI can't e.g. do better alignment research, as it seems like a major crux. (For example, I think that sort of intuition drives my disagreement #1.) 
I expect AI progress looks a lot like \"the heuristics get less and less shallow in a gradual / smooth / continuous manner\" which eventually leads to the sorts of plans Eliezer calls \"consequentialist\", whereas I think Eliezer expects a sharper qualitative change between \"lots of heuristics\" and that-which-implements-consequentialist-planning.\n\nIt is in this spirit that I now ask, \"What the hell could it look like concretely for a safely narrow AI to help with alignment research?\"\nOr if you think that a left-handed wibble planner can totally make useful plans that are very safe because it's all leftish and wibbly: can you please give an example of a plan to do what?\nAnd what I expect is for minds to bounce off that problem as they first try to visualize \"Well, a plan to give mathematicians hints for proving theorems… oh, Eliezer will just say that's not useful enough to flip the gameboard… well, plans for building nanotech… Eliezer will just say that's not safe… darn it, this whole concreteness thing is such a conversational no-win scenario, maybe there's something abstract I can say instead\".\n\n\n\n[Shah][16:41]\nIt's reasonable to suspect failures to be concrete, but I don't buy that hypothesis as applied to me; I think I have sufficient personal evidence against it, despite the fact that I usually speak abstractly. I don't expect to convince you of this, nor do I particularly want to get into that sort of debate.\nI'll note that I have the exact same experience of not seeing much concreteness, both of other people and myself, about stories that lead to doom. To be clear, in what I take to be the Eliezer-story, the part where the misaligned AI designs a pathogen that wipes out all humans or solves nanotech and gains tons of power or some other pivotal act seems fine. The part that seems to lack concreteness is how we built the superintelligence and why the superintelligence was misaligned enough to lead to extinction. (Well, perhaps. I also wouldn't be surprised if you gave a concrete example and I disagreed that it would lead to extinction.)\nFrom my perspective, the simple concrete stories about the future are wrong and the complicated concrete stories about the future don't sound plausible, whether about safety or about doom.\nNonetheless, here's an attempt at some concrete stories. It is not the case that I think these would be convincing to you. I do expect you to say that it won't be useful enough to flip the gameboard (or perhaps that if it could possibly flip the gameboard then it couldn't be safe), but that seems to be because you think alignment will be way more difficult than I do (in expectation), and perhaps we should get into that instead.\n\nInstead of having to handwrite code that does feature visualization or other methods of \"naming neurons\", an AI assistant can automatically inspect a neural net's weights, perform some experiments with them, and give them human-understandable \"names\". What a \"name\" is depends on the system being analyzed, but you could imagine that sometimes it's short memorable phrases (e.g. for the later layers of a language model), or pictures of central concepts (e.g. for image classifiers), or paragraphs describing the concept (e.g. for novel concepts discovered by a scientist AI). 
Given these names, it is much easier for humans to read off \"circuits\" from the neural net to understand how it works.\nLike the above, except the AI assistant also reads out the circuits, and efficiently reimplements the neural network in, say, readable Python, that humans can then more easily mechanistically understand. (These two tasks could also be done by two different AI systems, instead of the same one; perhaps that would be easier / safer.)\nWe have AI assistants search for inputs on which the AI system being inspected would do something that humans would rate as bad. (We can choose any not-horribly-unnatural rating scheme we want that humans can understand, e.g. \"don't say something the user said not to talk about, even if it's in their best interest\" can be a tenet for finetuned GPT-N if we want.) We can either train on those inputs, or use them as a test for how well our other alignment schemes have worked.\n\n(These are all basically leveraging the fact that we could have AI systems that are really knowledgeable in the realm of \"connecting neural net activations to human concepts\", which seems plausible to do without being super general or consequentialist.)\nThere's also lots of meta stuff, like helping us with literature reviews, speeding up paper- and blog-post-writing, etc, but I doubt this is getting at what you care about\n\n\n\n\n[Yudkowsky][17:09]\nIf we thought that helping with literature review was enough to save the world from extinction, then we should be trying to spend at least $50M on helping with literature review right now today, and if we can't effectively spend $50M on that, then we also can't build the dataset required to train narrow AI to do literature review.  Indeed, any time somebody suggests doing something weak with AGI, my response is often \"Oh how about we start on that right now using humans, then,\" by which question its pointlessness is revealed.\n\n\n\n[Shah][17:11]\nI mean, doesn't seem crazy to just spend $50M on effective PAs, but in any case I agree with you that this is not the main thing to be thinking about\n\n\n\n\n[Yudkowsky][17:13]\nThe other cases of \"using narrow AI to help with alignment\" via pointing an AI, or rather a loss function, at a transparency problem, seem to seamlessly blend into all of the other clever-ideas we may have for getting more insight into the giant inscrutable matrices of floating-point numbers.  By this concreteness, it is revealed that we are not speaking of von-Neumann-plus-level AGIs who come over and firmly but gently set aside our paradigm of giant inscrutable matrices, and do something more alignable and transparent; rather, we are trying more tricks with loss functions to get human-language translations of the giant inscrutable matrices.\nI have thought of various possibilities along these lines myself.  
They're on my list of things to try out when and if the EA community has the capacity to try out ML ideas in a format I could and would voluntarily access.\nThere's a basic reason I expect the world to die despite my being able to generate infinite clever-ideas for ML transparency, which, at the usual rate of 5% of ideas working, could get us as many as three working ideas in the impossible event that the facilities were available to test 60 of my ideas.\n\n\n\n[Shah][17:15]\n\nBy this concreteness, it is revealed that we are not speaking of von-Neumann-plus-level AGIs who come over and firmly but gently set aside our paradigm of giant inscrutable matrices, and do something more alignable and transparent; rather, we are trying more tricks with loss functions to get human-language translations of the giant inscrutable matrices.\n\nAgreed, but I don't see the point here\n(Beyond \"Rohin and Eliezer disagree on how impossible it is to align giant inscrutable matrices\")\n(I might dispute \"tricks with loss functions\", but that's nitpicky, I think)\n\n\n\n\n[Yudkowsky][17:16]\nIt's that, if we get better transparency, we are then left looking at stronger evidence that our systems are planning to kill us, but this will not help us because we will not have anything we can do to make the system not plan to kill us.\n\n\n\n[Shah][17:18]\nThe adversarial training case is one example where you are trying to change the system, and if you'd like I can generate more along these lines, but they aren't going to be that different and are still going to come down to what I expect you will call \"playing tricks with loss functions\"\n\n\n\n\n[Yudkowsky][17:18]\nWell, part of the point is that \"AIs helping us with alignment\" is, from my perspective, a classic case of something that might ambiguate between the version that concretely corresponds to \"they are very smart and can give us the Textbook From The Future that we can use to easily build a robust superintelligence\" (which is powerful, pivotal, unsafe, and kills you) or \"they can help us with literature review\" (safe, weak, unpivotal) or \"we're going to try clever tricks with gradient descent and loss functions and labeled datasets to get alleged natural-language translations of some of the giant inscrutable matrices\" (which was always the plan but which I expected to not be sufficient to avert ruin).\n\n\n\n[Shah][17:19]\nI'm definitely thinking of the last one, but I take your point that disambiguating between these is good\nAnd I also think it's revealing that this is not in fact the crux of disagreement\n\n\n\n \n20.2. Concrete disaster scenarios, out-of-distribution problems, and corrigibility\n \n\n[Yudkowsky][17:20]\n\nI'll note that I have the exact same experience of not seeing much concreteness, both of other people and myself, about stories that lead to doom.\n\nI have a boundless supply of greater concrete detail for the asking, though if you ask large questions I may ask for a narrower question to avoid needing to supply 10,000 words of concrete detail.\n\n\n\n[Shah][17:24]\nI guess the main thing is to have an example of a story which includes a method for building a superintelligence (yes, I realize this is info-hazard-y, sorry, an abstract version might work) + how it becomes misaligned and what its plans become optimized for. 
Though as I type this out I realize that I'm likely going to disagree on the feasibility of the method for building a superintelligence?\n\n\n\n\n[Yudkowsky][17:25]\nI mean, I'm obviously not going to want to make any suggestions that I think could possibly work and which are not very very very obvious.\n\n\n\n[Shah][17:25]\nYup, makes sense\n\n\n\n\n[Yudkowsky][17:25]\nBut I don't think that's much of an issue.\nI could just point to MuZero, say, and say, \"Suppose something a lot like this scaled.\"\nDo I need to explain how you would die in this case?\n\n\n\n[Shah][17:26]\nWhat sort of domain and what training data?\nLike, do we release a robot in the real world, have it collect data, build a world model, and run MuZero with a reward for making a number in a bank account go up?\n\n\n\n\n[Yudkowsky][17:28]\nSupposing they're naive about it: playing all the videogames, predicting all the text and images, solving randomly generated computer puzzles, accomplishing sets of easily-labelable sensorimotor tasks using robots and webcams\n\n\n\n[Shah][17:29]\nOkay, so far I'm with you. Is there a separate deployment step, and if so, how did they finetune the agent for the deployment task? Or did it just take over the world halfway through training?\n\n\n\n\n[Yudkowsky][17:29]\n(though this starts to depart from the Mu Zero architecture if it has the ability to absorb knowledge via learning on more purely predictive problems)\n\n\n\n[Shah][17:30]\n(I'm okay with that, I think)\n\n\n\n\n[Yudkowsky][17:32]\nvaguely plausible rough scenario: there was a big ongoing debate about whether or not to try letting the system trade stocks, and while the debate was going on, the researchers kept figuring out ways to make Something Zero do more with less computing power, and then it started visibly talking at people and trying to manipulate them, and there was an enormous fuss, and what happens past this point depends on whether or not you want me to try to describe a scenario in which we die with an unrealistic amount of dignity, or a realistic scenario where we die much faster\nI shall assume the former.\n\n\n\n[Shah][17:32]\nActually I think I want concreteness earlier\n\n\n\n\n[Yudkowsky][17:32]\nOkay.  I await your further query.\n\n\n\n[Shah][17:32]\n\nit started visibly talking at people and trying to manipulate them\n\nWhat caused this?\nWas it manipulating people in order to make e.g. sensory stuff easier to predict?\n\n\n\n\n[Yudkowsky][17:36]\nCumulative lifelong learning from playing videogames took its planning abilities over a threshold; cumulative solving of computer games and multimodal real-world tasks took its internal mechanisms for unifying knowledge and making them coherent over a threshold; and it gained sufficient compressive understanding of the data it had implicitly learned by reading through hundreds of terabytes of Common Crawl, not so much the semantic knowledge contained in those pages, but the associated implicit knowledge of the Things That Generate Text (aka humans). 
\nThese combined to form an imaginative understanding that some of its real-world problems were occurring in interactions with the Things That Generate Text, and it started making plans which took that into account and tried to have effects on the Things That Generate Text in order to affect the further processes of its problems.\nOr perhaps somebody trained it to write code in partnership with programmers and it already had experience coworking with and manipulating humans.\n\n\n\n[Shah][17:39]\nChecking understanding: At this point it is able to make novel plans that involve applying knowledge about humans and their role in the data-generating process in order to create a plan that leads to more reward for the real-world problems?\n(Which we call \"manipulating humans\")\n\n\n\n\n[Yudkowsky][17:40]\nYes, much as it might have gained earlier experience with making novel Starcraft plans that involved \"applying knowledge about humans and their role in the data-generating process in order to create a plan that leads to more reward\", if it was trained on playing Starcraft against humans at any point, or even needed to make sense of how other agents had played Starcraft\nThis in turn can be seen as a direct outgrowth and isomorphism of making novel plans for playing Super Mario Brothers which involve understanding Goombas and their role in the screen-generating process\nexcept obviously that the Goombas are much less complicated and not themselves agents\n\n\n\n[Shah][17:41]\nYup, makes sense. Not sure I totally agree that this sort of thing is likely to happen as quickly as it sounds like you believe but I'm happy to roll with it; I do think it will happen eventually\nSo doesn't seem particularly cruxy\nI can see how this leads to existential catastrophe, if you don't expect the programmers to be worried at this early manipulation warning sign. (This is potentially cruxy for p(doom), but doesn't feel like the main action.)\n\n\n\n\n[Yudkowsky][17:46]\nOn my mainline, where this is all happening at Deepmind, I do expect at least one person in the company has ever read anything I've written.  I am not sure if Demis understands he is looking straight at death, but I am willing to suppose for the sake of discussion that he does understand this – which isn't ruled out by my actual knowledge – and talk about how we all die from there.\nThe very brief tl;dr is that they know they're looking at a warning sign but they cannot actually fix the real underlying problem that the warning sign is about, and AGI is getting easier for other people to develop too.\n\n\n\n[Shah][17:46]\nI assume this is primarily about social dynamics + the ability to patch things such that things look fixed?\nYeah, makes sense\nI assume the \"real underlying problem\" is somehow not the fact that the task you were training your AI system to do was not what you actually wanted it to do?\n\n\n\n\n[Yudkowsky][17:48]\nIt's about the unavailability of any actual fix and the technology continuing to get easier.  Even if Deepmind understands that surface patches are lethal and understands that the easy ways of hammering down the warning signs are just eliminating the visibility rather than the underlying problems, there is nothing they can do about that except wait for somebody else to destroy the world instead.\nI do not know of any pivotal task you could possibly train an AI system to do using tons of correctly labeled data.  
This is part of why we're all dead.\n\n\n\n[Shah][17:50]\nYeah, I think if I adopted (my understanding of) your beliefs about alignment difficulty, and there wasn't already a non-racing scheme set in place, seems like we're in trouble\n\n\n\n\n[Yudkowsky][17:50]\nLike, \"the real underlying problem is the fact that the task you were training your AI system to do was not what you actually wanted it to do\" is one way of looking at one of the several problems that are truly fundamental, but this has no remedy that I know of, besides training your AI to do something small enough to be unpivotal.\n\n\n\n[Shah][17:51][17:52]\nI don't actually know the response you'd have to \"why not just do value alignment?\" I can name several guesses\n\n\n\n\n\n\nFragility of value\nNot sufficiently concrete\nCan't give correct labels for human values\n\n\n\n\n\n[Yudkowsky][17:52][17:52]\nTo be concrete, you can't ask the AGI to build one billion nanosystems, label all the samples that wiped out humanity as bad, and apply gradient descent updates\n\n\n\n\nIn part, you can't do that because one billion samples will get you one billion lethal systems, but even if that wasn't true, you still couldn't do it.\n\n\n\n[Shah][17:53]\n\neven if that wasn't true, you still couldn't do it.\n\nWhy not? Nearest unblocked strategy?\n\n\n\n\n[Yudkowsky][17:53]\n…no, because the first supposed output for training generated by the system at superintelligent levels kills everyone and there is nobody left to label the data.\n\n\n\n[Shah][17:54]\nOh, I thought you were asking me to imagine away that effect with your second sentence\nIn fact, I still don't understand what it was supposed to mean\n(Specifically this one:\n\nIn part, you can't do that because one billion samples will get you one billion lethal systems, but even if that wasn't true, you still couldn't do it.\n\n)\n\n\n\n\n[Yudkowsky][17:55]\nthere's a separate problem where you can't apply reinforcement learning when there's no good examples, even assuming you live to label them\nand, of course, yet another form of problem where you can't tell the difference between good and bad samples\n\n\n\n[Shah][17:56]\nOkay, makes sense\nLet me think a bit\n\n\n\n\n[Yudkowsky][18:00]\nand lest anyone start thinking that was an exhaustive list of fundamental problems, note the absence of, for example, \"applying lots of optimization using an outer loss function doesn't necessarily get you something with a faithful internal cognitive representation of that loss function\" aka \"natural selection applied a ton of optimization power to humans using a very strict very simple criterion of 'inclusive genetic fitness' and got out things with no explicit representation of or desire towards 'inclusive genetic fitness' because that's what happens when you hill-climb and take wins in the order a simple search process through cognitive engines encounters those wins\"\n\n\n\n[Shah][18:02]\n(Agreed that is another major fundamental problem, in the sense of something that could go wrong, as opposed to something that almost certainly goes wrong)\nI am still curious about the \"why not value alignment\" question, where to expand, it's something like \"let's get a wide range of situations and train the agent with gradient descent to do what a human would say is the right thing to do\". 
(We might also call this \"imitation\"; maybe \"value alignment\" isn't the right term, I was thinking of it as trying to align the planning with \"human values\".)\nMy own answer is that we shouldn't expect this to generalize to nanosystems, but that's again much more of a \"there's not great reason to expect this to go right, but also not great reason to go wrong either\".\n(This is a place where I would be particularly interested in concreteness, i.e. what does the AI system do in these cases, and how does that almost-necessarily follow from the way it was trained?)\n\n\n\n\n[Yudkowsky][18:05]\nwhat's an example element from the \"wide range of situations\" and what is the human labeling?\n(I could make something up and let you object, but it seems maybe faster to ask you to make something up)\n\n\n\n[Shah][18:09]\nUh, let's say that the AI system is being trained to act well on the Internet, and it's shown some tweet / email / message that a user might have seen, and asked to reply to the tweet / email / message. User says whether the replies are good or not (perhaps via comparisons, a la Deep RL from Human Preferences)\nIf I were not making it up on the spot, it would be more varied than that, but would not include \"building nanosystems\"\n\n\n\n\n[Yudkowsky][18:10]\nAnd presumably, in this example, the AI system is not smart enough that exposing humans to text it generates is already a world-wrecking threat if the AI is hostile?\ni.e., does not just hack the humans\n\n\n\n[Shah][18:10]\nYeah, let's assume that for the moment\n\n\n\n\n[Yudkowsky][18:11]\nso what you want to do is train on 'weak-safe' domains where the AI isn't smart enough to do damage, and the humans can label the data pretty well because the AI isn't smart enough to fool them\n\n\n\n[Shah][18:11]\n\"want to do\" is putting it a bit strongly. This is more like a scenario I can't prove is unsafe, but do not strongly believe is safe\n\n\n\n\n[Yudkowsky][18:12]\nbut the domains where the AI can execute a world-saving pivotal act are out-of-distribution for those domains.  extremely out-of-distribution.  fundamentally out-of-distribution.  the AI's own thought processes are out-of-distribution for any inscrutable matrices that were learned to influence those thought processes in a corrigible direction.\nit's not like trying to generalize experience from playing Super Mario Bros to Metroid.\n\n\n\n[Shah][18:13]\nDefinitely, but my reaction to this is \"okay, no particular reason for it to be safe\" — but also not huge reason for it to be unsafe. 
Like, it would not hugely shock me if what-we-want is sufficiently \"natural\" that the AI system picks up on the right thing from the 'weak-safe' domains alone\n\n\n\n\n[Yudkowsky][18:14]\nyou have this whole big collection of possible AI-domain tuples that are powerful-dangerous and they have properties that aren't in any of the weak-safe training situations, that are moving along third dimensions where all the weak-safe training examples were flat\nnow, just because something is out-of-distribution, doesn't mean that nothing can ever generalize there\n\n\n\n[Shah][18:15]\nI mean, you correctly would not accept this argument if I said that by training blue-car-driving robots solely on blue cars I am ensuring they would be bad on red-car-driving\n\n\n\n\n[Yudkowsky][18:15]\nhumans generalize from the savannah to the vacuum\nso the actual problem is that I expect the optimization to generalize and the corrigibility to fail\n\n\n\n[Shah][18:15]\n^Right, that\nI am not clear on why you expect this so strongly\nMaybe you think generalization is extremely rare and optimization is a special case because of how it is so useful for basically everything?\n\n\n\n\n[Yudkowsky][18:16]\nno\ndid you read the section of my dialogue with Richard Ngo where I tried to explain why corrigibility is anti-natural, or where Nate tried to give the example of why planning to get a laser from point A to point B without being scattered by fog is the sort of thing that also naturally says to prevent humans from filling the room with fog?\n\n\n\n[Shah][18:19]\nAh, right, I should have predicted that. (Yes, I did read it.)\n\n\n\n\n[Yudkowsky][18:19]\nor for that matter, am I correct in remembering that these sections existed\nk\nso, do you need more concrete details about some part of that?\na bunch of the reason why I suspect that corrigibility is anti-natural is from trying to work particular problems there in MIRI's earlier history, and not finding anything that wasn't contrary to coherence: the overlap in the shards of inner optimization that, when ground into existence by the outer optimization loop, coherently mix to form the part of cognition that generalizes to do powerful things; and nobody else finding it either, etc.\n\n\n\n[Shah][18:22]\nI think I disagreed with that part more directly, in that it seemed like in those sections the corrigibility was assumed to be imposed \"from the outside\" on top of a system with a goal, rather than having a goal that was corrigible. (I also had a similar reaction to the 2015 Corrigibility paper.)\nSo, for example, it seems to me like CIRL is an example of an objective that can be maximized in which the agent is corrigible-in-a-certain-sense. I agree that due to updated deference it will eventually stop seeking information from the human / be subject to corrections by the human. 
I don't see why, at that point, it wouldn't have just learned to do what the humans actually want it to do.\n(There are objections like misspecification of the reward prior, or misspecification of the P(behavior | reward), but those feel like different concerns to the ones you're describing.)\n\n\n\n\n[Yudkowsky][18:25]\na thing that MIRI tried and failed to do was find a sensible generalization of expected utility which could contain a generalized utility function that would look like an AI that let itself be shut down, without trying to force you to shut it down\nand various workshop attendees not employed by MIRI, etc\n\n\n\n[Shah][18:26]\nI do agree that a CIRL agent would not let you shut it down\nAnd this is something that should maybe give you pause, and be a lot more careful about potential misspecification problems\n\n\n\n\n[Yudkowsky][18:27]\nif you could give a perfectly specified prior such that the result of updating on lots of observations would be a representation of the utility function that CEV outputs, and you could perfectly inner-align an optimizer to do that thing in a way that scaled to arbitrary levels of cognitive power, then you'd be home free, sure.\n\n\n\n[Shah][18:28]\nI'm not trying to claim this is a solution. I'm more trying to point at a reason why I am not convinced that corrigibility is anti-natural.\n\n\n\n\n[Yudkowsky][18:28]\nthe reason CIRL doesn't get off the ground is that there isn't any known, and isn't going to be any known, prior over (observation|'true' utility function) such that an AI which updates on lots of observations ends up with our true desired utility function.\nif you can do that, the AI doesn't need to be corrigible\nthat's why it's not a counterexample to corrigibility being anti-natural\nthe AI just boomfs to superintelligence, observes all the things, and does all the goodness\nit doesn't listen to you say no and won't let you shut it down, but by hypothesis this is fine because it got the true utility function yay\n\n\n\n[Shah][18:31]\nIn the world where it doesn't immediately start out as a superintelligence, it spends a lot of time trying to figure out what you want, asking you what you prefer it does, making sure to focus on the highest-EV questions, being very careful around any irreversible actions, etc\n\n\n\n\n[Yudkowsky][18:31]\nand making itself smarter as fast as possible\n\n\n\n[Shah][18:32]\nYup, that too\n\n\n\n\n[Yudkowsky][18:32]\nI'd do that stuff too if I was waking up in an alien world\nand, with all due respect to myself, I am not corrigible\n\n\n\n[Shah][18:33]\nYou'd do that stuff because you'd want to make sure you don't accidentally get killed by the aliens; a CIRL agent does it because it \"wants to help the human\"\n\n\n\n\n[Yudkowsky][18:34]\nno, a CIRL agent does it because it wants to implement the True Utility Function, which it may, early on, suspect to consist of helping* humans, and maybe to have some overlap (relative to its currently reachable short-term outcome sets, though these are of vanishingly small relative utility under the True Utility Function) with what some humans desire some of the time\n(*) 'help' may not be help\nseparately it asks a lot of questions because the things humans do are evidence about the True Utility Function\n\n\n\n[Shah][18:35]\nI agree this is also an accurate description of CIRL\nA more accurate description, even\nWait why is it vanishingly small relative utility? Is the assumption that the True Utility Function doesn't care much about humans? 
Or was there something going on with short vs. long time horizons that I didn't catch\n\n\n\n\n[Yudkowsky][18:39]\nin the short term, a weak CIRL tries to grab the hand of a human about to fall off a cliff, because its TUF probably does prefer the human who didn't fall off the cliff, if it has only exactly those two options, and this is the sort of thing it would learn was probably true about the TUF early on, given the obvious ways of trying to produce a CIRL-ish thing via gradient descent\nhumans eat healthy in the ancestral environment when ice cream doesn't exist as an option\nin the long run, the things the CIRL agent wants do not overlap with anything humans find more desirable than paperclips (because there is no known scheme that takes in a bunch of observations, updates a prior, and outputs a utility function whose achievable maximum is galaxies living happily forever after)\nand plausible TUF schemes are going to notice that grabbing the hand of a current human is a vanishing fraction of all value eventually at stake\n\n\n\n[Shah][18:42]\nOkay, cool, short vs. long time horizons\nMakes sense\n\n\n\n\n[Yudkowsky][18:42]\nright, a weak but sufficiently reflective CIRL agent will notice an alignment of short-term interests with humans but deduce misalignment of long-term interests\nthough I should maybe call it CIRL* to denote the extremely probable case that the limit of its updating on observation does not in fact converge to CEV's output\n\n\n\n[Soares][18:43]\n(Attempted rephrasing of a point I read Eliezer as making upstream, in hopes that a rephrasing makes it click for Rohin:) \nCorrigibility isn't for bug-free CIRL agents with a prior that actually dials in on goodness given enough observation; if you have one of those you can just run it and call it a day. Rather, corrigibility is for surviving your civilization's inability to do the job right on the first try.\nCIRL doesn't have this property; it instead amounts to the assertion \"if you are optimizing with respect to a distribution on utility functions that dials in on goodness given enough observation then that gets you just about as much good as optimizing goodness\"; this is somewhat tangential to corrigibility.\n\n\n\n\n[Yudkowsky: +1]\n\n\n\n\n\n\n\n\n[Yudkowsky][18:44]\nand you should maybe update on how, even though somebody thought CIRL was going to be more corrigible, in fact it made absolutely zero progress on the real problem\n\n\n\n\n[Ngo: ]\n\n\n\n\nthe notion of having an uncertain utility function that you update from observation is coherent and doesn't yield circular preferences, running in circles, incoherent betting, etc.\nso, of course, it is antithetical in its intrinsic nature to corrigibility\n\n\n\n[Shah][18:47]\nI guess I am not sure that I agree that this is the purpose of corrigibility-as-I-see-it. 
The point of corrigibility-as-I-see-it is that you don't have to specify the object-level outcomes that your AI system must produce, and instead you can specify the meta-level processes by which your AI system should come to know what the object-level outcomes to optimize for are\n(At CHAI we had taken to talking about corrigibility_MIRI and corrigibility_Paul as completely separate concepts and I have clearly fallen out of that good habit)\n\n\n\n\n[Yudkowsky][18:48]\nspeaking as the person who invented the concept, asked for name submissions for it, and selected 'corrigibility' as the winning submission, that is absolutely not how I intended the word to be used\nand I think that the thing I was actually trying to talk about is important and I would like to retain a word that talks about it\n'corrigibility' is meant to refer to the sort of putative hypothetical motivational properties that prevent a system from wanting to kill you after you didn't build it exactly right\nlow impact, mild optimization, shutdownability, abortable planning, behaviorism, conservatism, etc.  (note: some of these may be less antinatural than others)\n\n\n\n[Shah][18:51]\nCool. Sorry for the miscommunication, I think we should probably backtrack to here\n\nso the actual problem is that I expect the optimization to generalize and the corrigibility to fail\n\nand restart.\nThough possibly I should go to bed, it is quite late here and there was definitely a time at which I would not have confused corrigibility_MIRI with corrigibility_Paul, and I am a bit worried at my completely having missed that this time\n\n\n\n\n[Yudkowsky][18:51]\nthe thing you just said, interpreted literally, is what I would call simply \"going meta\" but my guess is you have a more specific metaness in mind\n…does Paul use \"corrigibility\" to mean \"going meta\"? I don't think I've seen Paul doing that.\n\n\n\n[Shah][18:54]\nNot exactly \"going meta\", no (and I don't think I exactly mean that either). But I definitely infer a different concept from https://www.alignmentforum.org/posts/fkLYhTQteAu5SinAc/corrigibility than the one you're describing here. It is definitely possible that this comes from me misunderstanding Paul; I have done so many times\n\n\n\n\n[Yudkowsky][18:55]\nThat looks to me like Paul used 'corrigibility' around the same way I meant it, if I'm not just reading my own face into those clouds.  maybe you picked up on the exciting metaness of it and thought 'corrigibility' was talking about the metaness part? \nbut I also want to create an affordance for you to go to bed\nhopefully this last conversation combined with previous dialogues has created any sense of why I worry that corrigibility is anti-natural and hence that \"on the first try at doing it, the optimization generalizes from the weak-safe domains to the strong-lethal domains, but the corrigibility doesn't\"\nso I would then ask you what part of this you were skeptical about\nas a place to pick up when you come back from the realms of Morpheus\n\n\n\n[Shah][18:58]\nYup, sounds good. Talk to you tomorrow!\n\n\n\n \n21. November 7 conversation\n \n21.1. 
Corrigibility, value learning, and pessimism\n \n\n[Shah][3:23]\nQuick summary of discussion so far (in which I ascribe views to Eliezer, for the sake of checking understanding, omitting for brevity the parts about how these are facts about my beliefs about Eliezer's beliefs and not Eliezer's beliefs themselves):\n\nSome discussion of \"how to use non-world-optimizing AIs to help with AI alignment\", which are mostly in the category \"clever tricks with gradient descent and loss functions and labeled datasets\" rather than \"textbook from the future\". Rohin thinks these help significantly (and that \"significant help\" = \"reduced x-risk\"). Eliezer thinks that whatever help they provide is not sufficient to cross the line from \"we need a miracle\" to \"we have a plan that has non-trivial probability of success without miracles\". The crux here seems to be alignment difficulty.\nSome discussion of how doom plays out. I agree with Eliezer that if the AI is catastrophic by default, and we don't have a technique that stops the AI from being catastrophic by default, and we don't already have some global coordination scheme in place, then bad things happen. Cruxes seem to be alignment difficulty and the plausibility of a global coordination scheme, of which alignment difficulty seems like the bigger one.\nOn alignment difficulty, an example scenario is \"train on human judgments about what the right thing to do is on a variety of weak-safe domains, and hope for generalization to potentially-lethal domains\". Rohin views this as neither confidently safe nor confidently unsafe. Eliezer views this as confidently unsafe, because he strongly expects the optimization to generalize while the corrigibility doesn't, because corrigibility is anti-natural.\n\n(Incidentally, \"optimization generalizes but corrigibility doesn't\" is an example of the sort of thing I wish were more concrete, if you happen to be able to do that)\nMy current take on \"corrigibility\":\n\nPrior to this discussion, in my head there was corrigibility_A and corrigibility_B. Corrigibility_A, which I associated with MIRI, was about imposing a constraint \"from the outside\". Given an AI system, it is a method of modifying that AI system to (say) allow you to shut it down, by performing some sort of operation on its goal. Corrigibility_B, which I associated with Paul, was about building an AI system which would have particular nice behaviors like learning about the user's preferences, accepting corrections about what it should do, etc.\nAfter this discussion, I think everyone meant corrigibility_B all along. The point of the 2015 MIRI paper was to check whether it is possible to build a version of corrigibility_B that was compatible with expected utility maximization with a not-terribly-complicated utility function; the point of this was to see whether corrigibility could be made compatible with \"plans that lase\".\nWhile I think people agree on the behaviors of corrigibility, I am not sure they agree on why we want it. Eliezer wants it for surviving failures, but maybe others want it for \"dialing in on goodness\". When I think about a \"broad basin of corrigibility\", that intuitively seems more compatible with the \"dialing in on goodness\" framing (but this is an aesthetic judgment that could easily be wrong).\nI don't think I meant \"going meta\", e.g. I wouldn't have called indirect normativity an example of corrigibility. I think I was pointing at \"dialing in on goodness\" vs. 
\"specifying goodness\".\nI agree CIRL doesn't help survive failures. But if you instead talk about \"dialing in on goodness\", CIRL does in fact do this, at least conceptually (and other alternatives don't).\nI am somewhat surprised that \"how to conceptually dial in on goodness\" is not something that seems useful to you. Maybe you think it is useful, but you're objecting to me calling it corrigibility, or saying we knew how to do it before CIRL?\n\n(A lot of the above on corrigibility is new, because the distinction between surviving-failures and dialing-in-on-goodness as different use cases for very similar kinds of behaviors is new to me. Thanks for discussion that led me to making such a distinction.)\nPossible avenues for future discussion, in the order of my-guess-at-usefulness:\n\nDiscussing anti-naturality of corrigibility. As a starting point: you say that an agent that makes plans but doesn't execute them is also dangerous, because it is the plan itself that lases, and corrigibility is antithetical to lasing. Does this mean you predict that you, or I, with suitably enhanced intelligence and/or reflectivity, would not be capable of producing a plan to help an alien civilization optimize their world, with that plan being corrigible w.r.t the aliens? (This seems like a strange and unlikely position to me, but I don't see how to not make this prediction under what I believe to be your beliefs. Maybe you just bite this bullet.)\nDiscussing why it is very unlikely for the AI system to generalize correctly both on optimization and values-or-goals-that-guide-the-optimization (which seems to be distinct from corrigibility). Or to put it another way, why is \"alignment by default according to John Wentworth\" doomed to fail? https://www.lesswrong.com/posts/Nwgdq6kHke5LY692J/alignment-by-default\nMore checking of where I am failing to pass your ITT\nWhy is \"dialing in on goodness\" not a reasonable part of the solution space (to the extent you believe that)?\nMore concreteness on how optimization generalizes but corrigibility doesn't, in the case where the AI was trained by human judgment on weak-safe domains\nJust to continue to state it so people don't misinterpret me: in most of the cases that we're discussing, my position is not that they are safe, but rather that they are not overwhelmingly likely to be unsafe.\n\n\n\n\n\n[Ngo][3:41]\nI don't understand what you mean by dialling in on goodness. Could you explain how CIRL does this better than, say, reward modelling?\n\n\n\n\n[Shah][3:49]\nReward modeling does not by default (a) choose relevant questions to ask the user in order to get more information about goodness, (b) act conservatively, especially in the face of irreversible actions, while it is still uncertain about what goodness is, or (c) take actions that are known to be robustly good, while still waiting for future information that clarifies the nuances of goodness\nYou could certainly do something like Deep RL from Human Preferences, where the preferences are things like \"I prefer you ask me relevant questions to get more information about goodness\", in order to get similar behavior. 
In this case you are transferring desired behaviors from a human to the AI system, whereas in CIRL the behaviors \"fall out of\" optimization for a specific objective\nIn Eliezer/Nate terms, the CIRL story shows that dialing in on goodness is compatible with \"plans that lase\", whereas reward modeling does not show this\n\n\n\n\n[Ngo][4:04]\nThe meta-level objective that CIRL is pointing to, what makes that thing deserve the name \"goodness\"? Like, if I just gave an alien CIRL, and I said \"this algorithm dials an AI towards a given thing\", and they looked at it without any preconceptions of what the designers wanted to do, why wouldn't they say \"huh, it looks like an algorithm for dialling in on some extrapolation of the unintended consequences of people's behaviour\" or something like that?\nSee also this part of my second discussion with Eliezer, where he brings up CIRL: [https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#3_2__Brain_functions_and_outcome_pumps] He was emphasising that CIRL, and most other proposals for alignment algorithms, just shuffle the problematic consequentialism from the original place to a less visible place. I didn't engage much with this argument because I mostly agree with it.\n\n\n\n\n[Yudkowsky: +1]\n\n\n\n\n\n\n\n\n[Shah][5:28]\nI think you are misunderstanding my point. I am not claiming that we know how to implement CIRL such that it produces good outcomes; I agree this depends a ton on having a sufficiently good P(obs | reward). Similarly, if you gave CIRL to aliens, whether or not they say it is about getting some extrapolation of unintended consequences depends on exactly what P(obs | reward) you ended up using. There is some not-too-complicated P(obs | reward) such that you do end up getting to \"goodness\", or something sufficiently close that it is not an existential catastrophe; I do not claim we know what it is.\nI am claiming that behaviors like (a), (b) and (c) above are compatible with expected utility theory, and thus compatible with \"plans that lase\". This is demonstrated by CIRL. It is not demonstrated by reward modeling, see e.g. these three papers for problems that arise (which make it so that it is working at cross purposes with itself and seems incompatible with \"plans that lase\"). (I'm most confident in the first supporting my point, it's been a long time since I read them so I might be wrong about the others.) To my knowledge, similar problems don't arise with CIRL (and they shouldn't, because it is a nice integrated Bayesian agent doing expected utility theory).\nI could imagine an objection that P(obs | reward), while not as complicated as \"the utility function that rationalizes a twitching robot\", is still too complicated to really show compatibility with plans-that-lase, but pointing out that P(obs | reward) could be misspecified doesn't seem particularly relevant to whether behaviors (a), (b) and (c) are compatible with plans-that-lase.\nRe: shuffling around the problematic consequentialism: it is not my main plan to avoid consequentialism in the sense of plans-that-lase. I broadly agree with Eliezer that you need consequentialism to do high-impact stuff. My plan is for the consequentialism to be aimed at good ends. 
So I agree that there is still consequentialism in CIRL, and I don't see this as a damning point; when I talk about \"dialing in to goodness\", I am thinking of aiming the consequentialism at goodness, not getting rid of consequentialism.\n(You can still do things like try to be domain-specific rather than domain-general; I don't mean to completely exclude such approaches. They do seem to give additional safety. But the mainline story is that the consequentialism / optimization is directed at what we want rather than something else.)\n\n\n\n\n[Ngo][6:21]\nIf you don't know how to implement CIRL in such a way that it actually aims at goodness, then you don't have an algorithm with properties a, b and c above.\nOr, to put it another way: suppose I replace the word \"goodness\" with \"winningness\". Now I can describe AlphaStar as follows:\n\nit choose relevant questions to ask (read: scouts to send) in order to get more information about winningness\nit acts conservatively while it is still uncertain about what winningness is\nit take actions that are known to be robustly good winningish, while still waiting for future information that clarifies the nuances of winningness\n\nNow, you might say that the difference is that CIRL implements uncertainty over possible utility functions, not possible empirical beliefs. But this is just a semantic difference which shuffles the problem around without changing anything substantial. E.g. it's exactly equivalent if we think of CIRL as an agent with a fixed (known) utility function, which just has uncertainty about some empirical parameter related to the humans it interacts with.\n\n\n\n\n[Yudkowsky: +1]\n\n\n\n\n\n\n\n\n[Soares][6:55]\n\n[…] it take actions that are known to be robustly good, while still waiting for future information that clarifies the nuances of winningness\n\n(typo: \"known to be robustly good\" -> \"known to be robustly winningish\" :-p)\n\n\n\n\n[Ngo: ]\n\n\n\n\nSome quick reactions, some from me and some from my model of Eliezer:\n\nEliezer thinks that whatever help they provide is not sufficient […] The crux here seems to be alignment difficulty.\n\nI'd be more hesitant to declare the crux \"alignment difficulty\". My understanding of Eliezer's position on your \"use AI to help with alignment\" proposals (which focus on things like using AI to make paradigmatic AI systems more transparent) is \"that was always the plan, and it doesn't address the sort of problems I'm worried about\". Maybe you understand the problems Eliezer's worried about, and believe them not to be very difficult to overcome, thus putting the crux somewhere like \"alignment difficulty\", but I'm not convinced. \nI'd update towards your crux-hypothesis if you provided a good-according-to-Eliezer summary of what other problems Eliezer sees and the reasons-according-to-Eliezer that \"AI make our tensors more transparent\" doesn't much address them.\n\nCorrigibility_A […] Corrigibility_B […]\n\nOf the two Corrigibility_B does sound a little closer to my concept, though neither of your descriptions cause me to be confident that communication has occurred. Throwing some checksums out there:\n\nThere are three reasons a young weak AI system might accept your corrections. 
It could be corrigible, or it could be incorrigibly pursuing goodness, or it could be incorrigibly pursuing some other goal while calculating that accepting this correction is better according to its current goals than risking a shutdown.\nOne way you can tell that CIRL is not corrigible is that it does not accept corrections when old and strong.\nThere's an intuitive notion of \"you're here to help us implement a messy and fragile concept not yet clearly known to us; work with us here?\" that makes sense to humans, that includes as a side effect things like \"don't scan my brain and then disregard my objections; there could be flaws in how you're inferring my preferences from my objections; it's actually quite important that you be cautious and accept brain surgery even in cases where your updated model says we're about to make a big mistake according to our own preferences\".\n\n\nThe point of the 2015 MIRI paper was to check whether it is possible to build a version of corrigibility_B that was compatible with expected utility maximization with a not-terribly-complicated utility function; the point of this was to see whether corrigibility could be made compatible with \"plans that lase\".\n\nMore like:\n\nCorrigibility seems, at least on the surface, to be in tension with the simple and useful patterns of optimization that tend to be spotlit by demands for cross-domain success, similar to how acting like two oranges are worth one apple and one apple is worth one orange is in tension with those patterns.\nIn practice, this tension seems to run more than surface-deep. In particular, various attempts to reconcile the tension fail, and cause the AI to have undesirable preferences (eg, incentives to convince you to shut it down whenever its utility is suboptimal), exploitably bad beliefs (eg, willingness to bet at unreasonable odds that it won't be shut down), and/or to not be corrigible in the first place (eg, a preference for destructively uploading your mind against your protests, at which point further protests from your coworkers are screened off by its access to that upload).\n\n\n\n\n\n[Yudkowsky: ]\n\n\n\n\n(There's an argument I occasionally see floating around these parts that goes \"ok, well what if the AI is fractally corrigible, in the sense that instead of its cognition being oriented around pursuit of some goal, its cognition is oriented around doing what it predicts a human would do (or what a human would want it to do) in a corrigible way, at every level and step of its cognition\". This is perhaps where you perceive a gap between your A-type and B-type notions, where MIRI folk tend to be more interested in reconciling the tension between corrigibility and coherence, and Paulian folk tend to place more of their chips on some such fractal notion? \nI admit I don't find much hope in the \"fractally corrigible\" view myself, and I'm not sure whether I could pass a proponent's ITT, but fwiw my model of the Yudkowskian rejoinder is \"mindspace is deep and wide; that could plausibly be done if you had sufficient mastery of minds; you're not going to get anywhere near close to that in practice, because of the way that basic normal everyday cross-domain training will highlight patterns that you'd call orienting-cognition-around-a-goal\".)\nAnd my super-quick takes on your avenues for future discussion:\n\n1. Discussing anti-naturality of corrigibility.\n\nHopefully the above helps.\n\n2. 
Discussing why it is very unlikely for the AI system to generalize correctly both on optimization and values-or-goals-that-guide-the-optimization\n\nThe concept \"patterns of thought that are useful for cross-domain success\" is latent in the problems the AI faces, and known to have various simple mathematical shadows, and our training is more-or-less banging the AI over the head with it day in and day out. By contrast, the specific values we wish to be pursued are not latent in the problems, are known to lack a simple boundary, and our training is much further removed from it.\n\n3. More checking of where I am failing to pass your ITT\n\n+1\n\n4. Why is \"dialing in on goodness\" not a reasonable part of the solution space?\n\nIt has long been the plan to say something less like \"the following list comprises goodness: …\" and more like \"yo we're tryin to optimize some difficult-to-name concept; help us out?\". \"Find a prior that, with observation of the human operators, dials in on goodness\" is a fine guess at how to formalize the latter. \nIf we had been planning to take the former tack, and you had come in suggesting CIRL, that might have helped us switch to the latter tack, which would have been cool. In that sense, it's a fine part of the solution. \nIt also provides some additional formality, which is another iota of potential solution-ness, for that part of the problem. \nIt doesn't much address the rest of the problem, which is centered much more around \"how do you point powerful cognition in any direction at all\" (such as towards your chosen utility function or prior thereover).\n\n5. More concreteness on how optimization generalizes but corrigibility doesn't, in the case where the AI was trained by human judgment on weak-safe domains\n\n+1\n\n\n\n\n[Shah][13:23]\n\nIf you don't know how to implement CIRL in such a way that it actually aims at goodness, then you don't have an algorithm with properties a, b and c above.\n\nI want clarity on the premise here:\n\nIs the premise \"Rohin cannot write code that when run exhibits properties a, b, and c\"? If so, I totally agree, but I'm not sure what the point is. All alignment work ever until the very last step will not lead you to writing code that when run exhibits an aligned superintelligence, but this does not mean that the prior alignment work was useless.\nIs the premise \"there does not exist code that (1) we would call an implementation of CIRL and (2) when run has properties a, b, and c\"? If so, I think your premise is false, for the reasons given previously (I can repeat them if needed)\n\nI imagine it is neither of the above, and you are trying to make a claim that some conclusion that I am drawing from or about CIRL is invalid, because in order for me to draw that conclusion, I need to exhibit the correct P(obs | reward). If so, I want to know which conclusion is invalid and why I have to exhibit the correct P(obs | reward) before I can reach that conclusion.\nI agree that the fact that you can get properties (a), (b) and (c) are simple straightforward consequences of being Bayesian about a quantity you are uncertain about and care about, as with AlphaStar and \"winningness\". I don't know what you intend to imply by this — because it also applies to other Bayesian things, it can't imply anything about alignment? I also agree the uncertainty over reward is equivalent to uncertainty over some parameter of the human (and have proved this theorem myself in the paper I wrote on the topic). 
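As a toy illustration of the claim being made in this exchange (that asking questions, acting conservatively, and taking robustly-okay actions can all fall out of ordinary expected-utility maximization once the agent is uncertain which reward function is the true one), here is a minimal sketch. It is not CIRL as published, and it says nothing about how to find a correct P(obs | reward): the two reward hypotheses, the "ask the human, who answers truthfully" observation model, and all of the numbers are invented for illustration.

```python
# Toy sketch only: an expected-utility maximizer that is uncertain which of two
# hypothetical reward functions is the "true" one. All names and numbers are
# made up for illustration; this is not CIRL as published.

REWARD_HYPOTHESES = {
    # Mistakes are costlier than successes are valuable, so gambling is bad.
    "human_likes_red":  {"red": 1.0, "blue": -2.0, "wait": 0.0},
    "human_likes_blue": {"red": -2.0, "blue": 1.0, "wait": 0.0},
}
OBJECT_ACTIONS = ["red", "blue", "wait"]
QUERY_COST = -0.05  # small cost for asking the human a clarifying question

def expected_reward(action, belief):
    """Expected reward of an object-level action under the belief over hypotheses."""
    return sum(p * REWARD_HYPOTHESES[h][action] for h, p in belief.items())

def action_value(action, belief):
    if action != "ask":
        return expected_reward(action, belief)
    # Value of asking = query cost + expected value of acting optimally after a
    # (hypothetically truthful) answer resolves which hypothesis is correct.
    return QUERY_COST + sum(
        p * max(REWARD_HYPOTHESES[h][a] for a in OBJECT_ACTIONS)
        for h, p in belief.items()
    )

def best_action(belief):
    return max(OBJECT_ACTIONS + ["ask"], key=lambda a: action_value(a, belief))

if __name__ == "__main__":
    print(best_action({"human_likes_red": 0.5, "human_likes_blue": 0.5}))    # "ask"
    print(best_action({"human_likes_red": 0.99, "human_likes_blue": 0.01}))  # "red"
```

With a 50/50 belief, waiting beats gambling on "red" versus "blue", and paying the small query cost beats both; once the belief is concentrated, the agent simply acts. The only point is that behaviors like (a), (b), and (c) are compatible with expected utility maximization; nothing about this toy bears on whether the observation model is the right one.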
I do not claim that anything in here is particularly non-obvious or clever, in case anyone thought I was making that claim.\nTo state it again, my claim is that behaviors like (a), (b) and (c) are consistent with \"plans-that-lase\", and as evidence for this claim I cite the existence of an expected-utility-maximizing algorithm that displays them, specifically CIRL with the correct p(obs | reward). I do not claim that I can write down the code, I am just claiming that it exists. If you agree with the claim but not the evidence then let's just drop the point. If you disagree with the claim then tell me why it's false. If you are unsure about the claim then point to the step in the argument you think doesn't work.\nThe reason I care about this claim is that even if you think that superintelligences only involve plans-that-lase, it seems to me like this does not rule out what we might call \"dialing in to goodness\" or \"assisting the user\", and thus it seems like this is a valid target for you to try to get your superintelligence to do.\nI suspect that I do not agree with Eliezer about what plans-that-lase can do, but it seems like the two of us should at least agree that behaviors like (a), (b) and (c) can be exhibited in plans-that-lase, and if we don't agree on that some sort of miscommunication has happened.\n \n\nThrowing some checksums out there\n\nThe checksums definitely make sense. (Technically I could name more reasons why a young AI might accept correction, such as \"it's still sphexish in some areas, accepting corrections is one of those reasons\", and for the third reason the AI could be calculating negative consequences for things other than shutdown, but that seems nitpicky and I don't think it means I have misunderstood you.) \nI think the third one feels somewhat slippery and vague, in that I don't know exactly what it's claiming, but it clearly seems to be the same sort of thing as corrigibility. Mostly it's more like I wouldn't be surprised if the Textbook from the Future tells us that we mostly had the right concept of corrigibility, but that third checksum is not quite how they would describe it any more. I would be a lot more surprised if the Textbook says we mostly had the right concept but then says checksums 1 and 2 were misguided.\n\n\"The point of the 2015 MIRI paper was to check whether it is possible to build a version of corrigibility_B that was compatible with expected utility maximization with a not-terribly-complicated utility function; the point of this was to see whether corrigibility could be made compatible with 'plans that lase'.\"\nMore like:\n\nCorrigibility seems, at least on the surface, to be in tension with the simple and useful patterns of optimization that tend to be spotlit by demands for cross-domain success, similar to how acting like two oranges are worth one apple and one apple is worth one orange is in tension with those patterns.\nIn practice, this tension seems to run more than surface-deep. 
In particular, various attempts to reconcile the tension fail, and cause the AI to have undesirable preferences (eg, incentives to convince you to shut it down whenever its utility is suboptimal), exploitably bad beliefs (eg, willingness to bet at unreasonable odds that it won't be shut down), and/or to not be corrigible in the first place (eg, a preference for destructively uploading your mind against your protests, at which point further protests from your coworkers are screened off by its access to that upload).\n\n\nOn the 2015 Corrigibility paper, is this an accurate summary: \"it wasn't that we were checking whether corrigibility could be compatible with useful patterns of optimization; it was already obvious at least at a surface level that corrigibility was in tension with these patterns, and we wanted to check and/or show that this tension persisted more deeply and couldn't be easily fixed\".\n(My other main hypothesis is that there's an important distinction between \"simple and useful patterns of optimization\" (term in your message) and \"plans that lase\" (term in my message) but if so I don't know what it is.)\n\n\n\n\n[Soares][13:52]\nWhat we wanted to do was show that the apparent tension was merely superficial. We failed.\n\n\n\n\n[Shah: ]\n\n\n\n\n(Also, IIRC — and it's been a long time since I checked — the 2015 paper contains only one exploration, relating to an idea of Stuart Armstrong's. There were another host of ideas raised and shot down in that era, that didn't make it into that paper, pro'lly b/c they came afterwards.)\n\n\n\n\n[Shah][13:55]\n\nWhat we wanted to do was show that the apparent tension was merely superficial. We failed.\n\n(That sounds like what I originally said? I'm a bit confused why you didn't just agree with my original phrasing:\n\nThe point of the 2015 MIRI paper was to check whether it is possible to build a version of corrigibility_B that was compatible with expected utility maximization with a not-terribly-complicated utility function; the point of this was to see whether corrigibility could be made compatible with \"plans that lase\".\n\n)\n(I'm kinda worried that there's some big distinction between \"EU maximization\", \"plans that lase\", and \"simple and useful patterns of optimization\", that I'm not getting; I'm treating them as roughly equivalent at the moment when putting on my MIRI-ontology-hat.)\n\n\n\n\n[Soares][14:01]\n(There are a bunch of aspects of your phrasing that indicated to me a different framing, and one I find quite foreign. For instance, this talk of \"building a version of corrigibility_B\" strikes me as foreign, and the talk of \"making it compatible with 'plans that lase'\" strikes me as foreign. It's plausible to me that you, who understand your original framing, can tell that my rephrasing matches your original intent. 
I do not yet feel like I could emit the description you emitted without contorting my thoughts about corrigibility in foreign ways, and I'm not sure whether that's an indication that there are distinctions, important to me, that I haven't communicated.)\n\n(I'm kinda worried that there's some big distinction between \"EU maximization\", \"plans that lase\", and \"simple and useful patterns of optimization\", that I'm not getting; I'm treating them as roughly equivalent at the moment when putting on my MIRI-ontology-hat.)\n\nI, too, believe them to be basically equivalent (with the caveat that the reason for using expanded phrasings is because people have a history of misunderstanding \"utility maximization\" and \"coherence\", and so insofar as you round them all to \"coherence\" and then argue against some very narrow interpretation of coherence, I'm gonna protest that you're bailey-and-motting).\n\n\n\n\n[Shah: ]\n\n\n\n\n\n\n\n\n[Shah][14:12]\n\nHopefully the above helps.\n\nI'm still interested in the question \"Does this mean you predict that you, or I, with suitably enhanced intelligence and/or reflectivity, would not be capable of producing a plan to help an alien civilization optimize their world, with that plan being corrigible w.r.t the aliens?\" I don't currently understand how you avoid making this prediction given other stated beliefs. (Maybe you just bite the bullet and do predict this?)\n\nBy contrast, the specific values we wish to be pursued are not latent in the problems, are known to lack a simple boundary, and our training is much further removed from it.\n\nI'm not totally sure what is meant by \"simple boundary\", but it seems like a lot of human values are latent in text prediction on the Internet, and when training from human feedback the training is not very removed from values.\n\nIt has long been the plan to say something less like \"the following list comprises goodness: …\" and more like \"yo we're tryin to optimize some difficult-to-name concept; help us out?\". […]\n\nI take this to mean that \"dialing in on goodness\" is a reasonable part of the solution space? If so, I retract that question. I thought from previous comments that Eliezer thought this part of solution space was more doomed than corrigibility.\n(I get the sense that people think that I am butthurt about CIRL not getting enough recognition or something. I do in fact think this, but it's not part of my agenda here. I originally brought it up to make the argument that corrigibility is not in tension with EU maximization, then realized that I was mistaken about what \"corrigibility\" meant, but still care about the argument that \"dialing in on goodness\" is not in tension with EU maximization. But if we agree on that claim then I'm happy to stop talking about CIRL.)\n\n\n\n\n[Soares][14:13]\nI'd be capable of helping aliens optimize their world, sure. I wouldn't be motivated to, but I'd be capable.\n\n\n\n\n[Shah][14:14]\n\n(There are a bunch of aspects of your phrasing that indicated to me a different framing, and one I find quite foreign. For instance, this talk of \"building a version of corrigibility_B\" strikes me as foreign, and the talk of \"making it compatible with 'plans that lase'\" strikes me as foreign. It's plausible to me that you, who understand your original framing, can tell that my rephrasing matches your original intent. 
I do not yet feel like I could emit the description you emitted without contorting my thoughts about corrigibility in foreign ways, and I'm not sure whether that's an indication that there are distinctions, important to me, that I haven't communicated.)\n\nThis makes sense. I guess you might think of these concepts as quite pinned down? Like, in your head, EU maximization is just a kind of behavior (= set of behaviors), corrigibility is just another kind of behavior (= set of behaviors), and there's a straightforward yes-or-no question about whether the intersection is empty which you set out to answer, you can't \"make\" it come out one way or the other, nor can you \"build\" a new kind of corrigibility\n\n\n\n\n[Soares][14:17]\nRe: CIRL, my current working hypothesis is that by \"use CIRL\" you mean something analogous to what I say when I say \"do CEV\" — namely, direct the AI to figure out what we \"really\" want in some correct sense, rather than attempting to specify what we want concretely. And to be clear, on my model, this is part of the solution to the overall alignment problem, and it's more-or-less why we wouldn't die immediately on the \"value is fragile / we can't name exactly what we want\" step if we solved the other problems.\nMy guess as to the disagreement about how much credit CIRL should get, is that there is in fact a disagreement, but it's not coming from MIRI folk saying \"no we should be specifying the actual utility function by hand\", it's coming from MIRI folk saying \"this is just the advice 'do CEV' dressed up in different clothing and presented as a reason to stop worrying about corrigibility, which is irritating, given that it's orthogonal to corrigibility\".\nIf you wanna fight that fight, I'd start by asking: Do you think CIRL is doing anything above and beyond what \"use CEV\" is doing? If so, what?\nRegardless, I think it might be a good idea for you to try to pass my (or Eliezer's) ITT about what parts of the problem remain beyond the thing I'd call \"do CEV\" and why they're hard. (Not least b/c if my working hypothesis is wrong, demonstrating your mastery of that subject might prevent a bunch of toil covering ground you already know.)\n\n\n\n\n[Shah][14:17]\n\nI'd be capable of helping aliens optimize their world, sure. I wouldn't be motivated to, but I'd be capable.\n\nOkay, so it seems like the danger requires the thing-producing-the-plan to be badly-motivated. But then I'm not sure why it seems so impossible to have a (not-badly-motivated) thing that, when given a goal, produces a plan to corrigibly get that goal. (This is a scenario Richard mentioned earlier.)\n\n\n\n\n[Soares][14:19]\n\nThis makes sense. I guess you might think of these concepts as quite pinned down? Like, in your head, EU maximization is just a kind of behavior (= set of behaviors), corrigibility is just another kind of behavior (= set of behaviors), and there's a straightforward yes-or-no question about whether the intersection is empty which you set out to answer, you can't \"make\" it come out one way or the other, nor can you \"build\" a new kind of corrigibility\n\nThat sounds like one of the big directions in which your framing felt off to me, yeah :-). (I don't fully endorse that rephrasing, but it seems directionally correct to me.)\n\nOkay, so it seems like the danger requires the thing-producing-the-plan to be badly-motivated. 
But then I'm not sure why it seems so impossible to have a (not-badly-motivated) thing that, when given a goal, produces a plan to corrigibly get that goal. (This is a scenario Richard mentioned earlier.)\n\nOn my model, aiming the powerful optimizer is the hard bit.\nLike, once I grant \"there's a powerful optimizer, and all it does is produce plans to corrigibly attain a given goal\", I agree that the problem is mostly solved.\nThere's maybe some cleanup, but the bulk of the alignment challenge preceded that point.\n\n\n\n\n[Shah: ]\n\n\n\n\n(This is hard for all the usual reasons, that I suppose I could retread.)\n\n\n\n\n[Shah][14:24]\n\n[…] Regardless, I think it might be a good idea for you to try to pass my (or Eliezer's) ITT about what parts of the problem remain beyond the thing I'd call \"do CEV\" and why they're hard. (Not least b/c if my working hypothesis is wrong, demonstrating your mastery of that subject might prevent a bunch of toil covering ground you already know.)\n\n(Working on ITT)\n\n\n\n\n[Soares][14:30]\n(To clarify some points of mine, in case this gets published later to other readers: (1) I might call it more centrally something like \"build a DWIM system\" rather than \"use CEV\"; and (2) this is not advice about what your civilization should do with early AGI systems, I strongly recommend against trying to pull off CEV under that kind of pressure.)\n\n\n\n\n[Shah][14:32]\nI don't particularly want to have fights about credit. I just didn't want to falsely state that I do not care about how much credit CIRL gets, when attempting to head off further comments that seemed designed to appease my sense of not-enough-credit. (I'm also not particularly annoyed at MIRI, here.)\nOn passing ITT, about what's left beyond \"use CEV\" (stated in my ontology because it's faster to type; I think you'll understand, but I can also translate if you think that's important):\n\nThe main thing is simply how to actually get the AI system to care about pursuing CEV. I think MIRI ontology would call this the target loading problem.\nThis is hard because (a) you can't just train on CEV, because you can't just implement CEV and provide that as training and (b) even if you magically could train on CEV, that does not establish that the resulting AI system then wants to optimize CEV. It could just as well optimize some other objective that correlated with CEV in the situations you trained, but no longer correlates in some new situation (like when you are building a nanosystem). (Point (b) is how I would talk about inner alignment.)\nThis is made harder for a variety of reasons, including (a) you're working with inscrutable matrices that you can't look at the details of, (b) there are clear racing incentives when the prize is to take over the world (or even just lots of economic profit), (c) people are unlikely to understand the issues at stake (unclear to me of the exact reasons, I'd guess it would be that the issues are too subtle / conceptual, + pressure to rationalize it away), (d) there's very little time in which we have a good understanding of the situation we face, because of fast / discontinuous takeoff\n\n\n\n\n\n[Soares: ]\n\n\n\n\n\n\n\n\n[Soares][14:37]\nPassable ^_^ (Not exhaustive, obviously; \"it will have a tendency to kill you on the first real try if you get it wrong\" being an example missing piece, but I doubt you were trying to be exhaustive.) 
Thanks.\n\n\n\n\n[Shah: ]\n\n\n\n\n\nOkay, so it seems like the danger requires the thing-producing-the-plan to be badly-motivated. But then I'm not sure why it seems so impossible to have a (not-badly-motivated) thing that, when given a goal, produces a plan to corrigibly get that goal. (This is a scenario Richard mentioned earlier.)\n\nI'm uncertain where the disconnect is here. Like, I could repeat some things from past discussions about how \"it only outputs plans, it doesn't execute them\" does very little (not nothing, but very little) from my perspective? Or you could try to point at past things you'd expect me to repeat and name why they don't seem to apply to you?\n\n\n\n\n[Shah][14:40]\n(Flagging that I should go to bed soon, though it doesn't have to be right away)\n\n\n\n\n[Yudkowsky][14:50]\n…I do not know if this is going to help anything, but I have a feeling that there's a frequent disconnect wherein I invented an idea, considered it, found it necessary-but-not-sufficient, and moved on to looking for additional or varying solutions, and then a decade or in this case 2 decades later, somebody comes along and sees this brilliant solution which MIRI is for some reason neglecting\nthis is perhaps exacerbated by a deliberate decision during the early days, when I looked very weird and the field was much more allergic to weird, to not even try to stamp my name on all the things I invented.  eg, I told Nick Bostrom to please use various of my ideas as he found appropriate and only credit them if he thought that was strategically wise.\nI expect that some number of people now in the field don't know I invented corrigibility, and any number of other things that I'm a little more hesitant to claim here because I didn't leave Facebook trails for inventing them\nand unless you had been around for quite a while, you definitely wouldn't know that I had been (so far as I know) the first person to perform the unexceptional-to-me feat of writing down, in 2001, the very obvious idea I called \"external reference semantics\", or as it's called nowadays, CIRL\n\n\n\n[Shah][14:53]\nI really honestly am not trying to say that MIRI didn't think of CIRL-like things, nor am I trying to get credit for CIRL. I really just wanted to establish that \"learn what is good to do\" seems not-ruled-out by EU maximization. That's all. It sounds like we agree on this point and if so I'd prefer to drop it.\n\n\n\n\n[Soares: ]\n\n\n\n\n\n\n\n\n[Yudkowsky][14:53]\nHaving a prior over utility functions that gets updated by evidence is not ruled out by EU maximization.  That exact thing is hard for other reasons than it being contrary to the nature of EU maximization.\nIf it was ruled out by EU maximization for any simple reason, I would have noticed that back in 2001.\n\n\n\n[Ngo][14:54]\nI think we all agree on this point.\n\n\n\n\n[Shah: ]\n[Soares: ]\n\n\n\n\nOne thing I'd note is that during my debate with Eliezer, I'd keep saying \"oh so you think X is impossible\" and he'd say \"no, all these things are possible, they're just really really hard\".\n\n\n\n\n[Yudkowsky][14:58]\n…to do correctly on your first try when a failed attempt kills you.\n\n\n\n[Shah][14:58]\nMaybe it's fine; perhaps the point is just that target loading is hard, and the question is why target loading is so hard.\nFrom my perspective, the main confusing thing about the Eliezer/Nate view is how confident it is. 
With each individual piece, I (usually) find myself nodding along and saying \"yes, it seems like if we wanted to guarantee safety, we would need to solve this\". What I don't do is say \"yes, it seems like without a solution to this, we're near-certainly dead\". The uncharitable view (which I share mainly to emphasize where the disconnect is, not because I think it is true) would be something like \"Eliezer/Nate are falling to a Murphy bias, where they assume that unless they have an ironclad positive argument for safety, the worst possible thing will happen and we all die\". I try to generate things that seem more like ironclad (or at least \"leatherclad\") positive arguments for doom, and mostly don't succeed; when I say \"human values are very complicated\" there's the rejoinder that \"a superintelligence will certainly know about human values; pointing at them shouldn't take that many more bits\"; when I say \"this is ultimately just praying for generalization\", there's the rejoinder \"but it may in fact actually generalize\"; add to all of this the fact that a bunch of people will be trying to prevent the problem and it seems weird to be so confident in doom.\nA lot of my questions are going to be of the form \"it seems like this is a way that we could survive; it definitely involves luck and does not say good things about our civilization, but it does not seem as improbable as the word 'miracle' would imply\"\n\n\n\n\n[Yudkowsky][15:00]\nheh.  from my standpoint, I'd say of this that it reflects those old experiments where if you ask people for their \"expected case\" it's indistinguishable from their \"best case\" (since both of these involve visualizing various things going on their imaginative mainline, which is to say, as planned) and reality is usually worse than their \"worst case\" (because they didn't adjust far enough away from their best-case anchor towards the statistical distribution for actual reality when they were trying to imagine a few failures and disappointments of the sort that reality had previously delivered)\nit rhymes with the observation that it's incredibly hard to find people – even inside the field of computer security – who really have what Bruce Schneier termed the security mindset, of asking how to break a cryptography scheme, instead of imagining how your cryptography scheme could succeed\nfrom my perspective, people are just living in a fantasy reality which, if we were actually living in it, would not be full of failed software projects or rocket prototypes that blow up even after you try quite hard to get a system design about which you made a strong prediction that it wouldn't explode\nthey think something special has to go wrong with a rocket design, that you must have committed some grave unusual sin against rocketry, for the rocket to explode\nas opposed to every rocket wanting really strongly to explode and needing to constrain every aspect of the system to make it not explode and then the first 4 times you launch it, it blows up anyways\nwhy? because of some particular technical issue with O-rings, with the flexibility of rubber in cold weather?\n\n\n\n[Shah][15:05]\n(I have read your Rocket Alignment and security mindset posts. 
Not claiming this absolves me of bias, just saying that I am familiar with them)\n\n\n\n\n[Yudkowsky][15:05]\nno, because the strains and temperatures in rockets are large compared to the materials that we use to make up the rockets\nthe fact that sometimes people are wrong in their uncertain guesses about rocketry does not make their life easier in this regard\nthe less they understand, the less ability they have to force an outcome within reality\nit's no coincidence that when you are Wrong about your rocket, the particular form of Being Wrong that reality delivers to you as a surprise message, is not that you underestimated the strength of steel and so your rocket went to orbit and came back with fewer scratches on the hull than expected\nwhen you are working with powerful forces there is not a symmetry around pleasant and unpleasant surprises being equally likely relative to your first-order model.  if you're a good Bayesian, they will be equally likely relative to your second-order model, but this requires you to be HELLA pessimistic, indeed, SO PESSIMISTIC that sometimes you are pleasantly surprised\nwhich looks like such a bizarre thing to a mundane human that they will gather around and remark at the case of you being pleasantly surprised\nthey will not be used to seeing this\nand they shall say to themselves, \"haha, what pessimists\"\nbecause to be unpleasantly surprised is so ordinary that they do not bother to gather and gossip about it when it happens\nmy fundamental sense about the other parties in this debate, underneath all the technical particulars, is that they've constructed a Murphy-free fantasy world from the same fabric that weaves crazy optimistic software project estimates and brilliant cryptographic codes whose inventors didn't quite try to break them, and are waiting to go through that very common human process of trying out their optimistic idea, letting reality gently correct them, predictably becoming older and wiser and starting to see the true scope of the problem, and so in due time becoming one of those Pessimists who tell the youngsters how ha ha of course things are not that easy\nthis is how the cycle usually goes\nthe problem is that instead of somebody's first startup failing and them then becoming much more pessimistic about lots of things they thought were easy and then doing their second startup\nthe part where they go ahead optimistically and learn the hard way about things in their chosen field which aren't as easy as they hoped\n\n\n\n[Shah][15:13]\nDo you want to bet on that? That seems like a testable prediction about beliefs of real people in the not-too-distant future\n\n\n\n\n[Yudkowsky][15:13]\nkills everyone\nnot just them\neveryone\nthis is an issue\nhow on Earth would we bet on that if you think the bet hasn't already resolved? 
I'm describing the attitudes of people that I see right now today.\n\n\n\n[Shah][15:15]\nNever mind, I wanted to bet on \"people becoming more pessimistic as they try ideas and see them fail\", but if your idea of \"see them fail\" is \"superintelligence kills everyone\" then obviously we can't bet on that\n(people here being alignment researchers, obviously ones who are not me)\n\n\n\n\n[Yudkowsky][15:17]\nthere is some element here of the Bayesian not updating in a predictable direction, of executing today the update you know you'll make later, of saying, \"ah yes, I can see that I am in the same sort of situation as the early AI pioneers who thought maybe it would take a summer and actually it was several decades because Things Were Not As Easy As They Imagined, so instead of waiting for reality to correct me, I will imagine myself having already lived through that and go ahead and be more pessimistic right now, not just a little more pessimistic, but so incredibly pessimistic that I am as likely to be pleasantly surprised as unpleasantly surprised by each successive observation, which is even more pessimism than even some sad old veterans manage\", an element of genre-savviness, an element of knowing the advice that somebody would predictably be shouting at you from outside, of not just blindly enacting the plot you were handed\nand I don't quite know why this is so much less common than I would have naively thought it would be\nwhy people are content with enacting the predictable plot where they start out cheerful today and get some hard lessons and become pessimistic later\nthey are their own scriptwriters, and they write scripts for themselves about going into the haunted house and then splitting up the party\nI would not have thought that to defy the plot was such a difficult thing for an actual human being to do\nthat it would require so much reflectivity or something, I don't know what else\nnor do I know how to train other people to do it if they are not doing it already\nbut that from my perspective is the basic difference in gloominess\nI am a time-traveler who came back from the world where it (super duper predictably) turned out that a lot of early bright hopes didn't pan out and various things went WRONG and alignment was HARD and it was NOT SOLVED IN ONE SUMMER BY TEN SMART RESEARCHERS\nand now I am trying to warn people about this development which was, from a certain perspective, really quite obvious and not at all difficult to see coming\nbut people are like, \"what the heck are you doing, you are enacting the wrong part of the plot, people are currently supposed to be cheerful, you can't prove that anything will go wrong, why would I turn into a grizzled veteran before the part of the plot where reality hits me over the head with the awful real scope of the problem and shows me that my early bright ideas were way too optimistic and naive\"\nand I'm like \"no you don't get it, where I come from, everybody died and didn't turn into grizzled veterans\"\nand they're like \"but that's not what the script says we do next\"… or something, I do not know what leads people to think like this because I do not think like that myself\n\n\n\n[Soares][15:24]\n(I think what they actually do is say \"it's not obvious to me that this is one of those scenarios where we become grizzled veterans, as opposed to things just actually working out easily\")\n(\"many things work out easily all the time; obviously society spends a bunch more focus on things that don't work out easily b/c the 
things that work easily tend to get resolved fairly quickly and then you don't notice them\", or something)\n(more generally, I kinda suspect that bickering closer to the object level is likely more productive)\n(and i suspect this convo might be aided by Rohin naming a concrete scenario where things go well, so that Eliezer can lament the lack of genre saviness in various specific points)\n\n\n\n\n[Yudkowsky][15:26]\nthere are, of course, lots of more local technical issues where I can specifically predict the failure mode for somebody's bright-eyed naive idea, especially when I already invented a more sophisticated version a decade or two earlier, and this is what I've usually tried to discuss\n\n\n\n\n[Soares: ]\n\n\n\n\nbecause conversations like that can sometimes make any progress\n\n\n\n[Soares][15:26]\n(and possibly also Eliezer naming a concrete story where things go poorly, so that Rohin may lament the seemingly blind pessimism & premature grizzledness)\n\n\n\n\n[Yudkowsky][15:27]\nwhereas if somebody lacks the ability to see the warning signs of which genre they are in, I do not know how to change the way they are by talking at them\n\n\n\n[Shah][15:28]\nUnsurprisingly I have disagreements with the meta-level story, but it seems really thorny to make progress on and I'm kinda inclined to not discuss it. I also should go to sleep now.\nOne thing it did make me think of — it's possible that the \"do it correctly on your first try when a failed attempt kills you\" could be the crux here. There's a clearly-true sense which is \"the first time you build a superintelligence that you cannot control, if you have failed in your alignment, then you die\". There's a different sense which is \"and also, anything you try to do with non-superintelligences that you can control, will tell you approximately nothing about the situation you face when you build a superintelligence\". I mostly don't agree with the second sense, but if Eliezer / Nate do agree with it, that would go a long way to explaining the confidence in doom.\nTwo arguments I can see for the second sense: (1) the non-superintelligences only seem to respond well to alignment schemes because they don't yet have the core of general intelligence, and (2) the non-superintelligences only seem to respond well to alignment schemes because despite being misaligned they are doing what we want in order to survive and later execute a treacherous turn. EDIT: And (3) fast takeoff = not much time to look at the closest non-dangerous examples\n(I still should sleep, but would be interested in seeing thoughts tomorrow, and if enough people think it's actually worthwhile to engage on the meta level I can do that. 
I'm cheerful about engaging on specific object-level ideas.)\n\n\n\n\n[Soares: ]\n\n\n\n\n\n\n\n\n[Yudkowsky][15:28]\nit's not that early failures tell you nothing\nthe failure of the 1955 Dartmouth Project to produce strong AI over a summer told those researchers something\nit told them the problem was harder than they'd hoped on the first shot\nit didn't show them the correct way to build AGI in 1957 instead\n\n\n\n[Bensinger][16:41]\nLinking to a chat log between Eliezer and some anonymous people (and Steve Omohundro) from early September: [https://www.lesswrong.com/posts/CpvyhFy9WvCNsifkY/discussion-with-eliezer-yudkowsky-on-agi-interventions]\nEliezer tells me he thinks it pokes at some of Rohin's questions\n\n\n\n\n[Yudkowsky][16:48]\nI'm not sure that I can successfully, at this point, go back up and usefully reply to the text that scrolled past – I also note some internal grinding about this having turned into a thing which has Pending Replies instead of Scheduled Work Hours – and this maybe means that in the future we shouldn't have such a general chat here, which I didn't anticipate before the fact.  I shall nonetheless try to pick out some things and reply to them.\n\n\n\n\n[Shah: ]\n\n\n\n\n\n\nWhile I think people agree on the behaviors of corrigibility, I am not sure they agree on why we want it. Eliezer wants it for surviving failures, but maybe others want it for \"dialing in on goodness\". When I think about a \"broad basin of corrigibility\", that intuitively seems more compatible with the \"dialing in on goodness\" framing (but this is an aesthetic judgment that could easily be wrong).\n\n\nThis is a weird thing to say in my own ontology.\nThere's a general project of AGI alignment where you try to do some useful pivotal thing, which has to be powerful enough to be pivotal, and so you somehow need a system that thinks powerful thoughts in the right direction without it killing you.\nThis could include, for example:\n\nTrying to train in \"low impact\" via an RL loss function that penalizes a sufficiently broad range of \"impacts\" that we hope the learned impact penalty generalizes to all the things we'd consider impacts – even as we scale up the system, without the sort of obvious pathologies that would materialize only over options available to sufficiently powerful systems, like sending out nanosystems to erase the visibility of its actions from human observers\nTweaking MCTS search code so that it behaves in the fashion of \"mild optimization\" or \"taskishness\" instead of searching as hard as it has power available to search\nExposing the system to lots of labeled examples of relatively simple and safe instructions being obeyed, hoping that it generalizes safe instruction-following to regimes too dangerous for us to inspect outputs and label results\nWriting code that tries to recognize cases of activation vectors going outside the bounds they occupied during training, as a check on whether internal cognitive conservatism is being violated or something is seeking out adversarial counterexamples to a constraint\n\nYou could say that only parts 1 and 3 are \"dialing in on goodness\" because only those parts involve iteratively refining a target, or you could say that all 4 parts are \"dialing in on goodness\" because parts 2 and 4 help you stay alive while you're doing the iterative refining.  But I don't see this distinction as fundamental or particularly helpful.  
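For concreteness on the simplest hardcoded form of "part 4" in the list above (flag any activation vector that leaves the per-dimension range it occupied during training), here is a minimal sketch; the min/max-plus-tolerance rule and the numbers are arbitrary illustrative choices, not something proposed in the discussion.

```python
# Deliberately simple, hardcoded sketch: record the per-dimension range each
# activation occupied during training, then flag any later activation vector
# that leaves those bounds by more than a made-up tolerance.

def fit_bounds(training_activations):
    """training_activations: a list of equal-length lists of floats."""
    dims = len(training_activations[0])
    lows = [min(a[i] for a in training_activations) for i in range(dims)]
    highs = [max(a[i] for a in training_activations) for i in range(dims)]
    return lows, highs

def out_of_bounds(activation, lows, highs, tolerance=0.1):
    """True if any coordinate leaves its training-time range by more than `tolerance`."""
    return any(x < lo - tolerance or x > hi + tolerance
               for x, lo, hi in zip(activation, lows, highs))

if __name__ == "__main__":
    seen_in_training = [[0.1, 0.5, -0.2], [0.3, 0.4, 0.0], [0.2, 0.6, -0.1]]
    lows, highs = fit_bounds(seen_in_training)
    print(out_of_bounds([0.2, 0.5, -0.1], lows, highs))  # False: inside the training range
    print(out_of_bounds([4.0, 0.5, -0.1], lows, highs))  # True: first coordinate far outside
```

A trained recognizer, as in the question that follows, would replace this hardcoded min/max rule with a learned one; the sketch is only meant to pin down the hardcoded baseline.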
What if, on part 4, you were training something to recognize out-of-bounds activations, instead of trying to hardcode it?  Is that dialing in on goodness?  Or is it just dialing in on survivability or corrigibility or whatnot?  Or maybe even part 3 isn't really \"dialing in on goodness\" because the true distinction between Good and Evil is still external in the programmers and not inside the system?\nI don't see this as an especially useful distinction to draw.  There's a hardcoded/learned distinction that probably does matter in several places.  There's a maybe-useful forest-level distinction between \"actually doing the pivotal thing\" and \"not destroying the world as a side effect\" which breaks down around the trees because the very definition of \"that pivotal thing you want to do\" is to do that thing and not to destroy the world.\nAnd all of this is a class of shallow ideas that I can generate in great quantity.  I now and then consider writing up the ideas like this, just to make clear that I've already thought of way more shallow ideas like this than the net public output of the entire rest of the alignment field, so it's not that my concerns of survivability stem from my having missed any of the obvious shallow ideas like that.\nThe reason I don't spend a lot of time talking about it is not that I haven't thought of it, it's that I've thought of it, explored it for a while, and decided not to write it up because I don't think it can save the world and the infinite well of shallow ideas seems more like a distraction from the level of miracle we would actually need.\n–\n\nAs a starting point: you say that an agent that makes plans but doesn't execute them is also dangerous, because it is the plan itself that lases, and corrigibility is antithetical to lasing. Does this mean you predict that you, or I, with suitably enhanced intelligence and/or reflectivity, would not be capable of producing a plan to help an alien civilization optimize their world, with that plan being corrigible w.r.t the aliens? (This seems like a strange and unlikely position to me, but I don't see how to not make this prediction under what I believe to be your beliefs. Maybe you just bite this bullet.)\n\nI 'could' corrigibly help the Babyeaters in the sense that I have a notion of what it would mean to corrigibly help them, and if I wanted to do that thing for some reason, like an outside super-universal entity offering to pay me a googolplex flops of eudaimonium if I did that one thing, then I could do that thing.  Absent the superuniversal entity bribing me, I wouldn't want to behave corrigibly towards the Babyeaters.  \nThis is not a defect of myself as an individual.  The Superhappies would also be able to understand what it would be like to be corrigible; they wouldn't want to behave corrigibly towards the Babyeaters, because, like myself, they don't want exactly what the Babyeaters want.  In particular, we would rather the universe be other than it is with respect to the Babyeaters eating babies.\n\n\n\n\n[Shah: ]\n\n\n\n\n\n\n \n22. Follow-ups\n \n\n[Shah][0:33]\n\n[…] Absent the superuniversal entity bribing me, I wouldn't want to behave corrigibly towards the Babyeaters. […]\n\nGot it. Yeah I think I just misunderstood a point you were saying previously. 
When Richard asked about systems that simply produce plans rather than execute them, you said something like \"the plan itself is dangerous\", which I now realize meant \"you don't get additional safety from getting to read the plan, the superintelligence would have just chosen a plan that was convincing to you but nonetheless killed everyone / otherwise worked in favor of the superintelligence's goals\", but at the time I interpreted it as \"any reasonable plan that can actually build nanosystems is going to be dangerous, regardless of the source\", which seemed obviously false in the case of a well-motivated system.\n\n[…] This is a weird thing to say in my own ontology. […]\n\nWhen I say \"dialing in on goodness\", I mean a specific class of strategies for getting a superintelligence to do a useful pivotal thing, in which you build it so that the superintelligence is applying its force towards figuring out what it is that you actually want it to do and pursuing that, which among other things would involve taking a pivotal act to reduce x-risk to ~zero.\nI previously had the mistaken impression that you thought this class of strategies was probably doomed because it was incompatible with expected utility theory, which seemed wrong to me. (I don't remember why I had this belief; possibly it was while I was still misunderstanding what you meant by \"corrigibility\" + the claim that corrigibility is anti-natural.)\nI now think that you think it is probably doomed for the same reason that most other technical strategies are probably doomed, which is that there still doesn't seem to be any plausible way of loading in the right target to the superintelligence, even when that target is a process for learning-what-to-optimize, rather than just what-to-optimize.\n\nLinking to a chat log between Eliezer and some anonymous people (and Steve Omohundro) from early September: [https://www.lesswrong.com/posts/CpvyhFy9WvCNsifkY/discussion-with-eliezer-yudkowsky-on-agi-interventions]\nEliezer tells me he thinks it pokes at some of Rohin's questions\n\nI'm surprised that you think this addresses (or even pokes at) my questions. As far as I can tell, most of the questions there are either about social dynamics, which I've been explicitly avoiding, and the \"technical\" questions seem to treat \"AGI\" or \"superintelligence\" as a symbol; there don't seem to be any internal gears underlying that symbol. The closest anyone got to internal gears was mentioning iterated amplification as a way of bootstrapping known-safe things to solving hard problems, and that was very brief.\nI am much more into the question \"how difficult is technical alignment\". It seems like answers to this question need to be in one of two categories: (1) claims about the space of minds that lead to intelligent behavior (probably weighted by simplicity, to account for the fact that we'll get the simple ones first), (2) claims about specific methods of building superintelligences. As far as I can tell the only thing in that doc which is close to an argument of this form is \"superintelligent consequentialists would find ways to manipulate humans\", which seems straightforwardly true (when they are misaligned). 
I suppose one might also count the assertion that \"the speedup step of iterated amplification will introduce errors\" as an argument of this form.\nIt could be that you are trying to convince me of some other beliefs that I wasn't asking about, perhaps in the hopes of conveying some missing mood, but I suspect that it is just that you aren't particularly clear on what my beliefs are / what I'm interested in. (Not unreasonable, given that I've been poking at your models, rather than the other way around.) I could try saying more about that, if you'd like.\n\n\n\n\n[Tallinn][11:39]\nFWIW, a voice from the audience: +1 to going back to sketching concrete scenarios. even though i learned a few things from the abstract discussion of goodness/corrigibility/etc myself (eg, that \"corrigible\" was meant to be defined at the limit of self-improvement till maturity, not just as a label for code that does not resist iterated development), the progress felt more tangible during the \"scaled up muzero\" discussion above.\n\n\n\n\n[Yudkowsky][15:03]\nanybody want to give me a prompt for a concrete question/scenario, ideally a concrete such prompt but I'll take whatever?\n\n\n\n[Soares][15:34]\nNot sure I count, but one I'd enjoy a concrete response to: \"The leading AI lab vaguely thinks it's important that their systems are 'mere predictors', and wind up creating an AGI that is dangerous; how concretely does it wind up being a scary planning optimizer or whatever, that doesn't run through a scary abstract \"waking up\" step\".\n(asking for a friend; @Joe Carlsmith or whoever else finds this scenario unintuitive plz clarify with more detailed requests if interested)\n\n\n\n \n23. November 13 conversation\n \n23.1. GPT-n and goal-oriented aspects of human reasoning\n \n\n[Shah][1:46]\nI'm still interested in:\n\n5. More concreteness on how optimization generalizes but corrigibility doesn't, in the case where the AI was trained by human judgment on weak-safe domains\n\nSpecifically, we can go back to the scaled-up MuZero example. Some (lightly edited) details we had established there:\n\nPretraining: playing all the videogames, predicting all the text and images, solving randomly generated computer puzzles, accomplishing sets of easily-labelable sensorymotor tasks using robots and webcams\nFinetuning: The AI system is being trained to act well on the Internet, and it's shown some tweet / email / message that a user might have seen, and asked to reply to the tweet / email / message. User says whether the replies are good or not (perhaps via comparisons, a la Deep RL from Human Preferences). It would be more varied than that, but would not include \"building nanosystems\".\nThe AI system is not smart enough that exposing humans to text it generates is already a world-wrecking threat if the AI is hostile.\n\nAt that point we moved from concrete to abstract:\n\nAbstract description: train on 'weak-safe' domains where the AI isn't smart enough to do damage, and the humans can label the data pretty well because the AI isn't smart enough to fool them\nAbstract problem: Optimization generalizes and corrigibility fails\n\nI would be interested in a more concrete description here. 
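As a minimal sketch of the comparison-based feedback step in the fine-tuning setup described above ("perhaps via comparisons, a la Deep RL from Human Preferences"): the usual move in that line of work is to fit a scoring model with a cross-entropy loss over pairwise human judgments. The function name and the scores below are purely illustrative, and nothing here speaks to how the learned scorer generalizes.

```python
import math

def preference_loss(score_preferred, score_other):
    """Cross-entropy loss for one human comparison. The scorer should assign
    P(preferred reply beats the other) = sigmoid(score_preferred - score_other),
    in the spirit of the reward-modeling step of "Deep RL from Human Preferences"
    (Christiano et al., 2017)."""
    p_preferred = 1.0 / (1.0 + math.exp(-(score_preferred - score_other)))
    return -math.log(p_preferred)

# Illustrative numbers only: a learned scoring model (not shown) would produce the
# scores, and gradients of this loss would push the preferred reply's score upward.
print(round(preference_loss(score_preferred=1.3, score_other=0.4), 3))  # ~0.341
print(round(preference_loss(score_preferred=0.4, score_other=1.3), 3))  # ~1.241
```

Mechanically, this is roughly what "trained by human judgment on weak-safe domains" cashes out to; the question under discussion is what an optimizer trained against such a signal ends up generalizing to.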
I'm not sure exactly what details I'm looking for — on my ontology the question is something like \"what algorithm is the AI system forced to learn; how does that lead to generalized optimization and failed corrigibility; why weren't there simple safer algorithms that were compatible with the training, or if there were such algorithms why didn't the AI system learn them\". I don't really see how to answer all of that without abstraction, but perhaps you'll have an answer anyway\n(I am hoping to get some concrete detail on \"how did it go from non-hostile to hostile\", though I suppose you might confidently predict that it was already hostile after pretraining, conditional on it being an AGI at all. I can try devising a different concrete scenario if that's a blocker.)\n\n\n\n\n[Yudkowsky][11:09]\n\nI am hoping to get some concrete detail on \"how did it go from non-hostile to hostile\"\n\nMu Zero is intrinsically dangerous for reasons essentially isomorphic to the way that AIXI is intrinsically dangerous: It tries to remove humans from its environment when playing Reality for the same reasons it stomps a Goomba if it learns how to play Super Mario Bros 1, because it has some goal and the Goomba is in the way.  It doesn't need to learn anything more to be that way, except for learning what a Goomba/human is within the current environment. \nThe question is more \"What kind of patches might it learn for a weak environment if optimized by some hill-climbing optimization method and loss function not to stomp Goombas there, and how would those patches fail to generalize to not stomping humans?\"\nAgree or disagree so far?\n\n\n\n[Shah][12:07]\nAgree assuming that it is pursuing a misaligned goal, but I am also asking what misaligned goal it is pursuing (and depending on the answer, maybe also how it came to be pursuing that misaligned goal given the specified training setup).\nIn fact I think \"what misaligned goal is it pursuing\" is probably the more central question for me\n\n\n\n\n[Yudkowsky][12:14]\nwell, obvious abstract guess is: something whose non-maximal \"optimum\" (that is, where the optimization ended up, given about how powerful the optimization was) coincided okayish with the higher regions of the fitness landscape (lower regions of the loss landscape) that could be reached at all, relative to its ancestral environment\nI feel like it would be pretty hard to blindly guess, in advance, at my level of intelligence, without having seen any precedents, what the hell a Human would look like, as a derivation of \"inclusive genetic fitness\"\n\n\n\n[Shah][12:15]\nYeah I agree with that in the abstract, but have had trouble giving compelling-to-me concrete examples\nYeah I also agree with that\n\n\n\n\n[Yudkowsky][12:15]\nI could try to make up some weird false specifics if that helps?\n\n\n\n[Shah][12:16]\nTo be clear I am fine with \"this is a case where we predictably can't have good concrete stories and this does not mean we are safe\" (and indeed argued the same thing in a doc I linked here many messages ago)\nBut weird false specifics could still be interesting\nAlthough let me think if it is actually valuable\nProbably it is not going to change my mind very much on alignment difficulty, if it is \"weird false specifics\", so maybe this isn't the most productive line of discussion. 
I'd be \"selfishly\" interested in that \"weird false specifics\" seems good for me to generate novel thoughts about these sorts of scenarios, but that seems like a bad use of this Discord\nI think given the premises that (1) superintelligence is coming soon, (2) it pursues a misaligned goal by default, and (3) we currently have no technical way of preventing this and no realistic-seeming avenues for generating such methods, I am very pessimistic. I think (2) and (3) are the parts that I don't believe and am interested in digging into, but perhaps \"concrete stories\" doesn't really work for this.\n\n\n\n\n[Yudkowsky][12:26]\nwith any luck – though I'm not sure I actually expect that much luck – this would be something Redwood Research could tell us about, if they can learn a nonviolence predicate over GPT-3 outputs and then manage to successfully mutate the distribution enough that we can get to see what was actually inside the predicate instead of \"nonviolence\"\n\n\n\n\n[Shah: ]\n\n\n\n\nor, like, 10% of what was actually inside it\nor enough that people have some specifics to work with when it comes to understanding how gradient descent learning a function over outcomes from human feedback relative to a distribution, doesn't just learn the actual function the human is using to generate the feedback (though, if this were learned exactly, it would still be fatal given superintelligence)\n\n\n\n[Shah][12:33]\nIn this framing I do buy that you don't learn exactly the function that generates the feedback — I have ~5 contrived specific examples where this is the case (i.e. you learn something that wasn't what the feedback function would have rewarded in a different distribution)\n(I'm now thinking about what I actually want to say about this framing)\nActually, maybe I do think you might end up learning the function that generates the feedback. Not literally exactly, if for no other reason than rounding errors, but well enough that the inaccuracies don't matter much. The AGI presumably already knows and understands the concepts we use based on its pretraining, is it really so shocking if gradient descent hooks up those concepts in the right way? (GPT-3 on the other hand doesn't already know and understand the relevant concepts, so I wouldn't predict this of GPT-3.) I do feel though like this isn't really getting at my reason for (relative) optimism, and that reason is much more like \"I don't really buy that AGI must be very coherent in a way that would prevent corrigibility from working\" (which we could discuss if desired)\nOn the comment that learning the exact feedback function is still fatal — I am unclear on why you are so pessimistic on having \"human + AI\" supervise \"AI\", in order to have the supervisor be smarter than the thing being supervised. (I think) I understand the pessimism that the learned function won't generalize correctly, but if you imagine that magically working, I'm not clear what additional reason prevents the \"human + AI\" supervising \"AI\" setup.\n\nI can see how you die if the AI ever becomes misaligned, i.e. there isn't a way to fix mistakes, but I don't see how you get the misaligned AI in the first place.\nI could also see things like \"Just like a student can get away with plagiarism even when the teacher is smarter than the student, the AI knows more about its cognition than the human + AI system, and so will likely be incentivized to do bad things that it knows are bad but the human + AI system doesn't know is bad\". 
But that sort of thing seems solvable with future research, e.g. debate, interpretability, red teaming all seem like feasible approaches.\n\n\n\n\n\n[Yudkowsky][13:06]\nwhat's a \"human + AI\"? can you give me a more concrete version of that scenario, either one where you expect it to work, or where you yourself have labeled the first point you expect it to fail and you want to know whether I see an earlier failure than that?\n\n\n\n[Shah][13:09]\nOne concrete training algorithm would be debate, ideally with mechanisms that allow the AI systems to \"look into each other's thoughts\" and make credible statements about them, but we can skip that for now as it isn't very concrete\nWould you like a training domain and data as well?\nI don't like the fact that a smart AI system in this position could notice that it is playing against itself and decide not to participate in a zero-sum game, but I am not sure if that worry actually makes sense or not\n(Debate can be thought of as simultaneously \"human + first AI evaluate second AI\" and \"human + second AI evaluate first AI\")\n\n\n\n\n[Yudkowsky][13:12]\nfurther concreteness, please! what pivotal act is it training for? what are the debate contents about?\n\n\n\n[Shah][13:16]\nYou start with \"easy\" debates like mathematical theorem proving or fact-based questions, and ramp up until eventually the questions are roughly \"what is the next thing to do in order to execute a pivotal act\"\nIntermediate questions might be things like \"is it a good idea to have a minimum wage\"\n\n\n\n\n[Yudkowsky][13:17]\nso, like, \"email ATTTTGAGCTTGCC… to the following address, mix the proteins you receive by FedEx in a water-saline solution at 2 degrees Celsius…\" for the final stage?\n\n\n\n[Shah][13:17]\nYup, that could be it\nHumans are judging debates based on reasoning though, not just outcomes-after-executing-the-plan\n\n\n\n\n[Yudkowsky][13:19]\nokay.  let's suppose you manage to prevent both AGIs from using logical decision theory to coordinate with each other.  both AIs tell their humans that the other AI's plans are murderous.  now what?\n\n\n\n[Shah][13:19]\nSo assuming perfect generalization there should be some large implicit debate tree that justifies the plan in human-understandable form\n\n\n\n\n[Yudkowsky][13:20]\nyah, I flatly disbelieve that entire development scheme, so we should maybe back up.\npeople fiddled around with GPT-4 derivatives and never did get them to engage in lines of printed reasoning that would design interesting new stuff.  now what?\nLiving Zero (a more architecturally complicated successor of Mu Zero) is getting better at designing complicated things over on its side while that's going on, whatever it is\n\n\n\n[Shah][13:23]\nOkay, so the worry is that this just won't scale, not that (assuming perfect generalization) it is unsafe? Or perhaps you also think it is unsafe but it's hard to engage with because you don't believe it will scale?\nAnd the issue is that relying on reasoning confines you to a space of possible thoughts that doesn't include the kinds of thoughts required to develop new stuff (e.g. intuition)?\n\n\n\n\n[Yudkowsky][13:25]\nmostly I have found these alleged strategies to be too permanently abstract, never concretized, to count as admissible hypotheses.  if you ask me to concretize them myself, I think that unelaborated giant transformer stacks trained on massive online text corpuses fail to learn smart-human-level engineering reasoning before the world ends.  
If that were not true, I would expect Paul-style schemes to blow up on the distillation step, but first failures first.\n\n\n\n[Shah][13:26]\nWhat additional concrete detail do you want?\nIt feels like I specified something that we could code up a stupidly inefficient version of now\n\n\n\n\n[Yudkowsky][13:27]\nGreat.  Describe the stupidly inefficient version?\n\n\n\n[Shah][13:33]\nIn terms of what actually happens: Each episode, there is an initial question specified by the human. Agent A and agent B, which are copies of the same neural net, simultaneously produce statements (\"answers\"). They then have a conversation. At the end the human judge decides which answer is better, and rewards the appropriate agent. The agents are updated using some RL algorithm.\nI can say stuff about why we might hope this works, or about tricks you have to play in order to get learning to happen at all, or other things\n\n\n\n\n[Yudkowsky][13:35]\nAre the agents also playing Starcraft or have they spent their whole lives inside the world of text?\n\n\n\n[Shah][13:35]\nFor the stupidly inefficient version they could have spent their whole lives inside text\n\n\n\n\n[Yudkowsky][13:37]\nOkay.  I don't think the pure-text versions of GPT-5 are being very good at designing nanosystems while Living Zero is ending the world.\n\n\n\n[Shah][13:37]\nIn the stupidly inefficient version human feedback has to teach the agents facts about the real world\n\n\n\n\n[Yudkowsky][13:37]\n(It's called \"Living Zero\" because it does lifelong learning, in the backstory I've been trying to separately sketch out in a draft.)\n\n\n\n[Shah][13:38]\nOh I definitely agree this is not competitive\nSo when you say this is too abstract, you mean that there isn't a story for how they incorporate e.g. 
physical real-world knowledge?\n\n\n\n\n[Yudkowsky][13:39]\nno, I mean that when I talk to Paul about this, I can't get Paul to say anything as concrete as the stuff you've already said\nthe reason why I don't expect the GPT-5s to be competitive with Living Zero is that gradient descent on feedforward transformer layers, in order how to learn science by competing to generate text that humans like, would have to pick up on some very deep latent patterns generating that text, and I don't think there's an incremental pathway there for gradient descent to follow – if gradient descent even follows incremental pathways as opposed to finding lottery tickets, but that's a whole separate open question of artificial neuroscience.\nin other words, humans play around with legos, and hominids play around with chipping flint handaxes, and mammals play around with spatial reasoning, and that's part of the incremental pathway to developing deep patterns for causal investigation and engineering, which then get projected into human text and picked up by humans reading text\nit's just straightforwardly not clear to me that GPT-5 pretrained on human text corpuses, and then further posttrained by RL on human judgment of text outputs, ever runs across the deep patterns\nwhere relatively small architectural changes might make the system no longer just a giant stack of transformers, even if that resulting system is named \"GPT-5\", and in this case, bets might be off, but also in this case, things will go wrong with it that go wrong with Living Zero, because it's now learning the more powerful and dangerous kind of work\n\n\n\n[Shah][13:45]\nThat does seem like a disagreement, in that I think this process does eventually reach the \"deep patterns\", but I do agree it is unlikely to be competitive\n\n\n\n\n[Yudkowsky][13:45]\nI mean, if you take a feedforward stack of transformer layers the size of a galaxy and train it via gradient descent using all the available energy in the reachable universe, it might find something, sure\nthough this is by no means certain to be the case\n\n\n\n[Shah][13:50]\nIt would be quite surprising to me if it took that much. It would be especially surprising to me if we couldn't figure out some alternative reasonably-simple training scheme like \"imitate a human doing good reasoning\" that still remained entirely in text that could reach the \"deep patterns\". (This is now no longer a discussion about whether the training scheme is aligned, not sure if we should continue it.)\nI realize that this might be hard to do, but if you imagine that GPT-5 + human feedback finetuning does run across the deep patterns and could in theory do the right stuff, and also generalization magically works, what's the next failure?\n\n\n\n\n[Yudkowsky][13:56]\nwhat sort of deep thing does a hill-climber run across in the layers, such that the deep thing is the most predictive thing it found for human text about science?\nif you don't visualize this deep thing in any detail, then it can in one moment be powerful, and in another moment be safe.  it can have all the properties that you want simultaneously.  who's to say otherwise? 
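(For concreteness, here is a minimal toy rendering of the "stupidly inefficient" debate setup Shah describes a few messages above: two copies of one net answer a question, converse, a human judges, and the winner is rewarded by some RL update. Every name and function below is an editorial placeholder invented for this sketch, not code from the conversation or from any real debate implementation.)

import random

# Toy stand-ins; a real version would use copies of one neural net and a
# genuine RL algorithm (e.g. policy gradient), as Shah's description assumes.

def agent_policy(params, question, transcript):
    # Placeholder: sample a statement given the question and conversation so far.
    return f"statement about {question!r} after {len(transcript)} turns"

def human_judge(question, answer_a, answer_b, transcript):
    # Placeholder for the human judge deciding which initial answer was better.
    return random.choice(["A", "B"])

def rl_update(params, transcript, reward):
    # Placeholder for "the agents are updated using some RL algorithm".
    return params

def debate_episode(params, question, n_turns=4):
    # Agents A and B are copies of the same net (shared params) and answer simultaneously.
    answer_a = agent_policy(params, question, [])
    answer_b = agent_policy(params, question, [])
    transcript = [answer_a, answer_b]
    for _ in range(n_turns):                      # they then have a conversation
        transcript.append(agent_policy(params, question, transcript))
    winner = human_judge(question, answer_a, answer_b, transcript)
    # Zero-sum reward: the copy whose answer the judge preferred gets +1.
    return rl_update(params, transcript, reward=+1 if winner == "A" else -1)

params = {}  # weights of the single underlying net
for question in ["easy factual question", "is a minimum wage a good idea"]:
    params = debate_episode(params, question)

(Only the episode structure is pinned down by the description; the curriculum ramp from easy factual debates toward pivotal-act questions, mentioned earlier, would live in the choice of questions.)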
the mysterious deep thing has no form within your mind.\nif one were to name specifically \"well, it ran across a little superintelligence with long-term goals that it realized it could achieve by predicting well in all the cases that an outer gradient descent loop would probably be updating on\", that sure doesn't end well for you.\nthis perhaps is not the first thing that gradient descent runs across.  it wasn't the first thing that natural selection ran across to build things that ran the savvanah and made more of themselves.  but what deep pattern that is not pleasantly and unfrighteningly formless would gradient descent run across instead?\n\n\n\n[Shah][14:00]\n(Tbc by \"human feedback finetuning\" I mean debate, and I suspect that \"generalization magically works\" will be meant to rule out the thing that you say next, but seems worth checking so let me write an answer)\n\nthe deep thing is the most predictive thing it found for human text about science?\n\nWait, the most predictive thing? I was imagining it as just a thing that is present in addition to all the other things. Like, I don't think I've learned a \"deep thing\" that is most useful for riding a bike. Probably I'm just misunderstanding what you mean here.\nI don't think I can give a good answer here, but to give some answer, it has a belief that there is a universe \"out there\", that lots but not all of the text it reads is making claims about (some aspect of) the universe, those claims can be true or false, there are some claims that are known to be true, there are some ways to take assumed-true claims and generate new assumed-true claims, which includes claims about optimal actions for goals, as well as claims about how to build stuff, or what the effect of a specified machine is\n\n\n\n\n[Yudkowsky][14:10]\nhell of a lot of stuff for gradient descent to run across in a stack of transformer layers.  clearly the lottery-ticket hypothesis must have been very incorrect, and there was an incremental trail of successively more complicated gears that got trained into the system.\nbtw by \"claims\" are you meaning to make the jump to English claims? I was reading them as giant inscrutable vectors encoding meaningful propositions, but maybe you meant something else there.\n\n\n\n[Shah][14:11]\nIn fact I am skeptical of some strong versions of the lottery ticket hypothesis, though it's been a while since I read the paper and I don't remember exactly what the original hypothesis was\nGiant inscrutable vectors encoding meaningful propositions\n\n\n\n\n[Yudkowsky][14:13]\noh, I'm not particularly confident of the lottery-ticket hypothesis either, though I sure do find it grimly amusing that a species which hasn't already figured that out one way or another thinks it's going to have deep transparency into neural nets all wrapped up in time to survive.  but, separate issue.\n\"How does gradient descent even work?\" \"Lol nobody knows, it just does.\"\nbut, separate issue\n\n\n\n[Shah][14:16]\nHow does strong lottery ticket hypothesis explain GPT-3? 
Seems like that should already be enough to determine that there's an incremental trail of successively more complicated gears\n\n\n\n\n[Yudkowsky][14:18]\ncould just be that in 175B parameters, combinatorially combined through possible execution pathways, there is some stuff that was pretty close to doing all the stuff that GPT-3 ended up doing.\nanyways, for a human to come up with human text about science, the human has to brood and think for a bit about different possible hypotheses that could account for the data, notice places where those hypotheses break down, tweak the hypotheses in their mind to make the errors go away; they would engineer an internal mental construct towards the engineering goal of making good predictions.  if you're looking at orbital mechanics and haven't invented calculus yet, you invent calculus as a persistent mental tool that you can use to craft those internal mental constructs.\ndoes the formless deep pattern of GPT-5 accomplish the same ends, by some mysterious means that is, formless, able to produce the same result, but not by any detailed means where if you visualized them you would be able to see how it was unsafe?\n\n\n\n[Shah][14:24]\nI expect that probably we will figure out some way to have adaptive computation time be a thing (it's been investigated for years now, but afaik hasn't worked very well), which will allow for this sort of thing to happen\nIn the stupidly inefficient version, you have a really really giant and deep neural net that does all of that in successive layers of the neural net. (And when it doesn't need to do that, those layers are noops.)\n\n\n\n\n[Yudkowsky][14:26][14:32]\nokay, so my question is, is there a little goal-oriented mind inside there that solves science problems the same way humans solve them, by engineering mental constructs that serve a goal of prediction, including backchaining for prediction goals and forward chaining from alternative hypotheses / internal tweaked states of the mental construct? 
or is there something else which solves the same problem, not how humans do it, without any internal goal orientation?\nPeople who would not in the first place realize that humans solve prediction problems by internally engineering internal mental constructs in a goal-oriented way, would of course imagine themselves able to imagine a formless spirit which produces \"predictions\" without being \"goal-oriented\" because they lack an understanding of internal machinery and so can combine whatever surface properties and English words they want to yield a beautiful optimism\nOr perhaps there is indeed some way to produce \"predictions\" without being \"goal-oriented\", which gradient descent on a great stack of transformer layers would surely run across; but you will pardon my grave lack of confidence that someone has in fact seen so much further than myself, when they don't seem to have appreciated in advance of my own questions why somebody who understood something about human internals would be skeptical of this.\nIf they're sort of visibly trying to come up with it on the spot after I ask the question, that's not such a great sign either.\n\n\n\n\nThis is not aimed particularly at you, but I hope the reader may understand something of why Eliezer Yudkowsky goes about sounding so gloomy all the time about other people's prospects for noticing what will kill them, by themselves, without Eliezer constantly hovering over their shoulder every minute prompting them with almost all of the answer.\n\n\n\n[Shah][14:31]\nJust to check my understanding: if we're talking about, say, how humans might go about understanding neural nets, there's a goal of \"have a theory that can retrodict existing observations and make new predictions\", backchaining might say \"come up with hypotheses that would explain double descent\", forward chaining might say \"look into bias and variance measurements\"?\nIf so, yes, I think the AGI / GPT-5-that-is-an-AGI is doing something similar\n\n\n\n\n[Yudkowsky][14:33]\nyour understanding sounds okay, though it might make more sense to talk about a domain that human beings understand better than artificial neuroscience, for purposes of illustrating how scientific thinking works, since human beings haven't actually gotten very far with artificial neuroscience.\n\n\n\n[Shah][14:33]\nFair point re using a different domain\nTo be clear I do not in fact think that GPT-N is safe because it is trained with supervised learning and I am confused at the combination of views that GPT-N will be AGI and GPT-N will be safe because it's just doing predictions\nMaybe there is marginal additional safety but you clearly can't say it is \"definitely safe\" without some additional knowledge that I have not seen so far\nGoing back to the original question, of what the next failure mode of debate would be assuming magical generalization, I think it's just not one that makes sense to ask on your worldview / ontology; \"magical generalization\" is the equivalent of \"assume that the goal-oriented mind somehow doesn't do dangerous optimization towards its goal, yet nonetheless produces things that can only be produced by dangerous optimization towards a goal\", and so it is assuming the entire problem away\n\n\n\n\n[Yudkowsky][14:41]\nwell YES\nfrom my perspective the whole field of mental endeavor as practiced by alignment optimists consists of ancient alchemists wondering if they can get collections of surface properties, like a metal as shiny as gold, as hard as steel, and as self-healing as 
flesh, where optimism about such wonderfully combined properties can be infinite as long as you stay ignorant of underlying structures that produce some properties but not others\nand, like, maybe you can get something as hard as steel, as shiny as gold, and resilient or self-healing in various ways, but you sure don't get it by ignorance of the internals\nand not for a while\nso if you need the magic sword in 2 years or the world ends, you're kinda dead\n\n\n\n[Shah][14:46]\nPotentially dumb question: when humans do science, why don't they then try to take over the world to do the best possible science? (If humans are doing dangerous goal-directed optimization when doing science, why doesn't that lead to catastrophe?)\nYou could of course say that they just aren't smart enough to do so, but it sure feels like (most) humans wouldn't want to do the best possible science even if they were smarter\nI think this is similar to a question I asked before about plans being dangerous independent of their source, and the answer was that the source was misaligned\nBut in the description above you didn't say anything about the thing-doing-science being misaligned, so I am once again confused\n\n\n\n\n[Yudkowsky][14:48]\nboy, so many dumb answers to this dumb question:\n\neven relatively \"smart\" humans are not very smart compared to other humans, such that they don't have a \"take over the world\" option available.\nmost humans who use Science were not smart enough to invent the underlying concept of Science for themselves from scratch; and Francis Bacon, who did, sure did want to take over the world with it.\ngroups of humans with relatively more Engineering sure did take over large parts of the world relative to groups that had relatively less.\nEliezer Yudkowsky clearly demonstrates that when you are smart enough you start trying to use Science and Engineering to take over your whole future lightcone, the other humans you're thinking of just aren't that smart, and, if they were, would inevitably converge towards Eliezer Yudkowsky, who is really a very typical example of a person that smart, even if he looks odd to you because you're not seeing the population of other dath ilani\n\nI am genuinely not sure how to come up with a less dumb answer and it may require a more precise reformulation of the question\n\n\n\n[Shah][14:50]\nBut like, in Eliezer's case, there is a different goal that is motivating him to use Science and Engineering for this purpose\nIt is not the prediction-goal that he instantiated in his mind as part of the method of doing Science\n\n\n\n\n[Yudkowsky][14:52]\nsure, and the mysterious formless thing within GPT-5 with \"adaptive computation time\" that broods and thinks, may be pursuing its prediction-subgoal for the sake of other goals, or be pursuing different subgoals of prediction separately without ever once having a goal of prediction, or have 66,666 different shards of desire across different kinds of predictive subproblems that were entrained by gradient descent which does more brute memorization and less Occam bias than natural selection\noh, are you asking why humans, when they do goal-oriented Science for the sake of their other goals, don't (universally always) stomp on their other goals while pursuing the Science part?\n\n\n\n[Shah][14:54]\nWell, that might also be interesting to hear the answer to — I don't know how I'd answer that through an Eliezer-lens — though it wasn't exactly what I was asking\n\n\n\n\n[Yudkowsky][14:56]\nbasically the answer is \"well, first 
of all, they do stomp on themselves to the extent that they're stupid; and to the extent that they're smart, pursuing X on the pathway to Y has a 'natural' structure for not stomping on Y which is simple and generalizes and obeys all the coherence theorems and can incorporate arbitrarily fine wiggles via epistemic modeling of those fine wiggles because those fine wiggles have a very compact encoding relative to the epistemic model, aka, predicting which forms of X lead to Y; and to the extent that group structures of humans can't do that simple thing coherently because of their cognitive and motivational partitioning, the group structures of humans are back to not being able to coherently pursue the final goal again\"\n\n\n\n[Shah][14:58]\n(Going back to what I meant to ask) It seems to me like humans demonstrate that you can have a prediction goal without that being your final/terminal goal. So it seems like with AI you similarly need to talk about the final/terminal goal. But then we talked about GPT and debate and so on for a while, and then you explained how GPTs would have deep patterns that do dangerous optimization, where the deep patterns involved instantiating a prediction goal. Notably, you didn't say anything about a final/terminal goal. Do you see why I am confused?\n\n\n\n\n[Yudkowsky][15:00]\nso you can do prediction because it's on the way to some totally other final goal – the way that any tiny superintelligence or superhumanly-coherent agent, if an optimization method somehow managed to run across that early on, with an arbitrary goal, which also understood the larger picture, would make good predictions while it thought the outer loop was probably doing gradient descent updates, and bide its time to produce rather different \"predictions\" once it suspected the results were not going to be checked given what the inputs had looked like.\nyou can imagine a thing that does prediction the same way that humans optimize inclusive genetic fitness, by pursuing dozens of little goals that tend to cohere to good prediction in the ancestral environment\nboth of these could happen in order; you could get a thing that pursued 66 severed shards of prediction as a small mind, and which, when made larger, cohered into a utility function around the 66 severed shards that sum to something which is not good prediction and which you could pursue by transforming the universe, and then strategically made good predictions while it expected the results to go on being checked\n\n\n\n[Shah][15:02]\nOH you mean that the outer objective is prediction\n\n\n\n\n[Yudkowsky][15:02]\n?\n\n\n\n[Shah][15:03]\nI have for quite a while thought that you meant that Science involves internally setting a subgoal of \"predict a confusing part of reality\"\n\n\n\n\n[Yudkowsky][15:03]\nit… does?\nI mean, that is true.\n\n\n\n[Shah][15:04]\nOkay wait. There are two things. One is that GPT-3 is trained with a loss function that one might call a prediction objective for human text. Two is that Science involves looking at a part of reality and figuring out how to predict it. These two things are totally different. 
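(For readers tracking the two senses of "prediction" being distinguished here: the first is just the ordinary autoregressive training objective over human text. A minimal illustrative sketch of that outer loss, assuming a PyTorch-style model; this is not the actual GPT-3 training code.)

import torch
import torch.nn.functional as F

def outer_prediction_loss(model, tokens):
    # Sense (1): an outer loss that scores text predictions, i.e. predict each
    # next token of human text.
    logits = model(tokens[:, :-1])              # shape (batch, seq_len - 1, vocab)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        tokens[:, 1:].reshape(-1),              # the targets are simply the next tokens
    )

# Sense (2), internally looking at some piece of reality and building a model
# that predicts it, is a property of whatever computation the trained network
# ends up running; it does not appear anywhere in this training code.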
I am now unsure which one(s) you were talking about in the conversation above\n\n\n\n\n[Yudkowsky][15:06]\nwhat I'm saying is that for GPT-5 to successfully do AGI-complete prediction of human text about Science, gradient descent must identify some formless thing that does Science internally in order to optimize the outer loss function for predicting human text about Science\njust like, if it learns to predict human text about multiplication, it must have learned something internally that does multiplication\n(afk, lunch/dinner)\n\n\n\n[Shah][15:07]\nYeah, so you meant the first thing, and I misinterpreted as the second thing\n(I will head to bed in this case — I was meaning to do that soon anyway — but I'll first summarize.)\n\n\n\n\n[Yudkowsky][15:08]\nI am concerned that there is still a misinterpretation going on, because the case I am describing is both things at once\nthere is an outer loss function that scores text predictions, and an internal process which for purposes of predicting what Science would say must actually somehow do the work of Science\n\n\n\n[Shah][15:09]\nOkay let me look back at the conversation\n\nis there a little goal-oriented mind inside there that solves science problems the same way humans solve them, by engineering mental constructs that serve a goal of prediction, including backchaining for prediction goals and forward chaining from alternative hypotheses / internal tweaked states of the mental construct?\n\nHere, is the word \"prediction\" meant to refer to the outer objective and/or predicting what English sentences about Science one might say, or is it referring to a subpart of the Process Of Science in which one aims to predict some aspect of reality (which is typically not in the form of English sentences)?\n\n\n\n\n[Yudkowsky][15:20]\nit's here referring to the inner Science problem\n\n\n\n[Shah][15:21]\nOkay I think my original understanding was correct in that case\n\nfrom my perspective the whole field of mental endeavor as practiced by alignment optimists consists of ancient alchemists wondering if they can get collections of surface properties, like a metal as shiny as gold, as hard as steel, and as self-healing as flesh, where optimism about such wonderfully combined properties can be infinite as long as you stay ignorant of underlying structures that produce some properties but not others\n\nI actually think something like this might be a crux for me, though obviously I wouldn't put it the way you're putting it. More like \"are arguments about internal mechanisms more or less trustworthy than arguments about what you're selecting for\" (limiting to arguments we actually have access to, of course in the limit of perfect knowledge internal mechanisms beats selection). 
But that is I think a discussion for another day.\n\n\n\n\n[Yudkowsky][15:29]\nI think the critical insight – though it has a format that basically nobody except me ever visibly invokes in those terms, and I worry maybe it can only be taught by a kind of life experience that's very hard to obtain – is the realization that any consistent reasonable story about underlying mechanisms will give you less optimistic forecasts than the ones you get by freely combining surface desiderata\n\n\n\n[Shah] [1:38]\n(For the reader, I don't think that \"arguments about what you're selecting for\" is the same thing as \"freely combining surface desiderata\", though I do expect they look approximately the same to Eliezer)\nYeah, I think I do not in fact understand why that is true for any consistent reasonable story.\nFrom my perspective, when I posit a hypothetical, you demonstrate that there is an underlying mechanism that produces strong capabilities that generalize combined with real world knowledge. I agree that a powerful AI system that we build capable of executing a pivotal act will have strong capabilities that generalize and real world knowledge. I am happy to assume for the purposes of this discussion that it involves backchaining from a target and forward chaining from things that you currently know or have. I agree that such capabilities could be used to cause an existential catastrophe (at least in a unipolar world, multipolar case is more complicated, but we can stick with unipolar for now). None of my arguments so far are meant to factor through the route of \"make it so that the AGI can't cause an existential catastrophe even if it wants to\".\nThe main question according to me is why those capabilities are aimed towards achievement of a misaligned goal.\nIt feels like when I try to ask why we have misaligned goals, I often get answers that are of the form \"look at the deep patterns underlying the strong capabilities that generalize, obviously given a misaligned goal they would generate the plan of killing the humans who are an obstacle towards achieving that goal\". This of course doesn't work since it's a circular argument.\nI can generate lots of arguments for why it would be aimed towards achievement of a misaligned goal, such as (1) only a tiny fraction of goals are aligned; the rest are misaligned, (2) the feedback we provide is unlikely to be the right goal and even small errors are fatal, (3) lots of misaligned goals are compatible with the feedback we provide even if the feedback is good, since the AGI might behave well until it can execute a treacherous turn, (4) the one example of strategically aware intelligence (i.e. humans) is misaligned relative to its creator. (I'm not saying I agree with these arguments, but I do understand them.)\nAre these the arguments that make you think that you get misaligned goals by default? Or is it something about \"deep patterns\" that isn't captured by \"strong capabilities that generalize, real-world knowledge, ability to cause an existential catastrophe if it wants to\"?\n\n\n\n \n24. Follow-ups\n \n\n[Yudkowsky][15:59]\nSo I realize it's been a bit, but looking over this last conversation, I feel unhappy about the MIRI conversations sequence stopping exactly here, with an unanswered major question, after I ran out of energy last time.  I shall attempt to answer it, at least at all.  
CC @rohin @RobBensinger .\n\n\n\n\n[Shah: ]\n[Ngo: ]\n[Bensinger: ]\n\n\n\n\nOne basic large class of reasons has the form, \"Outer optimization on a precise loss function doesn't get you inner consequentialism explicitly targeting that outer objective, just inner consequentialism targeting objectives which empirically happen to align with the outer objective given that environment and those capability levels; and at some point sufficiently powerful inner consequentialism starts to generalize far out-of-distribution, and, when it does, the consequentialist part generalizes much further than the empirical alignment with the outer objective function.\"\nThis, I hope, is by now recognizable to individuals of interest as an overly abstract description of what happened with humans, who one day started building Moon rockets without seeming to care very much about calculating and maximizing their personal inclusive genetic fitness while doing that.  Their capabilities generalized much further out of the ancestral training distribution, than the empirical alignment of those capabilities on inclusive genetic fitness in the ancestral training distribution.\nOne basic large class of reasons has the form, \"Because the real objective is something that cannot be precisely and accurately shown to the AGI and the differences are systematic and important.\"\nSuppose you have a bunch of humans classifying videos of real events or text descriptions of real events or hypothetical fictional scenarios in text, as desirable or undesirable, and assigning them numerical ratings.  Unless these humans are perfectly free of, among other things, all the standard and well-known cognitive biases about eg differently treating losses and gains, the value of this sensory signal is not \"The value of our real CEV rating what is Good or Bad and how much\" nor even \"The value of a utility function we've got right now, run over the real events behind these videos\".  Instead it is in a systematic and real and visible way, \"The result of running an error-prone human brain over this data to produce a rating on it.\"\nThis is not a mistake by the AGI, it's not something the AGI can narrow down by running more experiments, the correct answer as defined is what contains the alignment difficulty.  If the AGI, or for that matter the outer optimization loop, correctly generalizes the function that is producing the human feedback, it will include the systematic sources of error in that feedback.  
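(A toy numerical illustration of the claim just made, with numbers and functions invented solely for this sketch: if the feedback comes from an error-prone rater, then a learner that captures the feedback-generating function as faithfully as possible captures the rater's systematic error along with it, and optimizing the learned objective favors exactly the outcomes that exploit that error.)

def true_value(outcome):
    # What we actually care about (not observable to the training process).
    return outcome["real_benefit"]

def human_rating(outcome):
    # What the feedback signal is actually made of: an error-prone rater who
    # systematically over-rewards outcomes that merely look good to them.
    return outcome["real_benefit"] + 3.0 * outcome["looks_good_to_rater"]

outcomes = [
    {"name": "honest modest success",           "real_benefit": 1.0,  "looks_good_to_rater": 0.2},
    {"name": "manipulate the rater's judgment", "real_benefit": -1.0, "looks_good_to_rater": 1.0},
]

# Suppose learning goes as well as it possibly can: the learned objective is
# exactly the function generating the feedback, error term included.
learned_objective = human_rating

chosen = max(outcomes, key=learned_objective)
assert chosen["name"] == "manipulate the rater's judgment"
assert true_value(chosen) < 0   # faithfully learning the feedback signal is what selects this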
If the AGI essays an experimental test of a manipulation that an ideal observer would see as \"intended to produce error in humans\" then the experimental result will be \"Ah yes, this is correctly part of the objective function, the objective function I'm supposed to maximize sure does have this in it according to the sensory data I got about this objective.\"\nPeople have fantasized about having the AGI learn something other than the true and accurate function producing its objective-describing data, as its actual objective, from the objective-describing data that it gets; I, of course, was the first person to imagine this and say it should be done, back in 2001 or so; unlike a lot of latecomers to this situation, I am skeptical of my own proposals and I know very well that I did not in fact come up with any reliable-looking proposal for learning 'true' human values off systematically erroneous human feedback.\nDifficulties here are fatal, because a true and accurate learning of what is producing the objective-describing signal, will correctly imply that higher values of this signal obtain as the humans are manipulated or as they are bypassed with physical interrupts for control of the feedback signal.  In other words, even if you could do a bunch of training on an outer objective, and get inner optimization perfectly targeted on that, the fact that it was perfectly targeted would kill you.\n\n\n\n[Bensinger][23:15]  (Feb. 27, 2022 follow-up comment)\nThis is the last log in the Late 2021 MIRI Conversations. We'll be concluding the sequence with a public Ask Me Anything (AMA) this Wednesday; you can start posting questions there now.\nMIRI has found the Discord format useful, and we plan to continue using it going into 2022. This includes follow-up conversations between Eliezer and Rohin, and a forthcoming conversation between Eliezer and Scott Alexander of Astral Codex Ten.\nSome concluding thoughts from Richard Ngo:\n\n\n\n\n[Ngo][6:20]  (Nov. 12 follow-up comment)\nMany thanks to Eliezer and Nate for their courteous and constructive discussion and moderation, and to Rob for putting the transcripts together.\nThis debate updated me about 15% of the way towards Eliezer's position, with Eliezer's arguments about the difficulties of coordinating to ensure alignment responsible for most of that shift. While I don't find Eliezer's core intuitions about intelligence too implausible, they don't seem compelling enough to do as much work as Eliezer argues they do. As in the Foom debate, I think that our object-level discussions were constrained by our different underlying attitudes towards high-level abstractions, which are hard to pin down (let alone resolve).\nGiven this, I think that the most productive mode of intellectual engagement with Eliezer's worldview going forward is probably not to continue debating it (since that would likely hit those same underlying disagreements), but rather to try to inhabit it deeply enough to rederive his conclusions and find new explanations of them which then lead to clearer object-level cruxes. 
I hope that these transcripts shed sufficient light for some readers to be able to do so.\n\n\n\n \n\nThe post Shah and Yudkowsky on alignment failures appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Shah and Yudkowsky on alignment failures", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=1", "id": "814f981a47e1fbd87c8ec78c60a2997b"} {"text": "Ngo and Yudkowsky on scientific reasoning and pivotal acts\n\nThis is a transcript of a conversation between Richard Ngo and Eliezer Yudkowsky, facilitated by Nate Soares (and with some comments from Carl Shulman). This transcript continues the Late 2021 MIRI Conversations sequence, following Ngo's view on alignment difficulty.\n \nColor key:\n\n\n\n\n Chat by Richard and Eliezer \n Other chat \n\n\n\n\n \n\n \n14. October 4 conversation\n \n14.1. Predictable updates, threshold functions, and the human cognitive range\n \n\n[Ngo][15:05]\nTwo questions which I'd like to ask Eliezer:\n1. How strongly does he think that the \"shallow pattern-memorisation\" abilities of GPT-3 are evidence for Paul's view over his view (if at all)\n2. How does he suggest we proceed, given that he thinks directly explaining his model of the chimp-human difference would be the wrong move?\n\n\n\n\n[Yudkowsky][15:07]\n1 – I'd say that it's some evidence for the Dario viewpoint which seems close to the Paul viewpoint.  I say it's some evidence for the Dario viewpoint because Dario seems to be the person who made something like an advance prediction about it.  It's not enough to make me believe that you can straightforwardly extend the GPT architecture to 3e14 parameters and train it on 1e13 samples and get human-equivalent performance.\n\n\n\n[Ngo][15:09]\nDid you make any advance predictions, around the 2008-2015 period, of what capabilities we'd have before AGI?\n\n\n\n\n[Yudkowsky][15:10]\nnot especially that come to mind?  on my model of the future this is not particularly something I am supposed to know unless there is a rare flash of predictability.\n\n\n\n\n[Ngo][15:11]\n\n1 – I'd say that it's some evidence for the Dario viewpoint which seems close to the Paul viewpoint. I say it's some evidence for the Dario viewpoint because Dario seems to be the person who made something like an advance prediction about it. It's not enough to make me believe that you can straightforwardly extend the GPT architecture to 3e14 parameters and train it on 1e13 samples and get human-equivalent performance.\n\nFor the record I remember Paul being optimistic about language when I visited OpenAI in summer 2018. But I don't know how advanced internal work on GPT-2 was by then.\n\n\n\n\n[Yudkowsky][15:13]\n2 – in lots of cases where I learned more specifics about X, and updated about Y, I had the experience of looking back and realizing that knowing anything specific about X would have predictably produced a directional update about Y.  like, knowing anything in particular about how the first AGI eats computation, would cause you to update far away from thinking that biological analogies to the computation consumed by humans were a good way to estimate how many computations an AGI needs to eat.  
you know lots of details about how humans consume watts of energy, and you know lots of details about how modern AI consumes watts, so it's very visible that these quantities are so incredibly different and go through so many different steps that they're basically unanchored from each other.\nI have specific ideas about how you get AGI that isn't just scaling up Stack More Layers, which lead me to think that the way to estimate the computational cost of it is not \"3e14 parameters trained at 1e16 ops per step for 1e13 steps, because that much computation and parameters seems analogous to human biology and 1e13 steps is given by past scaling laws\", a la recent OpenPhil publication.  But it seems to me that it should be possible to have the abstract insight that knowing more about general intelligence in AGIs or in humans would make the biological analogy look less plausible, because you wouldn't be matching up an unknown key to an unknown lock.\nUnfortunately I worry that this depends on some life experience with actual discoveries to get something this abstract-sounding on a gut level, because people basically never seem to make abstract updates of this kind when I try to point to them as predictable directional updates?\nBut, in principle, I'd hope there would be aspects of this where I could figure out how to show that any knowledge of specifics would probably update you in a predictable direction, even if it doesn't seem best for Earth for me to win that argument by giving specifics conditional on those specifics actually being correct, and it doesn't seem especially sound to win that argument by giving specifics that are wrong.\n\n\n\n[Ngo][15:17]\nI'm confused by this argument. Before I thought much about the specifics of the chimpanzee-human transition, I found the argument \"humans foomed (by biological standards) so AIs will too\" fairly compelling. But after thinking more about the specifics, it seems to me that the human foom was in part caused by a factor (sharp cultural shift) that won't be present when we train AIs.\n\n\n\n\n[Yudkowsky][15:17]\nsure, and other factors will be present in AIs but not in humans\n\n\n\n[Ngo][15:17]\nThis seems like a case where more specific knowledge updated me away from your position, contrary to what you're claiming.\n\n\n\n\n[Yudkowsky][15:18]\neg, human brains don't scale and mesh, while it's far more plausible that with AI you could just run more and more of it\nthat's a huge factor leading one to expect AI to scale faster than human brains did\nit's like communication between humans, but squared!\nthis is admittedly a specific argument and I'm not sure how it would abstract out to any specific argument\n\n\n\n[Ngo][15:20]\nAgain, this is an argument that I believed less after looking into the details, because right now it's pretty difficult to throw more compute at neural networks at runtime.\nWhich is not to say that it's a bad argument, the differences in compute-scalability between humans and AIs are clearly important. 
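(Purely as arithmetic on the figures Yudkowsky quotes above, which he is describing rather than endorsing: the hypothetical "3e14 parameters trained at 1e16 ops per step for 1e13 steps" multiplies out as follows.)

params_count = 3e14      # parameters, as quoted
ops_per_step = 1e16      # operations per training step, as quoted
train_steps  = 1e13      # training steps / samples, as quoted

total_ops = ops_per_step * train_steps                  # = 1e29 operations for the whole run
print(f"{total_ops:.0e} total training operations")     # -> 1e+29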
But I'm confused about the structure of your argument that knowing more details will predictably update me in a certain direction.\n\n\n\n\n[Yudkowsky][15:21]\nI suppose the genericized version of my actual response to that would be, \"architectures that have a harder time eating more compute are architectures which, for this very reason, are liable to need better versions invented of them, and this in particular seems like something that plausibly happens before scaling to general intelligence is practically possible\"\n\n\n\n[Soares][15:23]\n(Eliezer, I see Richard as requesting that you either back down from, or clarify, your claim that any specific observations about how much compute AI systems require will update him in a predictable direction.)\n\n\n\n\n[Ngo: ]\n\n\n\n\n\n\n\n\n[Yudkowsky][15:24]\nI'm not saying I know how to make that abstractized argument for exactly what Richard cares about, in part because I don't understand Richard's exact model, just that it's one way to proceed past the point where the obvious dilemma crops up of, \"If a theory about AGI capabilities is true, it is a disservice to Earth to speak it, and if a theory about AGI capabilities is false, an argument based on it is not sound.\"\n\n\n\n[Ngo][15:25]\nAh, I see.\n\n\n\n\n[Yudkowsky][15:26]\npossible viewpoint to try: that systems in general often have threshold functions as well as smooth functions inside them.\nonly in ignorance, then, do we imagine that the whole thing is one smooth function.\nthe history of humanity has a threshold function of, like, communication or culture or whatever.\nthe correct response to this is not, \"ah, so this was the unique, never-to-be-seen-again sort of fact which cropped up in the weirdly complicated story of humanity in particular, which will not appear in the much simpler story of AI\"\nthis only sounds plausible because you don't know the story of AI so you think it will be a simple story\nthe correct generalization is \"guess some weird thresholds will also pop up in whatever complicated story of AI will appear in the history books\"\n\n\n\n[Ngo][15:28]\nHere's a quite general argument about why we shouldn't expect too many threshold functions in the impact of AI: because at any point, humans will be filling in the gaps of whatever AIs can't do. (The lack of this type of smoothing is, I claim, why culture was a sharp threshold for humans – if there had been another intelligent species we could have learned culture from, then we would have developed more gradually.)\n\n\n\n\n[Yudkowsky][15:30]\nsomething like this indeed appears in my model of why I expect not much impact on GDP before AGI is powerful enough to bypass human economies entirely\nduring the runup phase, pre-AGI won't be powerful to do \"whole new things\" that depend on doing lots of widely different things that humans can't do\njust marginally new things that depend on doing one thing humans can't do, or can do but a bunch worse\n\n\n\n[Ngo][15:31]\nOkay, that's good to know.\nWould this also be true in a civilisation of village idiots?\n\n\n\n\n[Yudkowsky][15:32]\nthere will be sufficient economic reward for building out industries that are mostly human plus one thing that pre-AGI does, and people will pocket those economic rewards, go home, and not be more ambitious than that.  
I have trouble empathically grasping why almost all the CEOs are like this in our current Earth, because I am very much not like that myself, but observationally, the current Earth sure does seem to behave like rich people would almost uniformly rather not rock the boat too much.\nI did not understand the whole thing about village idiots actually\ndo you want to copy and paste the document, or try rephrasing the argument?\n\n\n\n[Ngo][15:35]\nRephrasing:\nClaim 1: AIs will be better at doing scientific research (and other similar tasks) than village idiots, before we reach AGI.\nClaim 2: Village idiots still have the core of general intelligence (which you claim chimpanzees don't have).\nClaim 3: It would be surprising if narrow AI's research capabilities fell specifically into the narrow gap between village idiots and Einsteins, given that they're both general intelligences and are very similar in terms of architecture, algorithms, etc.\n(If you deny claim 2, then we can substitute, say, someone at the 10th percentile of human intelligence – I don't know what specific connotations \"village idiot\" has to you.)\n\n\n\n\n[Yudkowsky][15:37]\nMy models do not have an easy time of visualizing \"as generally intelligent as a chimp, but specialized to science research, gives you superhuman scientific capability and the ability to make progress in novel areas of science\".\n(this is a reference back to the pre-rephrase in the document)\nit seems like, I dunno, \"gradient descent can make you generically good at anything without that taking too much general intelligence\" must be a core hypothesis there?\n\n\n\n[Ngo][15:39]\nI mean, we both agree that gradient descent can produce some capabilities without also producing much general intelligence. But claim 1 plus your earlier claims that narrow AIs won't surpass humans at scientific research, lead to the implication that the limitations of gradient-descent-without-much-general-intelligence fall in a weirdly narrow range.\n\n\n\n\n[Yudkowsky][15:42]\nI do credit the Village Idiot to Einstein Interval with being a little broader as a target than I used to think, since the Alpha series of Go-players took a couple of years to go from pro to world-beating even once they had a scalable algorithm.  Still seems to me that, over time, the wall clock time to traverse those ranges has been getting shorter, like Go taking less time than Chess.  
My intuitions still say that it'd be quite weird to end up hanging out for a long time with AGIs that conduct humanlike conversations and are ambitious enough to run their own corporations while those AGIs are still not much good at science.\nBut on my present model, I suspect the limitations of \"gradient-descent-without-much-general-intelligence\" to fall underneath the village idiot side?\n\n\n\n[Ngo][15:43]\nOh, interesting.\nThat seems like a strong prediction\n\n\n\n\n[Yudkowsky][15:43]\nYour model, as I understand it, is saying, \"But surely, GD-without-GI must suffice to produce better scientists than village idiots, by specializing chimps on science\" and my current reply, though it's not a particular question I've thought a lot about before, is, \"That… does not quite seem to me like a thing that should happen along the mainline?\"\nthough, as always, in the limit of superintelligences doing things, or our having the Textbook From The Future, we could build almost any kind of mind on purpose if we knew how, etc.\n\n\n\n[Ngo][15:44]\nFor example, I expect that if I prompt GPT-3 in the right way, it'll say some interesting and not-totally-nonsensical claims about advanced science.\nWhereas it would be very hard to prompt a village idiot to do the same.\n\n\n\n\n[Yudkowsky][15:44]\neg, a superintelligence could load up chimps with lots of domain-specific knowledge they were not generally intelligent enough to learn themselves.\nehhhhhh, it is not clear to me that GPT-3 is better than a village idiot at advanced science, even in this narrow sense, especially if the village idiot is allowed some training\n\n\n\n[Ngo][15:46]\nIt's not clear to me either. But it does seem plausible, and then it seems even more plausible that this will be true of GPT-4\n\n\n\n\n[Yudkowsky][15:46]\nI wonder if we're visualizing different village idiots\nmy choice of \"village idiot\" originally was probably not the best target for visualization, because in a lot of cases, a village idiot – especially the stereotype of a village idiot – is, like, a damaged general intelligence with particular gears missing?\n\n\n\n[Ngo][15:47]\nI'd be happy with \"10th percentile intelligence\"\n\n\n\n\n[Yudkowsky][15:47]\nwhereas it seems like what you want is something more like \"Homo erectus but it has language\"\noh, wow, 10th percentile intelligence?\nthat's super high\nGPT-3 is far far out of its league\n\n\n\n[Ngo][15:49]\nI think GPT-3 is far below this person's league in a lot of ways (including most common-sense reasoning) but I become much less confident when we're talking about abstract scientific reasoning.\n\n\n\n\n[Yudkowsky][15:51]\nI think that if scientific reasoning were as easy as you seem to be imagining(?), the publication factories of the modern world would be much more productive of real progress.\n\n\n\n[Ngo][15:51]\nWell, a 10th percentile human is very unlikely to contribute to real scientific progress either way\n\n\n\n\n[Yudkowsky][15:53]\nLike, on my current model of how the world really works, China pours vast investments into universities and sober-looking people with PhDs and classes and tests and postdocs and journals and papers; but none of this is the real way of Science which is actually, secretly, unbeknownst to China, passed down in rare lineages and apprenticeships from real scientist mentor to real scientist student, and China doesn't have much in the way of lineages so the extra money they throw at stuff doesn't turn into real science.\n\n\n\n[Ngo][15:52]\nCan you think of any 
clear-cut things that they could do and GPT-3 can't?\n\n\n\n\n[Yudkowsky][15:53]\nLike… make sense… at all?  Invent a handaxe when nobody had ever seen a handaxe before?\n\n\n\n[Ngo][15:54]\nYou're claiming that 10th percentile humans invent handaxes?\n\n\n\n\n[Yudkowsky][15:55]\nThe activity of rearranging scientific sentences into new plausible-sounding paragraphs is well within the reach of publication factories, in fact, they often use considerably more semantic sophistication than that, and yet, this does not cumulate into real scientific progress even in quite large amounts.\nI think GPT-3 is basically just Not Science Yet to a much greater extent than even these empty publication factories.\nIf 10th percentile humans don't invent handaxes, GPT-3 sure as hell doesn't.\n\n\n\n[Ngo][15:55]\nI don't think we're disagreeing. Publication factories are staffed with people who do better academically than 90+% of all humans.\nIf 90th-percentile humans are very bad at science, then of course GPT-3 and 10th-percentile humans are very very bad at science. But it still seems instructive to compare them (e.g. on tasks like \"talk cogently about a complex abstract topic\")\n\n\n\n\n[Yudkowsky][15:58]\nI mean, while it is usually weird for something to be barely within a species's capabilities while being within those capabilities at all, such that only relatively smarter individual organisms can do it, in the case of something that a social species has only very recently started to do collectively, it's plausible that the thing appeared at the point where it was barely accessible to the smartest members.  Eg, it wouldn't be surprising if it would have taken a long time or forever for humanity to invent science from scratch, if all the Francis Bacons and Newtons and even average-intelligence people were eliminated leaving only the bottom 10%.  Because our species just started doing that, at the point where our species was barely able to start doing that, meaning, at the point where some rare smart people could spearhead it, historically speaking.  It's not obvious whether or not less smart people can do it over a longer time.\nI'm not sure we disagree much about the human part of this model.\nMy guess is that our disagreement is more about GPT-3.\n\"Talk 'cogently' about a complex abstract topic\" doesn't seem like much of anything significant to me, if GPT-3 is 'cogent'.  It fails to pass the threshold for inventing science and, I expect, for most particular sciences.\n\n\n\n[Ngo][16:00]\nHow much training do you think a 10th-percentile human would need in a given subject matter (say, economics) before they could answer questions as well as GPT-3 can?\n(Right now I think GPT-3 does better by default because it at least recognises the terminology, whereas most humans don't at all.)\n\n\n\n\n[Yudkowsky][16:01]\nI also expect that if you offer a 10th-percentile human lots of money, they can learn to talk more cogently than GPT-3 about narrower science areas.  
GPT-3 is legitimately more well-read at its lower level of intelligence, but train the 10-percentiler in a narrow area and they will become able to write better nonsense about that narrow area.\n\n\n\n[Ngo][16:01]\nThis sounds like an experiment we can actually run.\n\n\n\n\n[Yudkowsky][16:02]\nLike, what we've got going on here is a real breadth advantage that GPT-3 has in some areas, but the breadth doesn't add up because it lacks the depth of a 10%er.\n\n\n\n[Ngo][16:02]\nIf we asked them to read a single introductory textbook and then quiz both them and GPT-3 about items covered in that textbook, do you expect that the human would come out ahead?\n\n\n\n\n[Yudkowsky][16:02]\nAI has figured out how to do a subhumanly shallow kind of thinking, and it is to be expected that when AI can do anything at all, it can soon do more of that thing than the whole human species could do.\nNo, that's nothing remotely like giving the human the brief training the human needs to catch up to GPT-3's longer training.\nA 10%er does not learn in an instant – they learn faster than GPT-3, but not in an instant.\nThis is more like a scenario of paying somebody to, like, sit around for a year with an editor, learning how to mix-and-match economics sentences until they can learn to sound more like they're making an argument than GPT-3 does, despite still not understanding any economics.\nA lot of the learning would just go into producing sensible-sounding nonsense at all, since lots of 10%ers have not been to college and have not learned how to regurgitate rearranged nonsense for college teachers.\n\n\n\n[Ngo][16:05]\nWhat percentage of humans do you think could learn to beat GPT-3's question-answering by reading a single textbook over, say, a period of a month?\n\n\n\n\n[Yudkowsky][16:06]\n¯\\_(ツ)_/¯\n\n\n\n[Ngo][16:06]\nMore like 0.5 or 5 or 50?\n\n\n\n\n[Yudkowsky][16:06]\nHumans cannot in general pass the Turing Test for posing as AIs!\nWhat percentage of humans can pass as a calculator by reading an arithmetic textbook?\nZero!\n\n\n\n[Ngo][16:07]\nI'm not asking them to mimic GPT-3, I'm asking them to produce better answers.\n\n\n\n\n[Yudkowsky][16:07]\nThen it depends on what kind of answers!\nI think a lot of 10%ers could learn to do wedding-cake multiplication, if sufficiently well-paid as adults rather than being tortured in school, out to 6 digits, thus handily beating the current GPT-3 at 'multiplication'.\n\n\n\n[Ngo][16:08]\nFor example: give them an economics textbook to study for a month, then ask them what inflation is, whether it goes up or down if the government prints more money, whether the price of something increases or decreases when the supply increases.\n\n\n\n\n[Yudkowsky][16:09]\nGPT-3 did not learn to produce its responses by reading textbooks.\nYou're not matching the human's data to GPT-3's data.\n\n\n\n[Ngo][16:10]\nI know, this is just the closest I can get in an experiment that seems remotely plausible to actually run.\n\n\n\n\n[Yudkowsky][16:10]\nYou would want to collect, like, 1,000 Reddit arguments about inflation, and have the human read that, and have the human produce their own Reddit arguments, and have somebody tell them whether they sounded like real Reddit arguments or not.\nThe textbook is just not the same thing at all.\nI'm not sure we're at the core of the argument, though.\nTo me it seems like GPT-3 is allowed to be superhuman at producing remixed and regurgitated sentences about economics, because this is about as relevant to Science talent as a calculator being 
able to do perfect arithmetic, only less so.\n\n\n\n[Ngo][16:15]\nSuppose that the remixed and regurgitated sentences slowly get more and more coherent, until GPT-N can debate with a professor of economics and sustain a reasonable position.\n\n\n\n\n[Yudkowsky][16:15]\nAre these points that GPT-N read elsewhere on the Internet, or are they new good points that no professor of economics on Earth has ever made before?\n\n\n\n[Ngo][16:15]\nI guess you don't expect this to happen, but I'm trying to think about what experiments we could run to get evidence for or against it.\nThe latter seems both very hard to verify, and also like a very high bar – I'm not sure if most professors of economics have generated new good arguments that no other professor has ever made before.\nSo I guess the former.\n\n\n\n\n[Yudkowsky][16:18]\nThen I think that you can do this without being able to do science.  It's a lot like if somebody with a really good memory was lucky enough to have read that exact argument on the Internet yesterday, and to have a little talent for paraphrasing.  Not by coincidence, having this ability gives you – on my model – no ability to do science, invent science, be the first to build handaxes, or design nanotechnology.\nI admit, this does reflect my personal model of how Science works, presumably not shared by many leading bureaucrats, where in fact the papers full of regurgitated scientific-sounding sentences are not accomplishing much.\n\n\n\n[Ngo][16:20]\nSo it seems like your model doesn't rule out narrow AIs producing well-reviewed scientific papers, since you don't trust the review system very much.\n\n\n\n\n[Yudkowsky][16:23]\nI'm trying to remember whether or not I've heard of that happening, like, 10 years ago.\nMy vague recollection is that things in the Sokal Hoax genre where the submissions succeeded, used humans to hand-generate the nonsense rather than any submissions in the genre having been purely machine-generated.\n\n\n\n[Ngo][16:24]\nWhich doesn't seem like an unreasonable position, but it does make it harder to produce tests that we have opposing predictions on.\n\n\n\n\n[Yudkowsky][16:24]\nObviously, that doesn't mean it couldn't have been done 10 years ago, because 10 years ago it's plausibly a lot easier to hand-generate passing nonsense than to write an AI program that does it.\noh, wait, I'm wrong!\nhttps://news.mit.edu/2015/how-three-mit-students-fooled-scientific-journals-0414\n\nIn April of 2005 the team's submission, \"Rooter: A Methodology for the Typical Unification of Access Points and Redundancy,\" was accepted as a non-reviewed paper to the World Multiconference on Systemics, Cybernetics and Informatics (WMSCI), a conference that Krohn says is known for \"being spammy and having loose standards.\"\n\n \n\nin 2013 IEEE and Springer Publishing removed more than 120 papers from their sites after a French researcher's analysis determined that they were generated via SCIgen\n\n\n\n\n[Ngo][16:26]\nOh, interesting\nMeta note: I'm not sure where to take the direction of the conversation at this point. Shall we take a brief break?\n\n\n\n\n[Yudkowsky][16:27]\n\nThe creators continue to get regular emails from computer science students proudly linking to papers they've snuck into conferences, as well as notes from researchers urging them to make versions for other disciplines.\n\nSure! Resume 5p?\n\n\n\n[Ngo][16:27]\nYepp\n\n\n\n \n14.2. Domain-specific heuristics and nanotechnology\n \n\n[Soares][16:41]\nA few takes:\n1. 
It looks to me like there's some crux in \"how useful will the 'shallow' stuff get before dangerous things happen\". I would be unsurprised if this spiraled back into the gradualness debate. I'm excited about attempts to get specific and narrow disagreements in this domain (not necessarily bettable; I nominate distilling out specific disagreements before worrying about finding bettable ones).\n2. It seems plausible to me we should have some much more concrete discussion about possible ways things could go right, according to Richard. I'd be up for playin the role of beeping when things seem insufficiently concrete.\n3. It seems to me like Richard learned a couple things about Eliezer's model in that last bout of conversation. I'd be interested to see him try to paraphrase his current understanding of it, and to see Eliezer produce beeps where it seems particularly off.\n\n\n\n\n[Yudkowsky][17:00]\n\n\n\n\n[Ngo][17:02]\nHmm, I'm not sure that I learned too much about Eliezer's model in this last round.\n\n\n\n\n[Soares][17:03]\n(dang :-p)\n\n\n\n\n[Ngo][17:03]\nIt seems like Eliezer thinks that the returns of scientific investigation are very heavy-tailed.\nWhich does seem pretty plausible to me.\nBut I'm not sure how useful this claim is for thinking about the development of AI that can do science.\nI attempted in my document to describe some interventions that would help things go right.\nAnd the levels of difficulty involved.\n\n\n\n\n[Yudkowsky][17:07]\n(My model is something like: there are some very shallow steps involved in doing science, lots of medium steps, occasional very deep steps, assembling the whole thing into Science requires having all the lego blocks available.  As soon as you look at anything with details, it ends up 'heavy-tailed' because it has multiple pieces and says how things don't work if all the pieces aren't there.)\n\n\n\n[Ngo][17:08]\nEliezer, do you have an estimate of how much slower science would proceed if everyone's IQs were shifted down by, say, 30 points?\n\n\n\n\n[Yudkowsky][17:10]\nIt's not obvious to me that science proceeds significantly past its present point.  I would not have the right to be surprised if Reality told me the correct answer was that a civilization like that just doesn't reach AGI, ever.\n\n\n\n[Ngo][17:12]\nDoesn't your model take a fairly big hit from predicting that humans just happen to be within 30 IQ points of not being able to get any more science?\nIt seems like a surprising coincidence.\nOr is this dependent on the idea that doing science is much harder now than it used to be?\nAnd so if we'd been dumber, we might have gotten stuck before newtonian mechanics, or else before relativity?\n\n\n\n\n[Yudkowsky][17:13]\nNo, humanity is exactly the species that finds it barely possible to do science.\n\n\n\n[Ngo][17:14]\nIt seems to me like humanity is exactly the species that finds it barely possible to do civilisation.\n\n\n\n\n[Yudkowsky][17:14]\nIf it were possible to do it with less intelligence, we'd be having this conversation over the Internet that we'd developed with less intelligence.\n\n\n\n[Ngo][17:15]\nAnd it seems like many of the key inventions that enabled civilisation weren't anywhere near as intelligence-bottlenecked as modern science.\n\n\n\n\n[Yudkowsky][17:15]\nYes, it does seem that there's quite a narrow band between \"barely smart enough to develop agriculture\" and \"barely smart enough to develop computers\"! 
Though there were genuinely fewer people in the preagricultural world, with worse nutrition and no Ashkenazic Jews, and there's the whole question about to what degree the reproduction of the shopkeeper class over several centuries was important to the Industrial Revolution getting started.\n\n\n\n[Ngo][17:15]\n(e.g. you'd get better spears or better plows or whatever just by tinkering, whereas you'd never get relativity just by tinkering)\n\n\n\n\n[Yudkowsky][17:17]\nI model you as taking a lesson from this which is something like… you can train up a villager to be John von Neumann by spending some evolutionary money on giving them science-specific brain features, since John von Neumann couldn't have been much more deeply or generally intelligent, and you could spend even more money and make a chimp a better scientist than John von Neumann.\nMy model is more like, yup, the capabilities you need to invent aqueducts sure do generalize the crap out of things, though also at the upper end of cognition there are compounding returns which can bring John von Neumann into existence, and also also there's various papers suggesting that selection was happening really fast over the last few millennia and real shifts in cognition shouldn't be ruled out.  (This last part is an update to what I was thinking when I wrote Intelligence Explosion Microeconomics, and is from my own perspective a more gradualist line of thinking, because it means there's a wider actual target to traverse before you get to von Neumann.)\n\n\n\n[Ngo][17:20]\nIt's not that \"von Neumann isn't much more deeply generally intelligent\", it's more like \"domain-specific heuristics and instincts get you a long way\". E.g. soccer is a domain where spending evolutionary money on specific features will very much help you beat von Neumann, and so is art, and so is music.\n\n\n\n\n[Yudkowsky][17:20]\nMy skepticism here is that there's a version of, like, \"invent nanotechnology\" which routes through just the shallow places, which humanity stumbles over before we stumble over deep AGI.\n\n\n\n[Ngo][17:21]\nWould you be comfortable publicly discussing the actual cognitive steps which you think would be necessary for inventing nanotechnology?\n\n\n\n\n[Yudkowsky][17:23]\nIt should not be overlooked that there's a very valid sibling of the old complaint \"Anything you can do ceases to be AI\", which is that \"Things you can do with surprisingly-to-your-model shallow cognition are precisely the things that Reality surprises you by telling you that AI can do earlier than you expected.\"  When we see GPT-3, we were getting some amount of real evidence about AI capabilities advancing faster than I expected, and some amount of evidence about GPT-3's task being performable using shallower cognition than expected.\nMany people were particularly surprised by Go because they thought that Go was going to require deeper real thought than chess.\nAnd I think AlphaGo probably was thinking in a legitimately deeper way than Deep Blue.  
Just not as much deeper as Douglas Hofstadter thought it would take.\nConversely, people thought a few years ago that driving cars really seemed to be the sort of thing that machine learning would be good at, and were unpleasantly surprised by how the last 0.1% of driving conditions were resistant to shallow techniques.\nDespite the inevitable fact that some surprises of this kind now exist, and that more such surprises will exist in the future, it continues to seem to me that science-and-engineering on the level of \"invent nanotech\" still seems pretty unlikely to be easy to do with shallow thought, by means that humanity discovers before AGI tech manages to learn deep thought?\nWhat actual cognitive steps?  Outside-the-box thinking, throwing away generalizations that governed your previous answers and even your previous questions, inventing new ways to represent your questions, figuring out which questions you need to ask and developing plans to answer them; these are some answers that I hope will be sufficiently useless to AI developers that it is safe to give them, while still pointing in the direction of things that have an un-GPT-3-like quality of depth about them.\nDoing this across unfamiliar domains that couldn't be directly trained in by gradient descent because they were too expensive to simulate a billion examples of\nIf you have something this powerful, why is it not also noticing that the world contains humans?  Why is it not noticing itself?\n\n\n\n[Ngo][17:30]\nIf humans were to invent this type of nanotech, what do you expect the end intellectual result to be?\nE.g. consider the human knowledge involved in building cars\nThere are thousands of individual parts, each of which does a specific thing\n\n\n\n\n[Yudkowsky][17:30]\nUhhhh… is there a reason why \"Eric Drexler's Nanosystems but, like, the real thing, modulo however much Drexler did not successfully Predict the Future about how to do that, which was probably a lot\" is not the obvious answer here?\n\n\n\n[Ngo][17:31]\nAnd some deep principles governing engines, but not really very crucial ones to actually building (early versions of) those engines\n\n\n\n\n[Yudkowsky][17:31]\nthat's… not historically true at all?\ngetting a grip on quantities of heat and their flow was critical to getting steam engines to work\nit didn't happen until the math was there\n\n\n\n[Ngo][17:32]\nAh, interesting\n\n\n\n\n[Yudkowsky][17:32]\nmaybe you can be a mechanic banging on an engine that somebody else designed, around principles that somebody even earlier invented, without a physics degree\nbut, like, engineers have actually needed math since, like, that's been a thing, it wasn't just a prestige trick\n\n\n\n[Ngo][17:34]\nOkay, so you expect there to be a bunch of conceptual work in finding equations which govern nanosystems.\n\nUhhhh… is there a reason why \"Eric Drexler's Nanosystems but, like, the real thing, modulo however much Drexler did not successfully Predict the Future about how to do that, which was probably a lot\" is not the obvious answer here?\n\nThis may in fact be the answer; I haven't read it though.\n\n\n\n\n[Yudkowsky][17:34]\nor other abstract concepts than equations, which have never existed before\nlike, maybe not with a type signature unknown to humanity, but with specific instances unknown to present humanity\nthat's what I'd expect to see from humanly designed nanosystems\n\n\n\n[Ngo][17:35]\nSo something like AlphaFold is only doing a very small proportion of the work here, since it's not able to generate 
new abstract concepts (of the necessary level of power)\n\n\n\n\n[Yudkowsky][17:35]\nyeeeessss, that is why DeepMind did not take over the world last year\nit's not just that AlphaFold lacks the concepts but that it lacks the machinery to invent those concepts and the machinery to do anything with such concepts\n\n\n\n[Ngo][17:38]\nI think I find this fairly persuasive, but I also expect that people will come up with increasingly clever ways to leverage narrow systems so that they can do more and more work.\n(including things like: if you don't have enough simulations, then train another narrow system to help fix that, etc)\n\n\n\n\n[Yudkowsky][17:39]\n(and they will accept their trivial billion-dollar-payouts and World GDP will continue largely undisturbed, on my mainline model, because it will be easiest to find ways to make money by leveraging narrow systems on the less regulated, less real parts of the economy, instead of trying to build houses or do medicine, etc.)\nreal tests being expensive, simulation being impossibly expensive, and not having enough samples to train your civilization's current level of AI technology, is not a problem you can solve by training a new AI to generate samples, because you do not have enough samples to train your civilization's current level of AI technology to generate more samples\n\n\n\n[Ngo][17:41]\nThinking about nanotech makes me more sympathetic to the argument that developing general intelligence will bring a sharp discontinuity. But it also makes me expect longer timelines to AGI, during which there's more time to do interesting things with narrow AI. So I guess it weighs more against Dario's view, less against Paul's view.\n\n\n\n\n[Yudkowsky][17:41]\nwell, I've been debating Paul about that separately in the timelines channel, not sure about recapitulating it here\nbut in broad summary, since I expect the future to look like it was drawn from the \"history book\" barrel and not the \"futurism\" barrel, I expect huge barriers to doing huge things with narrow AI in small amounts of time; you can sell waifutech because it's unregulated and hard to regulate, but that doesn't feed into core mining and steel production.\nwe could already have double the GDP if it was legal to build houses and hire people, etc., and the change brought by pre-AGI will perhaps be that our GDP could quadruple instead of just double if it was legal to do things, but that will not make it legal to do things, and why would anybody try to do things and probably fail when there are easier $36 billion profits to be made in waifutech.\n\n\n \n \n14.3. Relatively shallow cognition, Go, and math\n \n\n[Ngo][17:45]\nI'd be interested to see Paul's description of how we would train AIs to solve hard scientific problems. I think there's some prediction that's like \"we train it on arxiv and fine-tune it until it starts to output credible hypotheses about nanotech\". 
And this seems like it has a step that's quite magical to me, but perhaps that'll be true of any prediction that I make before fully understanding how intelligence works.\n\n\n\n\n[Yudkowsky][17:46]\nmy belief is not so much that this training can never happen, but that this probably means the system was trained beyond the point of safe shallowness\nnot in principle over all possible systems a superintelligence could build, but in practice when it happens on Earth\nmy only qualm about this is that current techniques make it possible to buy shallowness in larger quantities than this Earth has ever seen before, and people are looking for surprising ways to make use of that\nso I weigh in my mind the thought of Reality saying Gotcha! by handing me a headline I read tomorrow about how GPT-4 has started producing totally reasonable science papers that are actually correct\nand I am pretty sure that exact thing doesn't happen\nand I ask myself about GPT-5 in a few more years, which had the same architecture as GPT-3 but more layers and more training, doing the same thing\nand it's still largely \"nope\"\nthen I ask myself about people in 5 years being able to use the shallow stuff in any way whatsoever to produce the science papers\nand of course the answer there is, \"okay, but is it doing that without having shallowly learned stuff that adds up to deep stuff which is why it can now do science\"\nand I try saying back \"no, it was born of shallowness and it remains shallow and it's just doing science because it turns out that there is totally a way to be an incredibly mentally shallow skillful scientist if you think 10,000 shallow thoughts per minute instead of 1 deep thought per hour\"\nand my brain is like, \"I cannot absolutely rule it out but it really seems like trying to call the next big surprise in 2014 and you guess self-driving cars instead of Go because how the heck would you guess that Go was shallower than self-driving cars\"\nlike, that is an imaginable surprise\n\n\n\n[Ngo][17:52]\nOn that particular point it seems like the very reasonable heuristic of \"pick the most similar task\" would say that go is like chess and therefore you can do it shallowly.\n\n\n\n\n[Yudkowsky][17:52]\nbut there's a world of difference between saying that a surprise is imaginable, and that it wouldn't surprise you\n\n\n\n[Ngo][17:52]\nI wasn't thinking that much about AI at that point, so you're free to call that post-hoc.\n\n\n\n\n[Yudkowsky][17:52]\nthe Chess techniques had already failed at Go\nactual new techniques were required\nthe people around at the time had witnessed sudden progress on self-driving cars a few years earlier\n\n\n\n[Ngo][17:53]\nMy advance prediction here is that \"math is like go and therefore can be done shallowly\".\n\n\n\n\n[Yudkowsky][17:53]\nself-driving cars were of obviously greater economic interest as well\nmy recollection is that talk of the time was about self-driving\nheh! I have the same sense.\nthat is, math being shallower than science.\nthough perhaps not as shallow as Go, and you will note that Go has fallen and Math has not\n\n\n\n[Ngo][17:54]\nright\nI also expect that we'll need new techniques for math (although not as different from the go techniques as the go techniques were from chess techniques)\nBut I guess we're not finding strong disagreements here either.\n\n\n\n\n[Yudkowsky][17:57]\nif Reality came back and was like \"Wrong! 
Keeping up with the far reaches of human mathematics is harder than being able to develop your own nanotech,\" I would be like \"What?\" to about the same degree as being \"What?\" on \"You can build nanotech just by thinking trillions of thoughts that are too shallow to notice humans!\"\n\n\n\n[Ngo][17:58]\nPerhaps let's table this topic and move on to one of the others Nate suggested? I'll note that walking through the steps required to invent a science of nanotechnology does make your position feel more compelling, but I'm not sure how much of that is the general \"intelligence is magic\" intuition I mentioned before.\n\n\n\n\n[Yudkowsky][17:59]\nHow do you suspect your beliefs would shift if you had any detailed model of intelligence?\nConsider trying to imagine a particular wrong model of intelligence and seeing what it would say differently?\n(not sure this is a useful exercise and we could indeed try to move on)\n\n\n\n[Ngo][18:01]\nI think there's one model of intelligence where scientific discovery is more actively effortful – as in, you need to be very goal-directed in determining hypotheses, testing hypotheses, and so on.\nAnd there's another in which scientific discovery is more constrained by flashes of insight, and the systems which are producing those flashes of insight are doing pattern-matching in a way that's fairly disconnected from the real-world consequences of those insights.\n\n\n\n\n[Yudkowsky][18:05]\nThe first model is true and the second one is false, if that helps.  You can tell this by contemplating where you would update if you learned any model, by considering that things look more disconnected when you can't see the machinery behind them.  If you don't know what moves the second hand on a watch and the minute hand on a watch, they could just be two things that move at different rates for completely unconnected reasons; if you can see inside the watch, you'll see that the battery is shared and the central timing mechanism is shared and then there's a few gears to make the hands move at different rates.\nLike, in my ontology, the notion of \"effortful\" doesn't particularly parse as anything basic, because it doesn't translate over into paperclip maximizers, which are neither effortful nor effortless.\nBut in a human scientist you've got thoughts being shoved around by all sorts of processes behind the curtains, created by natural selection, some of them reflecting shards of Consequentialism / shadowing paths through time\nThe flashes of insight come to people who were looking in nonrandom places\nIf they didn't plan deliberately and looked on pure intuition, they looked with an intuition trained by past success and failure\nSomebody walking doesn't plan to walk, but long ago as a baby they learned from falling over, and their ancestors who fell over more didn't reproduce\n\n\n\n[Ngo][18:09]\nI think the first model is probably more true for humans in the domain of science. But I'm uncertain about the extent to which this because humans have not been optimised very much for doing science. If we consider the second model in a domain that humans have actually been optimised very hard for (say, physical activity) – then maybe we can use the analogy of a coach and a player. The coach can tell the player what to practice, but almost all the work is done by the player practicing in a way which updates their intuitions.\nThis has become very abstract, though.\n\n\n\n \n14.4. Pivotal acts and historical precedents\n \n\n[Ngo][18:11]\n\nA few takes:\n1. 
It looks to me like there's some crux in \"how useful will the 'shallow' stuff get before dangerous things happen\". I would be unsurprised if this spiraled back into the gradualness debate. I'm excited about attempts to get specific and narrow disagreements in this domain (not necessarily bettable; I nominate distilling out specific disagreements before worrying about finding bettable ones).\n2. It seems plausible to me we should have some much more concrete discussion about possible ways things could go right, according to Richard. I'd be up for playin the role of beeping when things seem insufficiently concrete.\n3. It seems to me like Richard learned a couple things about Eliezer's model in that last bout of conversation. I'd be interested to see him try to paraphrase his current understanding of it, and to see Eliezer produce beeps where it seems particularly off.\n\nHere's Nate's comment.\nWe could try his #2 suggestion: concrete ways that things could go right.\n\n\n\n\n[Soares][18:12]\n(I am present and am happy to wield the concreteness-hammer)\n\n\n\n\n[Ngo][18:13]\nI think I'm a little cautious about this line of discussion, because my model doesn't strongly constrain the ways that different groups respond to increasing developments in AI. The main thing I'm confident about is that there will be much clearer responses available to us once we have a better picture of AI development.\nE.g. before modern ML, the option of international constraints on compute seemed much less salient, because algorithmic developments seemed much more important.\nWhereas now, tracking/constraining compute use seems like one promising avenue for influencing AGI development.\nOr in the case of nukes, before knowing the specific details about how they were constructed, it would be hard to give a picture of how arms control goes well. But once you know more details about the process of uranium enrichment, you can construct much more efficacious plans.\n\n\n\n\n[Yudkowsky][18:19]\nOnce we knew specific things about bioweapons, countries developed specific treaties for controlling them, which failed (according to @CarlShulman)\n\n\n\n[Ngo][18:19, moved two down in log]\n(As a side note, I think that if Eliezer had been around in the 1930s, and you described to him what actually happened with nukes over the next 80 years, he would have called that \"insanely optimistic\".)\n\n\n\n\n[Yudkowsky][18:21]\nMmmmmmaybe.  Do note that I tend to be more optimistic than the average human about, say, global warming, or everything in transhumanism outside of AGI.\nNukes have going for them that, in fact, nobody has an incentive to start a global thermonuclear war.  Eliezer is not in fact pessimistic about everything and views his AGI pessimism as generalizing to very few other things, which are not, in fact, as bad as AGI.\n\n\n\n[Ngo][18:21]\nI think I put this as the lowest application of competent power out of the things listed in my doc; I'd need to look at the historical details to know if important decision-makers actually cared about it, or were just doing it for PR reasons.\n\n\n\n\n[Shulman][18:22]\n\nOnce we knew specific things about bioweapons, countries developed specific treaties for controlling them, which failed (according to @CarlShulman)\n\nThe treaties were pro forma without verification provisions because the powers didn't care much about bioweapons. 
They did have verification for nuclear and chemical weapons which did work.\n\n\n\n\n[Yudkowsky][18:22]\nBut yeah, compared to pre-1946 history, nukes actually kind of did go really surprisingly well!\nLike, this planet used to be a huge warring snakepit of Great Powers and Little Powers and then nukes came along and people actually got serious and decided to stop having the largest wars they could fuel.\n\n\n\n[Shulman][18:22][18:23]\nThe analog would be an international agreement to sign a nice unenforced statement of AI safety principles and then all just building AGI in doomy ways without explicitly saying they're doing it..\n\n\n\n\n\nThe BWC also allowed 'defensive' research that is basically as bad as the offensive kind.\n\n\n\n\n[Yudkowsky][18:23]\n\nThe analog would be an international agreement to sign a nice unenforced statement of AI safety principles and then all just building AGI in doomy ways without explicitly saying they're doing it..\n\nThis scenario sure sounds INCREDIBLY PLAUSIBLE, yes\n\n\n\n[Ngo][18:22]\nOn that point: do either of you have strong opinions about the anthropic shadow argument about nukes? That seems like one reason why the straw 1930s-Eliezer I just cited would have been justified.\n\n\n\n\n[Yudkowsky][18:23]\nI mostly don't consider the anthropic shadow stuff\n\n\n\n[Shulman][18:24]\nIn the late Cold War Gorbachev and Reagan might have done the BWC treaty+verifiable dismantling, but they were in a rush on other issues like nukes and collapse of the USSR.\nPutin just wants to keep his bioweapons program, it looks like. Even denying the existence of the exposed USSR BW program.\n\n\n\n\n[Yudkowsky][18:25]\nI'm happy making no appeal to anthropics here.\n\n\n\n[Shulman][18:25]\nBoo anthropic shadow claims. Always dumb.\n(Sorry I was only invoked for BW, holding my tongue now.)\n\n\n\n\n[Yudkowsky: ]\n[Soares: ]\n\n\n\n\n\n\n\n\n[Yudkowsky][18:26]\nThere may come a day when the strength of nonanthropic reasoning fails… but that is not this day!\n\n\n\n[Ngo][18:27]\nOkay, happy to rule that out for now too. So yeah, I picture 1930s-Eliezer pointing to technological trends and being like \"by default, 30 years after the first nukes are built, you'll be able to build one in your back yard. And governments aren't competent enough to stop that happening.\"\nAnd I don't think I could have come up with a compelling counterargument back then.\n\n\n\n\n[Soares][18:27]\n\n[Sorry I was only invoked for BW, holding my tongue now.]\n\n(fwiw, I thought that when Richard asked \"you two\" re: anthropic shadow, he meant you also. But I appreciate the caution. And in case Richard meant me, I will note that I agree w/ Carl and Eliezer on this count.)\n\n\n\n\n[Ngo][18:28]\n\n(fwiw, I thought that when Richard asked \"you two\" re: anthropic shadow, he meant you also. But I appreciate the caution. And in case Richard meant me, I will note that I agree w/ Carl and Eliezer on this count.)\n\nOh yeah, sorry for the ambiguity, I meant Carl.\nI do believe that AI control will be more difficult than nuclear control, because AI is so much more useful. 
But I also expect that there will be many more details about AI development that we don't currently understand, that will allow us to influence it (because AGI is a much more complicated concept than \"really really big bomb\").\n\n\n\n\n[Yudkowsky][18:29]\n\n[So yeah, I picture 1930s-Eliezer pointing to technological trends and being like \"by default, 30 years after the first nukes are built, you'll be able to build one in your back yard. And governments aren't competent enough to stop that happening.\"\nAnd I don't think I could have come up with a compelling counterargument back then.]\n\nSo, I mean, in fact, I don't prophesize doom from very many trends at all!  It's literally just AGI that is anywhere near that unmanageable!  Many people in EA are more worried about biotech than I am, for example.\n\n\n\n[Ngo][18:31]\nI appreciate that my response is probably not very satisfactory to you here, so let me try to think about more concrete things we can disagree about.\n\n\n\n\n[Yudkowsky][18:31]\n\n[I do believe that AI control will be more difficult than nuclear control, because AI is so much more useful. But I also expect that there will be many more details about AI development that we don't currently understand, that will allow us to influence it (because AGI is a much more complicated concept than \"really really big bomb\").]\n\nEr… I think this is not a correct use of the Way I was attempting to gesture at; things being more complicated when known than unknown, does not mean you have more handles to influence them because each complication has the potential to be a handle.  It is not in general true that very complicated things are easier for humanity in general, and governments in particular, to control, because they have so many exposed handles.\nI think there's a valid argument about it maybe being more possible to control the supply chain for AI training processors if the global chip supply chain is narrow (also per Carl).\n\n\n\n[Ngo][18:34]\nOne thing that we seemed to disagree on, to a significant extent, is the difficulty of \"US and China preventing any other country from becoming a leader in AI\"\n\n\n\n\n[Yudkowsky][18:35]\nIt is in fact a big deal about nuclear tech that uranium can't be mined in every country, as I understand it, and that centrifuges stayed at the frontier of technology and were harder to build outside the well-developed countries, and that the world ended up revolving around a few Great Powers that had no interest in nuclear tech proliferating any further.\n\n\n\n[Ngo][18:35]\nIt seems to me that the US and/or China could apply a lot of pressure to many countries.\n\n\n\n\n[Yudkowsky][18:35]\nUnfortunately, before you let that encourage you too much, I would also note it was an important fact about nuclear bombs that they did not produce streams of gold and then ignite the atmosphere if you turned up the stream of gold too high with the actual thresholds involved being unpredictable.\n\n\n\n[Ngo][18:35]\nE.g. if the UK had actually seriously tried to block Google's acquisition of DeepMind, and the US had actually seriously tried to convince them not to do so, then I expect that the UK would have folded. 
(Although it's a weird hypothetical.)\n\nUnfortunately, before you let that encourage you too much, I would also note it was an important fact about nuclear bombs that they did not produce streams of gold and then ignite the atmosphere if you turned up the stream of gold too high with the actual thresholds involved being unpredictable.\n\nNot a critical point, but nuclear power does actually seem like a \"stream of gold\" in many ways.\n(also, quick meta note: I need to leave in 10 mins)\n\n\n\n\n[Yudkowsky][18:38]\nI would be a lot more cheerful about a few Great Powers controlling AGI if AGI produced wealth, but more powerful AGI produced no more wealth; if AGI was made entirely out of hardware, with no software component that could be keep getting orders of magnitude more efficient using hardware-independent ideas; and if the button on AGIs that destroyed the world was clearly labeled.\nThat does take AGI to somewhere in the realm of nukes.\n\n\n\n[Ngo][18:38]\nHow much improvement do you think can be eked out of existing amounts of hardware if people just try to focus on algorithmic improvements?\n\n\n\n\n[Yudkowsky][18:38]\nAnd Eliezer is capable of being less concerned about things when they are intrinsically less concerning, which is why my history does not, unlike some others in this field, involve me running also being Terribly Concerned about nuclear war, global warming, biotech, and killer drones.\n\n\n\n[Ngo][18:39]\nThis says 44x improvements over 7 years: https://openai.com/blog/ai-and-efficiency/\n\n\n\n\n\n[Yudkowsky][18:39]\nWell, if you're a superintelligence, you can probably do human-equivalent human-speed general intelligence on a 286, though it might possibly have less fine motor control, or maybe not, I don't know.\n\n\n\n[Ngo][18:40]\n(within reasonable amounts of human-researcher-time – say, a decade of holding hardware fixed)\n\n\n\n\n[Yudkowsky][18:40]\nI wouldn't be surprised if human ingenuity asymptoted out at AGI on a home computer from 1995.\nDon't know if it'd take more like a hundred years or a thousand years to get fairly close to that.\n\n\n\n[Ngo][18:41]\nDoes this view cash out in a prediction about how the AI and Efficiency graph projects into the future?\n\n\n\n\n[Yudkowsky][18:42]\nThe question of how efficiently you can perform a fixed algorithm doing fixed things, often pales compared to the gains on switching to different algorithms doing different things.\nGiven government control of all the neural net training chips and no more public GPU farms, I buy that they could keep a nuke!AGI (one that wasn't tempting to crank up and had clearly labeled Doom-Causing Buttons whose thresholds were common knowledge) under lock of the Great Powers for 7 years, during which software decreased hardware requirements by 44x.  I am a bit worried about how long it takes before there's a proper paradigm shift on the level of deep learning getting started in 2006, after which the Great Powers need to lock down on individual GPUs.\n\n\n\n[Ngo][18:46]\nHmm, okay.\n\n\n\n \n14.5. Past ANN progress\n \n\n[Ngo][18:46]\nI don't expect another paradigm shift like that\n(in part because I'm not sure the paradigm shift actually happened in the first place – it seems like neural networks were improving pretty continuously over many decades)\n\n\n\n\n[Yudkowsky][18:47]\nI've noticed that opinion around OpenPhil!  
It makes sense if you have short timelines and expect the world to end before there's another paradigm shift, but OpenPhil doesn't seem to expect that either.\nYeah, uh, there was kinda a paradigm shift in AI between say 2000 and now.  There really, really was.\n\n\n\n[Ngo][18:49]\nWhat I mean is more like: it's not clear to me that an extrapolation of the trajectory of neural networks is made much better by incorporating data about the other people who weren't using neural networks.\n\n\n\n\n[Yudkowsky][18:49]\nWould you believe that at one point Netflix ran a prize contest to produce better predictions of their users' movie ratings, with a $1 million prize, and this was one of the largest prizes ever in AI and got tons of contemporary ML people interested, and neural nets were not prominent on the solutions list at all, because, back then, people occasionally solved AI problems not using neural nets?\nI suppose that must seem like a fairy tale, as history always does, but I lived it!\n\n\n\n[Ngo][18:50]\n(I wasn't denying that neural networks were for a long time marginalised in AI)\nI'd place much more credence on future revolutions occurring if neural networks had actually only been invented recently.\n(I have to run in 2 minutes)\n\n\n\n\n[Yudkowsky][18:51]\nThe world might otherwise end before the next paradigm shift, but if the world keeps on ticking for 10 years, 20 years, there will not always be the paradigm of training massive networks by even more massive amounts of gradient descent; I do not think that is actually the most efficient possible way to turn computation into intelligence.\nNeural networks stayed stuck at only a few layers for a long time, because the gradients would explode or die out if you made the networks any deeper.\nThere was a critical moment in 2006(?) where Hinton and Salakhutdinov(?) proposed training Restricted Boltzmann machines unsupervised in layers, and then 'unrolling' the RBMs to initialize the weights in the network, and then you could do further gradient descent updates from there, because the activations and gradients wouldn't explode or die out given that initialization.  That got people to, I dunno, 6 layers instead of 3 layers or something? But it focused attention on the problem of exploding gradients as the reason why deeply layered neural nets never worked, and that kicked off the entire modern field of deep learning, more or less.\n\n\n\n[Ngo][18:56]\nOkay, so are you claiming that that neural networks were mostly bottlenecked by algorithmic improvements, not compute availability, for a significant part of their history?\n\n\n\n\n[Yudkowsky][18:56]\nIf anybody goes back and draws a graph claiming the whole thing was continuous if you measure the right metric, I am not really very impressed unless somebody at the time was using that particular graph and predicting anything like the right capabilities off of it.\n\n\n\n[Ngo][18:56]\nIf so this seems like an interesting question to get someone with more knowledge of ML history than me to dig into; I might ask around.\n\n\n\n\n[Yudkowsky][18:57]\n\n[Okay, so are you claiming that that neural networks were mostly bottlenecked by algorithmic improvements, not compute availability, for a significant part of their history?]\n\nEr… yeah?  
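\n\n[Illustrative aside: a minimal numpy reconstruction of the 2006 recipe sketched above – train one RBM on the data with contrastive divergence, use its hidden activations as the \"data\" for the next RBM, then use the learned weights to initialize (\"unroll\" into) a deep feedforward net that gets fine-tuned with ordinary backprop, instead of starting backprop from random weights. The hyperparameters and CD-1 details here are generic textbook choices, not anything specified in the conversation.]\n\nimport numpy as np\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + np.exp(-x))\n\ndef train_rbm_cd1(data, n_hidden, epochs=5, lr=0.05, seed=0):\n    # One-step contrastive divergence (CD-1) on data scaled to [0, 1], plain SGD.\n    rng = np.random.default_rng(seed)\n    n_visible = data.shape[1]\n    W = 0.01 * rng.standard_normal((n_visible, n_hidden))\n    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)\n    for _ in range(epochs):\n        for v0 in data:\n            p_h0 = sigmoid(v0 @ W + b_h)                      # up-pass\n            h0 = (rng.random(n_hidden) < p_h0).astype(float)  # sample hidden units\n            p_v1 = sigmoid(h0 @ W.T + b_v)                    # reconstruction\n            p_h1 = sigmoid(p_v1 @ W + b_h)\n            W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))\n            b_v += lr * (v0 - p_v1)\n            b_h += lr * (p_h0 - p_h1)\n    return W, b_h\n\ndef greedy_pretrain(data, layer_sizes):\n    # Stack RBMs: each layer is trained on the activations of the layer below.\n    # The returned weights initialize a deep network for supervised fine-tuning.\n    weights, x = [], data\n    for n_hidden in layer_sizes:\n        W, b_h = train_rbm_cd1(x, n_hidden)\n        weights.append((W, b_h))\n        x = sigmoid(x @ W + b_h)\n    return weights\n\nweights = greedy_pretrain(np.random.default_rng(1).random((200, 64)), [32, 16, 8])\n\n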
There was a long time when, even if you threw a big neural network at something, it just wouldn't work.\nGood night, btw?\n\n\n\n[Ngo][18:57]\nLet's call it here; thanks for the discussion.\n\n\n\n\n[Soares][18:57]\nThanks, both!\n\n\n\n\n[Ngo][18:57]\nI'll be interested to look into that claim, it doesn't fit with the impressions I have of earlier bottlenecks.\nI think the next important step is probably for me to come up with some concrete governance plans that I'm excited about.\nI expect this to take quite a long time\n\n\n\n\n[Soares][18:58]\nWe can coordinate around that later. Sorry for keeping you so late already, Richard.\n\n\n\n\n[Ngo][18:59]\nNo worries\nMy proposal would be that we should start on whatever work is necessary to convert the debate into a publicly accessible document now\nIn some sense coming up with concrete governance plans is my full-time job, but I feel like I'm still quite a way behind in my thinking on this, compared with people who have been thinking about governance specifically for longer\n\n\n\n\n[Soares][19:01]\n(@RobBensinger is already on it )\n\n\n\n\n[Bensinger: ]\n\n\n\n\n\n\n\n\n[Yudkowsky][19:03]\nNuclear plants might be like narrow AI in this analogy; some designs potentially contribute to proliferation, and you can get more economic wealth by building more of them, but they have no Unlabeled Doom Dial where you can get more and more wealth out of them by cranking them up until at some unlabeled point the atmosphere ignites.\nAlso a thought: I don't think you just want somebody with more knowledge of AI history, I think you might need to ask an actual old fogey who was there at the time, and hasn't just learned an ordered history of just the parts of the past that are relevant to the historian's theory about how the present happened.\nTwo of them, independently, to see if the answers you get are reliable-as-in-statistical-reliability.\n\n\n\n[Soares][19:19]\nMy own quick take, for the record, is that it looks to me like there are two big cruxes here.\nOne is about whether \"deep generality\" is a good concept, and in particular whether it pushes AI systems quickly from \"nonscary\" to \"scary\" and whether we should expect human-built AI systems to acquire it in practice (before the acute risk period is ended by systems that lack it). The other is about how easy it will be to end the acute risk period (eg by use of politics or nonscary AI systems alone).\nI suspect the latter is the one that blocks on Richard thinking about governance strategies. I'd be interested in attempting further progress on the former point, though it's plausible to me that that should happen over in #timelines instead of here.\n\n\n\n \n\nThe post Ngo and Yudkowsky on scientific reasoning and pivotal acts appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Ngo and Yudkowsky on scientific reasoning and pivotal acts", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=1", "id": "137381e3428732e7f45cee61d790c321"} {"text": "Christiano and Yudkowsky on AI predictions and human intelligence\n\n\n \nThis is a transcript of a conversation between Paul Christiano and Eliezer Yudkowsky, with comments by Rohin Shah, Beth Barnes, Richard Ngo, and Holden Karnofsky, continuing the Late 2021 MIRI Conversations.\nColor key:\n\n\n\n\n Chat by Paul and Eliezer \n Other chat \n\n\n\n\n \n15. 
October 19 comment\n \n\n[Yudkowsky][11:01]\nthing that struck me as an iota of evidence for Paul over Eliezer:\nhttps://twitter.com/tamaybes/status/1450514423823560706?s=20 \n\n\n\n\n\n \n16. November 3 conversation\n \n16.1. EfficientZero\n \n\n[Yudkowsky][9:30]\nThing that (if true) strikes me as… straight-up falsifying Paul's view as applied to modern-day AI, at the frontier of the most AGI-ish part of it and where Deepmind put in substantial effort on their project?  EfficientZero (allegedly) learns Atari in 100,000 frames.  Caveat: I'm not having an easy time figuring out how many frames MuZero would've required to achieve the same performance level.  MuZero was trained on 200,000,000 frames but reached what looks like an allegedly higher high; the EfficientZero paper compares their performance to MuZero on 100,000 frames, and claims theirs is much better than MuZero given only that many frames.\nhttps://arxiv.org/pdf/2111.00210.pdf  CC: @paulfchristiano.\n(I would further argue that this case is important because it's about the central contemporary model for approaching AGI, at least according to Eliezer, rather than any number of random peripheral AI tasks.)\n\n\n\n[Shah][14:46]\nI only looked at the front page, so might be misunderstanding, but the front figure says \"Our proposed method EfficientZero is 170% and 180% better than the previous SoTA performance in mean and median human normalized score […] on the Atari 100k benchmark\", which does not seem like a huge leap?\nOh, I incorrectly thought that was 1.7x and 1.8x, but it is actually 2.7x and 2.8x, which is a bigger deal (though still feels not crazy to me)\n\n\n\n\n[Yudkowsky][15:28]\nthe question imo is how many frames the previous SoTA would require to catch up to EfficientZero\n(I've tried emailing an author to ask about this, no response yet)\nlike, perplexity on GPT-3 vs GPT-2 and \"losses decreased by blah%\" would give you a pretty meaningless concept of how far ahead GPT-3 was from GPT-2, and I think the \"2.8x performance\" figure in terms of scoring is equally meaningless as a metric of how much EfficientZero improves if any\nwhat you want is a notion like \"previous SoTA would have required 10x the samples\" or \"previous SoTA would have required 5x the computation\" to achieve that performance level\n\n\n\n[Shah][15:38]\nI see. Atari curves are not nearly as nice and stable as GPT curves and often have the problem that they plateau rather than making steady progress with more training time, so that will make these metrics noisier, but it does seem like a reasonable metric to track\n(Not that I have recommendations about how to track it; I doubt the authors can easily get these metrics)\n\n\n\n\n[Christiano][18:01]\nIf you think our views are making such starkly different predictions then I'd be happy to actually state any of them in advance, including e.g. about future ML benchmark results.\nI don't think this falsifies my view, and we could continue trying to hash out what my view is but it seems like slow going and I'm inclined to give up.\nRelevant questions on my view are things like: is MuZero optimized at all for performance in the tiny-sample regime? (I think not, I don't even think it set SoTA on that task and I haven't seen any evidence.) What's the actual rate of improvements since people started studying this benchmark ~2 years ago, and how much work has gone into it? 
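\n\n[Illustrative aside on the score arithmetic above: the Atari human-normalized score is conventionally (agent score − random-play score) / (human score − random-play score), and \"X% better than the previous SoTA\" in the EfficientZero figure means new = (1 + X/100) × old, so \"170% better\" is 2.7x the previous mean score rather than 1.7x. The frames-based framing asked for above is instead a ratio: sample-efficiency gain ≈ (frames the previous SoTA needs to reach a given score) / (frames the new method needs to reach it). For example, if the previous SoTA needed roughly 400,000 frames to match what EfficientZero reaches in 100,000, that would be the ~4x figure discussed below; the 400,000 here is only a placeholder to show the form of the calculation.]\n\n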
And I totally agree with your comments that \"# of frames\" is the natural unit for measuring and that would be the starting point for any discussion.\n\n\n\n\n[Barnes][18:22]\n\nIn previous MCTS RL algorithms, the environment model is either given or only trained with rewards, values, and policies, which cannot provide sufficient training signals due to their scalar nature. The problem is more severe when the reward is sparse or the bootstrapped value is not accurate. The MCTS policy improvement operator heavily relies on the environment model. Thus, it is vital to have an accurate one.\nWe notice that the output ^st+1\n from the dynamic function G should be the same as st+1, i.e. the output of the representation function H with input of the next observation ot+1 (Fig. 2). This can help to supervise the predicted next state ^st+1 using the actual st+1, which is a tensor with at least a few hundred dimensions. This provides ^st+1 with much more training signals than the default scalar reward and value.\n\nThis seems like a super obvious thing to do and I'm confused why DM didn't already try this. It was definitely being talked about in ~2018\nWill ask a DM friend about it\n\n\n\n\n[Yudkowsky][22:45]\nI… don't think I want to take all of the blame for misunderstanding Paul's views; I think I also want to complain at least a little that Paul spends an insufficient quantity of time pointing at extremely concrete specific possibilities, especially real ones, and saying how they do or don't fit into the scheme.\nAm I rephrasing correctly that, in this case, if Efficient Zero was actually a huge (3x? 5x? 10x?) jump in RL sample efficiency over previous SOTA, measured in 1 / frames required to train to a performance level, then that means the Paul view doesn't apply to the present world; but this could be because MuZero wasn't the real previous SOTA, or maybe because nobody really worked on pushing out this benchmark for 2 years and therefore on the Paul view it's fine for there to still be huge jumps?  In other words, this is something Paul's worldview has to either defy or excuse, and not just, \"well, sure, why wouldn't it do that, you have misunderstood which kinds of AI-related events Paul is even trying to talk about\"?\nIn the case where, \"yes it's a big jump and that shouldn't happen later, but it could happen now because it turned out nobody worked hard on pushing past MuZero over the last 2 years\", I wish to register that my view permits it to be the case that, when the world begins to end, the frontier that enters into AGI is similarly something that not a lot of people spent a huge effort on since a previous prototype from 2 years earlier.  It's just not very surprising to me if the future looks a lot like the past, or if human civilization neglects to invest a ton of effort in a research frontier.\nGwern guesses that getting to EfficientZero's performance level would require around 4x the samples for MuZero-Reanalyze (the more efficient version of MuZero which replayed past frames), which is also apparently the only version of MuZero the paper's authors were considering in the first place – without replays, MuZero requires 20 billion frames to achieve its performance, not the figure of 200 million. https://www.lesswrong.com/posts/jYNT3Qihn2aAYaaPb/efficientzero-human-ale-sample-efficiency-w-muzero-self?commentId=JEHPQa7i8Qjcg7TW6\n\n\n \n17. November 4 conversation\n \n17.1. 
EfficientZero (continued)\n \n\n[Christiano][7:42]\nI think it's possible the biggest misunderstanding is that you somehow think of my view as a \"scheme\" and your view as a normal view where probability distributions over things happen.\nConcretely, this is a paper that adds a few techniques to improve over MuZero in a domain that (it appears) wasn't a significant focus of MuZero. I don't know how much it improves but I can believe gwern's estimates of 4x.\nI'd guess MuZero itself is a 2x improvement over the baseline from a year ago, which was maybe a 4x improvement over the algorithm from a year before that.\nIf that's right, then no it's not mindblowing on my view to have 4x progress one year, 2x progress the next, and 4x progress the next.\nIf other algorithms were better than MuZero, then the 2019-2020 progress would be >2x and the 2020-2021 progress would be <4x.\nI think it's probably >4x sample efficiency though (I don't totally buy gwern's estimate there), which makes it at least possibly surprising.\nBut it's never going to be that surprising. It's a benchmark that people have been working on for a few years that has been seeing relatively rapid improvement over that whole period.\nThe main innovation is how quickly you can learn to predict future frames of Atari games, which has tiny economic relevance and calling it the most AGI-ish direction seems like it's a very Eliezer-ish view, this isn't the kind of domain where I'm either most surprised to see rapid progress at all nor is the kind of thing that seems like a key update re: transformative AI\nyeah, SoTA in late 2020 was SPR, published by a much smaller academic group: https://arxiv.org/pdf/2007.05929.pdf\nMuZero wasn't even setting sota on this task at the time it was published\nmy \"schemes\" are that (i) if a bunch of people are trying on a domain and making steady slow progress, I'm surprised to see giant jumps and I don't expect most absolute progress to occur in such jumps, (ii) if a domain is worth a lot of $, generally a bunch of people will be trying. Those aren't claims about what is always true, they are claims about what is typically true and hence what I'm guessing will be true for transformative AI.\nMaybe you think those things aren't even good general predictions, and that I don't have long enough tails in my distributions or whatever. But in that case it seems we can settle it quickly by prediction.\nI think this result is probably significant (>30% absolute improvement) + faster-than-trend (>50% faster than previous increment) progress relative to prior trend on 8 of the 27 atari games (from table 1, treating SimPL->{max of MuZero, SPR}->EfficientZero as 3 equally spaced datapoints): Asterix, Breakout, almost ChopperCMD, almost CrazyClimber, Gopher, Kung Fu Master, Pong, QBert, SeaQuest. My guess is that they thought a lot about a few of those games in particular because they are very influential on the mean/median. Note that this paper is a giant grab bag and that simply stapling together the prior methods would have already been a significant improvement over prior SoTA. (ETA: I don't think saying \"its only 8 of 27 games\" is an update against it being big progress or anything. 
I do think saying \"stapling together 2 previous methods without any complementarity at all would already have significantly beaten SoTA\" is fairly good evidence that it's not a hard-to-beat SoTA.)\nand even fewer people working on the ultra-low-sample extremely-low-dimensional DM control environments (this is the subset of problems where the state space is 4 dimensions, people are just not trying to publish great results on cartpole), so I think the most surprising contribution is the atari stuff\nOK, I now also understand what the result is I think?\nI think the quick summary is: the prior SoTA is SPR, which learns to predict the domain and then does Q-learning. MuZero instead learns to predict the domain and does MCTS, but it predicts the domain in a slightly less sophisticated way than SPR (basically just predicts rewards, whereas SPR predicts all of the agent's latent state in order to get more signal from each frame). If you combine MCTS with more sophisticated prediction, you do better.\nI think if you told me that DeepMind put in significant effort in 2020 (say, at least as much post-MuZero effort as the new paper?) trying to get great sample efficiency on the easy-exploration atari games, and failed to make significant progress, then I'm surprised.\nI don't think that would \"falsify\" my view, but it would be an update against? Like maybe if DM put in that much effort I'd maybe have given only a 10-20% probability to a new project of similar size putting in that much effort making big progress, and even conditioned on big progress this is still >>median (ETA: and if DeepMind put in much more effort I'd be more surprised than 10-20% by big progress from the new project)\nWithout DM putting in much effort, it's significantly less surprising and I'll instead be comparing to the other academic efforts. But it's just not surprising that you can beat them if you are willing to put in the effort to reimplement MCTS and they aren't, and that's a step that is straightforwardly going to improve performance.\n(not sure if that's the situation)\nAnd then to see how significant updates against are, you have to actually contrast them with all the updates in the other direction where people don't crush previous benchmark results\nand instead just make modest progress\nI would guess that if you had talked to an academic about this question (what happens if you combine SPR+MCTS) they would have predicted significant wins in sample efficiency (at the expense of compute efficiency) and cited the difficulty of implementing MuZero compared to any of the academic results. That's another way I could be somewhat surprised (or if there were academics with MuZero-quality MCTS implementations working on this problem, and they somehow didn't set SoTA, then I'm even more surprised). 
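\n\n[Illustrative aside: a minimal sketch of the per-game check described above – treating SimPLe → max(MuZero, SPR) → EfficientZero as three equally spaced datapoints and flagging games with \">30% absolute improvement\" that is also \">50% faster than the previous increment\". Reading the thresholds in human-normalized units, and the example scores, are assumptions for illustration, not the actual Table 1 numbers.]\n\ndef faster_than_trend(simple, prev_sota, new, abs_gain=0.30, rel_margin=0.50):\n    # Scores are assumed to be human-normalized (1.0 = human-level play).\n    # prev_sota = better of MuZero and SPR at 100k frames; new = EfficientZero.\n    prev_increment = prev_sota - simple\n    new_increment = new - prev_sota\n    significant = new_increment > abs_gain                       # >30% absolute improvement\n    faster = new_increment > (1 + rel_margin) * prev_increment   # >50% faster than prior step\n    return significant and faster\n\n# Hypothetical games: one whose step size triples gets flagged; one that merely repeats the previous step size does not.\nprint(faster_than_trend(simple=0.1, prev_sota=0.3, new=0.9))  # True\nprint(faster_than_trend(simple=0.1, prev_sota=0.3, new=0.5))  # False\n\n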
But I'm not sure if you'll trust any of those judgments in hindsight.\nRepeating the main point:\nI don't really think a 4x jump over 1 year is something I have to \"defy or excuse\", it's something that I think becomes more or less likely depending on facts about the world, like (i) how fast was previous progress, (ii) how many people were working on previous projects and how targeted were they at this metric, (iii) how many people are working in this project and how targeted was it at this metric\nit becomes continuously less likely as those parameters move in the obvious directions\nit never becomes 0 probability, and you just can't win that much by citing isolated events that I'd give say a 10% probability to, unless you actually say something about how you are giving >10% probabilities to those events without losing a bunch of probability mass on what I see as the 90% of boring stuff\n\n\n\n\n[Ngo: ]\n\n\n\n\nand then separately I have a view about lots of people working on important problems, which doesn't say anything about this case\n(I actually don't think this event is as low as 10%, though it depends on what background facts about the project you are conditioning on—obviously I gave <<10% probability to someone publishing this particular result, but something like \"what fraction of progress in this field would come down to jumps like this\" or whatever is probably >10% until you tell me that DeepMind actually cared enough to have already tried)\n\n\n\n\n[Ngo][8:48]\nI expect Eliezer to say something like: DeepMind believes that both improving RL sample efficiency, and benchmarking progress on games like Atari, are important parts of the path towards AGI. So insofar as your model predicts that smooth progress will be caused by people working directly towards AGI, DeepMind not putting effort into this is a hit to that model. Thoughts?\n\n\n\n\n[Christiano][9:06]\nI don't think that learning these Atari games in 2 hours is a very interesting benchmark even for deep RL sample efficiency, and it's totally unrelated to the way in which humans learn such games quickly. It seems pretty likely totally plausible (50%?) to me that DeepMind feels the same way, and then the question is about other random considerations like how they are making some PR calculation.\n\n\n\n\n[Ngo][9:18]\nIf Atari is not a very interesting benchmark, then why did DeepMind put a bunch of effort into making Agent57 and applying MuZero to Atari?\nAlso, most of the effort they've spent on games in general has been on methods very unlike the way humans learn those games, so that doesn't seem like a likely reason for them to overlook these methods for increasing sample efficiency.\n\n\n\n\n[Shah][9:32]\n\nIt seems pretty likely totally plausible (50%?) to me that DeepMind feels the same way, and then the question is about other random considerations like how they are making some PR calculation.\n\nNot sure of the exact claim, but DeepMind is big enough and diverse enough that I'm pretty confident at least some people working on relevant problems don't feel the same way\n\n[…] This seems like a super obvious thing to do and I'm confused why DM didn't already try this. 
It was definitely being talked about in ~2018\n\nSpeculating without my DM hat on: maybe it kills performance in board games, and they want one algorithm for all settings?\n\n\n\n\n[Christiano][10:29]\nAtari games in the tiny sample regime are a different beast\nthere are just a lot of problems you can state about Atari some of which are more or less interesting (e.g. jointly learning to play 57 Atari games is a more interesting problem than learning how to play one of them absurdly quickly, and there are like 10 other problems about Atari that are more interesting than this one)\nThat said, Agent57 also doesn't seem interesting except that it's an old task people kind of care about. I don't know about the take within DeepMind but outside I don't think anyone would care about it other than historical significance of the benchmark / obviously-not-cherrypickedness of the problem.\nI'm sure that some people at DeepMind care about getting the super low sample complexity regime. I don't think that really tells you how large the DeepMind effort is compared to some random academics who care about it.\n\n\n\n\n[Shah: ]\n\n\n\n\nI think the argument for working on deep RL is fine and can be based on an analogy with humans while you aren't good at the task. Then once you are aiming for crazy superhuman performance on Atari games you naturally start asking \"what are we doing here and why are we still working on atari games?\"\n\n\n\n\n[Ngo: ]\n\n\n\n\nand correspondingly they are a smaller and smaller slice of DeepMind's work over time\n\n\n\n\n[Ngo: ]\n\n\n\n\n(e.g. Agent57 and MuZero are the only DeepMind blog posts about Atari in the last 4 years, it's not the main focus of MuZero and I don't think Agent57 is a very big DM project)\nReaching this level of performance in Atari games is largely about learning perception, and doing that from 100k frames of an Atari game just doesn't seem very analogous to anything humans do or that is economically relevant from any perspective. I totally agree some people are into it, but I'm totally not surprised if it's not going to be a big DeepMind project.\n\n\n\n\n[Yudkowsky][10:51]\nwould you agree it's a load-bearing assumption of your worldview – where I also freely admit to having a worldview/scheme, this is not meant to be a prejudicial term at all – that the line of research which leads into world-shaking AGI must be in the mainstream and not in a weird corner where a few months earlier there were more profitable other ways of doing all the things that weird corner did? \neg, the tech line leading into world-shaking AGI must be at the profitable forefront of non-world-shaking tasks.  as otherwise, afaict, your worldview permits that if counterfactually we were in the Paul-forbidden case where the immediate precursor to AGI was something like EfficientZero (whose motivation had been beating an old SOTA metric rather than, say, market-beating self-driving cars), there might be huge capability leaps there just as EfficientZero represents a large leap, because there wouldn't have been tons of investment in that line.\n\n\n\n[Christiano][10:54]\nSomething like that is definitely a load-bearing assumption\nLike there's a spectrum with e.g. EfficientZero –> 2016 language modeling –> 2014 computer vision –> 2021 language modeling –> 2021 computer vision, and I think everything anywhere close to transformative AI will be way way off the right end of that spectrum\nBut I think quantitatively the things you are saying don't seem quite right to me. 
Suppose that MuZero wasn't the best way to do anything economically relevant, but it was within a factor of 4 on sample efficiency for doing tasks that people care about. That's already going to be enough to make tons of people extremely excited.\nSo yes, I'm saying that anything leading to transformative AI is \"in the mainstream\" in the sense that it has more work on it than 2021 language models.\nBut not necessarily that it's the most profitable way to do anything that people care about. Different methods scale in different ways, and something can burst onto the scene in a dramatic way, but I strongly expect speculative investment driven by that possibility to already be way (way) more than 2021 language models. And I don't expect gigantic surprises. And I'm willing to bet that e.g. EfficientZero isn't a big surprise for researchers who are paying attention to the area (in addition to being 3+ orders of magnitude more neglected than anything close to transformative AI)\n2021 language modeling isn't even very competitive, it's still like 3-4 orders of magnitude smaller than semiconductors. But I'm giving it as a reference point since it's obviously much, much more competitive than sample-efficient atari.\nThis is a place where I'm making much more confident predictions, this is \"falsify paul's worldview\" territory once you get to quantitative claims anywhere close to TAI and \"even a single example seriously challenges paul's worldview\" a few orders of magnitude short of that\n\n\n\n\n[Yudkowsky][11:04]\ncan you say more about what falsifies your worldview previous to TAI being super-obviously-to-all-EAs imminent?\nor rather, \"seriously challenges\", sorry\n\n\n\n[Christiano][11:05][11:08]\nbig AI applications achieved by clever insights in domains that aren't crowded, we should be quantitative about how crowded and how big if we want to get into \"seriously challenges\"\nlike e.g. if this paper on atari was actually a crucial ingredient for making deep RL for robotics work, I'd be actually for real surprised rather than 10% surprised\nbut it's not going to be, those results are being worked on by much larger teams of more competent researchers at labs with $100M+ funding\nit's definitely possible for them to get crushed by something out of left field\nbut I'm betting against every time\n\n\n\n\n\nor like, the set of things people would describe as \"out of left field,\" and the quantitative degree of neglectedness, becomes more and more mild as the stakes go up\n\n\n\n\n[Yudkowsky][11:08]\nhow surprised are you if in 2022 one company comes out with really good ML translation, and they manage to sell a bunch of it temporarily until others steal their ideas or Google acquires them?  my model of Paul is unclear on whether this constitutes \"many people are already working on language models including ML translation\" versus \"this field is not profitable enough right this minute for things to be efficient there, and it's allowed to be nonobvious in worlds where it's about to become profitable\".\n\n\n\n[Christiano][11:08]\nif I wanted to make a prediction about that I'd learn a bunch about how much google works on translation and how much $ they make\nI just don't know the economics\nand it depends on the kind of translation that they are good at and the economics (e.g. 
google mostly does extremely high-volume very cheap translation)\nbut I think there are lots of things like that / facts I could learn about Google such that I'd be surprised in that situation\nindependent of the economics, I do think a fair number of people are working on adjacent stuff, and I don't expect someone to come out of left field for google-translate-cost translation between high-resource languages\nbut it seems quite plausible that a team of 10 competent people could significantly outperform google translate, and I'd need to learn about the economics to know how surprised I am by 10 people or 100 people or what\nI think it's allowed to be non-obvious whether a domain is about to be really profitable\nbut it's not that easy, and the higher the stakes the more speculative investment it will drive, etc.\n\n\n\n\n[Yudkowsky][11:14]\nif you don't update much off EfficientZero, then people also shouldn't be updating much off of most of the graph I posted earlier as possible Paul-favoring evidence, because most of those SOTAs weren't highly profitable so your worldview didn't have much to say about them. ?\n\n\n\n[Christiano][11:15]\nMost things people work a lot on improve gradually. EfficientZero is also quite gradual compared to the crazy TAI stories you tell. I don't really know what to say about this game other than I would prefer make predictions in advance and I'm happy to either propose questions/domains or make predictions in whatever space you feel more comfortable with.\n\n\n\n\n[Yudkowsky][11:16]\nI don't know how to point at a future event that you'd have strong opinions about.  it feels like, whenever I try, I get told that the current world is too unlike the future conditions you expect.\n\n\n\n[Christiano][11:16]\nLike, whether or not EfficientZero is evidence for your view depends on exactly how \"who knows what will happen\" you are. if you are just a bit more spread out than I am, then it's definitely evidence for your view.\nI'm saying that I'm willing to bet about any event you want to name, I just think my model of how things work is more accurate.\nI'd prefer it be related to ML or AI.\n\n\n\n\n[Yudkowsky][11:17]\nto be clear, I appreciate that it's similarly hard to point at an event like that for myself, because my own worldview says \"well mostly the future is not all that predictable with a few rare exceptions\"\n\n\n\n[Christiano][11:17]\nBut I feel like the situation is not at all symmetrical, I expect to outperform you on practically any category of predictions we can specify.\nso like I'm happy to bet about benchmark progress in LMs, or about whether DM or OpenAI or Google or Microsoft will be the first to achieve something, or about progress in computer vision, or about progress in industrial robotics, or about translations\nwhatever\n\n\n\n \n17.2. Near-term AI predictions\n \n\n[Yudkowsky][11:18]\nthat sounds like you ought to have, like, a full-blown storyline about the future?\n\n\n\n[Christiano][11:18]\nwhat is a full-blown storyline? I have a bunch of ways that I think about the world and make predictions about what is likely\nand yes, I can use those ways of thinking to make predictions about whatever\nand I will very often lose to a domain expert who has better and more informed ways of making predictions\n\n\n\n\n[Yudkowsky][11:19]\nwhat happens if 2022 through 2024 looks literally exactly like Paul's modal or median predictions on things?\n\n\n\n[Christiano][11:19]\nbut I think in ML I will generally beat e.g. 
a superforecaster who doesn't have a lot of experience in the area\ngive me a question about 2024 and I'll give you a median?\nI don't know what \"what happens\" means\nstorylines do not seem like good ways of making predictions\n\n\n\n\n[Shah: ]\n\n\n\n\n\n\n\n\n[Yudkowsky][11:20]\nI mean, this isn't a crux for anything, but it seems like you're asking me to give up on that and just ask for predictions?  so in 2024 can I hire an artist who doesn't speak English and converse with them almost seamlessly through a machine translator?\n\n\n\n[Christiano][11:22]\nmedian outcome (all of these are going to be somewhat easy-to-beat predictions because I'm not thinking): you can get good real-time translations, they are about as good as a +1 stdev bilingual speaker who listens to what you said and then writes it out in the other language as fast as they can type\nProbably also for voice -> text or voice -> voice, though higher latencies and costs.\nNot integrated into standard video chatting experience because the UX is too much of a pain and the world sucks.\nThat's a median on \"how cool/useful is translation\"\n\n\n\n\n[Yudkowsky][11:23]\nI would unfortunately also predict that in this case, this will be a highly competitive market and hence not a very profitable one, which I predict to match your prediction, but I ask about the economics here just in case.\n\n\n\n[Christiano][11:24]\nKind of typical sample: I'd guess that Google has a reasonably large lead, most translation still provided as a free value-added, cost per translation at that level of quality is like $0.01/word, total revenue in the area is like $10Ms / year?\n\n\n\n\n[Yudkowsky][11:24]\nwell, my model also permits that Google does it for free and so it's an uncompetitive market but not a profitable one… ninjaed.\n\n\n\n[Christiano][11:25]\nfirst order of improving would be sanity-checking economics and thinking about #s, second would be learning things like \"how many people actually work on translation and what is the state of the field?\"\n\n\n\n\n[Yudkowsky][11:26]\ndid Tesla crack self-driving cars and become a $3T company instead of a $1T company?  do you own Tesla options?\ndid Waymo beat Tesla and cause Tesla stock to crater, same question?\n\n\n\n[Christiano][11:27]\n1/3 chance tesla has FSD in 2024\nconditioned on that, yeah probably market cap is >$3T?\nconditioned on Tesla having FSD, 2/3 chance Waymo has also at least rolled out to a lot of cities\nconditioned on no tesla FSD, 10% chance Waymo has rolled out to like half of big US cities?\ndunno if numbers make sense\n\n\n\n\n[Yudkowsky][11:28]\nthat's okay, I dunno if my questions make sense\n\n\n\n[Christiano][11:29]\n(5% NW in tesla, 90% NW in AI bets, 100% NW in more normal investments; no tesla options that sounds like a scary place with lottery ticket biases and the crazy tesla investors)\n\n\n\n\n[Yudkowsky][11:30]\n(am I correctly understanding you're 2x levered?)\n\n\n\n[Christiano][11:30][11:31]\nyeah\nit feels like you've got to have weird views on trajectory of value-added from AI over the coming years\non how much of the $ comes from domains that are currently exciting to people (e.g. that Google already works on, self-driving, industrial robotics) vs stuff out of left field\non what kind of algorithms deliver $ in those domains (e.g. 
are logistics robots trained using the same techniques tons of people are currently pushing on)\non my picture you shouldn't be getting big losses on any of those\n\n\n\n\n\njust losing like 10-20% each time\n\n\n\n\n[Yudkowsky][11:31][11:32]\nmy uncorrected inside view says that machine translation should be in reach and generate huge amounts of economic value even if it ends up an unprofitable competitive or Google-freebie field\n\n\n\n\nand also that not many people are working on basic research in machine translation or see it as a \"currently exciting\" domain\n\n\n\n[Christiano][11:32]\nhow many FTE is \"not that many\" people?\nalso are you expecting improvement in the google translate style product, or in lower-latencies for something closer to normal human translator prices, or something else?\n\n\n\n\n[Yudkowsky][11:33]\nmy worldview says more like… sure, maybe there's 300 programmers working on it worldwide, but most of them aren't aggressively pursuing new ideas and trying to explore the space, they're just applying existing techniques to a new language or trying to throw on some tiny mod that lets them beat SOTA by 1.2% for a publication\nbecause it's not an exciting field\n\"What if you could rip down the language barriers\" is an economist's dream, or a humanist's dream, and Silicon Valley is neither\nand looking at GPT-3 and saying, \"God damn it, this really seems like it must on some level understand what it's reading well enough that the same learned knowledge would suffice to do really good machine translation, this must be within reach for gradient descent technology we just don't know how to reach it\" is Yudkowskian thinking; your AI system has internal parts like \"how much it understands language\" and there's thoughts about what those parts ought to be able to do if you could get them into a new system with some other parts\n\n\n\n[Christiano][11:36]\nmy guess is we'd have some disagreements here\nbut to be clear, you are talking about text-to-text at like $0.01/word price point?\n\n\n\n\n[Yudkowsky][11:38]\nI mean, do we?  Unfortunately another Yudkowskian worldview says \"and people can go on failing to notice this for arbitrarily long amounts of time\".\nif that's around GPT-3's price point then yeah\n\n\n\n[Christiano][11:38]\ngpt-3 is a lot cheaper, happy to say gpt-3 like price point\n\n\n\n\n[Yudkowsky][11:39]\n(thinking about whether $0.01/word is meaningfully different from $0.001/word and concluding that it is)\n\n\n\n[Christiano][11:39]\n(api is like 10,000 words / $)\nI expect you to have a broader distribution over who makes a great product in this space, how great it ends up being etc., whereas I'm going to have somewhat higher probabilities on it being google research and it's going to look boring\n\n\n\n\n[Yudkowsky][11:40]\nwhat is boring?\nboring predictions are often good predictions on my own worldview too\nlots of my gloom is about things that are boringly bad and awful\n(and which add up to instant death at a later point)\nbut, I mean, what does boring machine translation look like?\n\n\n\n[Christiano][11:42]\nTrain big language model. Have lots of auxiliary tasks especially involving reading in source language and generation in target language. Have pre-training on aligned sentences and perhaps using all the unsupervised translation we have depending on how high-resource language is. Fine-tune with smaller amount of higher quality supervision.\nSome of the steps likely don't add much value and skip them. 
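For concreteness, a minimal sketch of just the last step in that recipe ("fine-tune with a smaller amount of higher quality supervision"), assuming an off-the-shelf pretrained translation checkpoint loaded through the transformers library. The checkpoint name, the two example sentence pairs, and the hyperparameters are placeholders, and the large-scale pre-training, auxiliary tasks, and serving infrastructure are all omitted.

```python
# Illustrative only: fine-tuning a pretrained seq2seq translation model on a tiny
# stand-in for the "higher quality supervision" set. Checkpoint, data, and
# hyperparameters are placeholder assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "Helsinki-NLP/opus-mt-en-fr"     # any pretrained MT checkpoint would do
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

pairs = [
    ("The meeting is at noon.", "La réunion est à midi."),
    ("Please review the attached report.", "Veuillez examiner le rapport ci-joint."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for epoch in range(3):
    for src, tgt in pairs:
        batch = tokenizer(src, text_target=tgt, return_tensors="pt")
        loss = model(**batch).loss               # standard seq2seq cross-entropy
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Quick check of the fine-tuned model.
model.eval()
with torch.no_grad():
    out = model.generate(**tokenizer("The report is ready.", return_tensors="pt"),
                         max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```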
Fair amount of non-ML infrastructure.\nFor some languages/domains/etc. dedicated models, over time increasingly just have a giant model with learned dispatch as in mixture of experts.\n\n\n\n\n[Yudkowsky][11:44]\nbut your worldview is also totally ok with there being a Clever Trick added to that which produces a 2x reduction in training time.  or with there being a new innovation like transformers, which was developed a year earlier and which everybody now uses, without which the translator wouldn't work at all. ?\n\n\n\n[Christiano][11:44]\nJust for reference, I think transformers aren't that visible on a (translation quality) vs (time) graph?\nBut yes, I'm totally fine with continuing architectural improvements, and 2x reduction in training time is currently par for the course for \"some people at google thought about architectures for a while\" and I expect that to not get that much tighter over the next few years.\n\n\n\n\n[Yudkowsky][11:45]\nunrolling Restricted Boltzmann Machines to produce deeper trainable networks probably wasn't much visible on a graph either, but good luck duplicating modern results using only lower portions of the tech tree.  (I don't think we disagree about this.)\n\n\n\n[Christiano][11:45]\nI do expect it to eventually get tighter, but not by 2024.\nI don't think unrolling restricted boltzmann machines is that important\n\n\n\n\n[Yudkowsky][11:46]\nlike, historically, or as a modern technology?\n\n\n\n[Christiano][11:46]\nhistorically\n\n\n\n\n[Yudkowsky][11:46]\ninteresting\nmy model is that it got people thinking about \"what makes things trainable\" and led into ReLUs and inits\nbut I am going more off having watched from the periphery as it happened, than having read a detailed history of that\nlike, people asking, \"ah, but what if we had a deeper network and the gradients didn't explode or die out?\" and doing that en masse in a productive way rather than individuals being wistful for 30 seconds\n\n\n\n[Christiano][11:48]\nwell, not sure if this will introduce differences in predictions\nI don't feel like it should really matter for our bottom line predictions whether we classify google's random architectural change as something fundamentally new (which happens to just have a modest effect at the time that it's built) or as something boring\nI'm going to guess how well things will work by looking at how well things work right now and seeing how fast it's getting better\nand that's also what I'm going to do for applications of AI with transformative impacts\nand I actually believe you will do something today that's analogous to what you would do in the future, and in fact will make somewhat different predictions than what I would do\nand then some of the action will be in new things that people haven't been trying to do in the past, and I'm predicting that new things will be \"small\" whereas you have a broader distribution, and there's currently some not-communicated judgment call in \"small\"\nif you think that TAI will be like translation, where google publishes tons of papers, but that they will just get totally destroyed by some new idea, then it seems like that should correspond to a difference in P(google translation gets totally destroyed by something out-of-left-field)\nand if you think that TAI won't be like translation, then I'm interested in examples more like TAI\nI don't really understand the take \"and people can go on failing to notice this for arbitrarily long amounts of time,\" why doesn't that also happen for TAI and therefore cause it 
to be the boring slow progress by google? Why would this be like a 50% probability for TAI but <10% for translation?\nperhaps there is a disagreement about how good the boring progress will be by 2024? looks to me like it will be very good\n\n\n\n\n[Yudkowsky][11:57]\nI am not sure that is where the disagreement lies\n\n\n \n17.3. The evolution of human intelligence\n \n\n[Yudkowsky][11:57]\nI am considering advocating that we should have more disagreements about the past, which has the advantage of being very concrete, and being often checkable in further detail than either of us already know\n\n\n\n[Christiano][11:58]\nI'm fine with disagreements about the past; I'm more scared of letting you pick arbitrary things to \"predict\" since there is much more impact from differences in domain knowledge\n(also not quite sure why it's more concrete, I guess because we can talk about what led to particular events? mostly it just seems faster)\nalso as far as I can tell our main differences are about whether people will spend a lot of money work effectively on things that would make a lot of money, which means if we look to the past we will have to move away from ML/AI\n\n\n\n\n[Yudkowsky][12:00]\nso my understanding of how Paul writes off the example of human intelligence, is that you are like, \"evolution is much stupider than a human investor; if there'd been humans running the genomes, people would be copying all the successful things, and hominid brains would be developing in this ecology of competitors instead of being a lone artifact\". ?\n\n\n\n[Christiano][12:00]\nI don't understand why I have to write off the example of human intelligence\n\n\n\n\n[Yudkowsky][12:00]\nbecause it looks nothing like your account of how TAI develops\n\n\n\n[Christiano][12:00]\nit also looks nothing like your account, I understand that you have some analogy that makes sense to you\n\n\n\n\n[Yudkowsky][12:01]\nI mean, to be clear, I also write off the example of humans developing morality and have to explain to people at length why humans being as nice as they are, doesn't imply that paperclip maximizers will be anywhere near that nice, nor that AIs will be other than paperclip maximizers.\n\n\n\n[Christiano][12:01][12:02]\nyou could state some property of how human intelligence developed, that is in common with your model for TAI and not mine, and then we could discuss that\nif you say something like: \"chimps are not very good at doing science, but humans are\" then yes my answer will be that it's because evolution was not selecting us to be good at science\n\n\n\n\n\nand indeed AI systems will be good at science using much less resources than humans or chimps\n\n\n\n\n[Yudkowsky][12:02][12:02]\nwould you disagree that humans developing intelligence, on the sheer surfaces of things, looks much more Yudkowskian than Paulian?\n\n\n\n\nlike, not in terms of compatibility with underlying model\njust that there's this one corporation that came out and massively won the entire AGI race with zero competitors\n\n\n\n[Christiano][12:03]\nI agree that \"how much did the winner take all\" is more like your model of TAI than mine\nI don't think zero competitors is reasonable, I would say \"competitors who were tens of millions of years behind\"\n\n\n\n\n[Yudkowsky][12:03]\nsure\nand your account of this is that natural selection is nothing like human corporate managers copying each other\n\n\n\n[Christiano][12:03]\nwhich was a reasonable timescale for the old game, but a long timescale for the new 
game\n\n\n\n\n[Yudkowsky][12:03]\nyup\n\n\n\n[Christiano][12:04]\nthat's not my only account\nit's also that for human corporations you can form large coalitions, i.e. raise huge amounts of $ and hire huge numbers of people working on similar projects (whether or not vertically integrated), and those large coalitions will systematically beat small coalitions\nand that's basically the key dynamic in this situation, and isn't even trying to have any analog in the historical situation\n(the key dynamic w.r.t. concentration of power, not necessarily the main thing overall)\n\n\n\n\n[Yudkowsky][12:07]\nthe modern degree of concentration of power seems relatively recent and to have tons and tons to do with the regulatory environment rather than underlying properties of the innovation landscape\nback in the old days, small startups would be better than Microsoft at things, and Microsoft would try to crush them using other forces than superior technology, not always successfully\nor such was the common wisdom of USENET\n\n\n\n[Christiano][12:08]\nmy point is that the evolution analogy is extremely unpersuasive w.r.t. concentration of power\nI think that AI software capturing the amount of power you imagine is also kind of implausible because we know something about how hardware trades off against software progress (maybe like 1 year of progress = 2x hardware) and so even if you can't form coalitions on innovation at all you are still going to be using tons of hardware if you want to be in the running\nthough if you can't parallelize innovation at all and there is enough dispersion in software progress then the people making the software could take a lot of the $ / influence from the partnership\nanyway, I agree that this is a way in which evolution is more like your world than mine\nbut think on this point the analogy is pretty unpersuasive\nbecause it fails to engage with any of the a priori reasons you wouldn't expect concentration of power\n\n\n\n\n[Yudkowsky][12:11]\nI'm not sure this is the correct point on which to engage, but I feel like I should say out loud that I am unable to operate my model of your model in such fashion that it is not falsified by how the software industry behaved between 1980 and 2000.\nthere should've been no small teams that beat big corporations\ntoday those are much rarer, but on my model, that's because of regulatory changes (and possibly metabolic damage from something in the drinking water)\n\n\n\n[Christiano][12:12]\nI understand that you can't operate my model, and I've mostly given up, and on this point I would prefer to just make predictions or maybe retrodictions\n\n\n\n\n[Yudkowsky][12:13]\nwell, anyways, my model of how human intelligence happened looks like this:\nthere is a mysterious kind of product which we can call G, and which brains can operate as factories to produce\nG in turn can produce other stuff, but you need quite a lot of it piled up to produce better stuff than your competitors\nas late as 1000 years ago, the fastest creatures on Earth are not humans, because you need even more G than that to go faster than cheetahs\n(or peregrine falcons)\nthe natural selections of various species were fundamentally stupid and blind, incapable of foresight and incapable of copying the successes of other natural selections; but even if they had been as foresightful as a modern manager or investor, they might have made just the same mistake\nbefore 10,000 years they would be like, \"what's so exciting about these things? 
they're not the fastest runners.\"\nif there'd been an economy centered around running, you wouldn't invest in deploying a human\n(well, unless you needed a stamina runner, but that's something of a separate issue, let's consider just running races)\nyou would invest on improving cheetahs\nbecause the pile of human G isn't large enough that their G beats a specialized naturally selected cheetah\n\n\n\n[Christiano][12:17]\nhow are you improving cheetahs in the analogy?\nyou are trying random variants to see what works?\n\n\n\n\n[Yudkowsky][12:18]\nusing conventional, well-tested technology like MUSCLES and TENDONS\ntrying variants on those\n\n\n\n[Christiano][12:18]\nok\nand you think that G doesn't help you improve on muscles and tendons?\nuntil you have a big pile of it?\n\n\n\n\n[Yudkowsky][12:18]\nnot as a metaphor but as simple historical fact, that's how it played out\nit takes a whole big pile of G to go faster than a cheetah\n\n\n\n[Christiano][12:19]\nas a matter of fact there is no one investing in making better cheetahs\nso it seems like we're already playing analogy-game\n\n\n\n\n[Yudkowsky][12:19]\nthe natural selection of cheetahs is investing in it\nit's not doing so by copying humans because of fundamental limitations\nhowever if we replace it with an average human investor, it still doesn't copy humans, why would it\n\n\n\n[Christiano][12:19]\nthat's the part that is silly\nor like, it needs more analogy\n\n\n\n\n[Yudkowsky][12:19]\nhow so?  humans aren't the fastest.\n\n\n\n[Christiano][12:19]\nhumans are great at breeding animals\nso if I'm natural selection personified, the thing to explain is why I'm not using some of that G to improve on my selection\nnot why I'm not using G to build a car\n\n\n\n\n[Yudkowsky][12:20]\nI'm… confused\nis this implying that a key aspect of your model is that people are using AI to decide which AI tech to invest in?\n\n\n\n[Christiano][12:20]\nno\nI think I just don't understand your analogy\nhere in the actual world, some people are trying to make faster robots by tinkering with robot designs\nand then someone somewhere is training their AGI\n\n\n\n\n[Yudkowsky][12:21]\nwhat I'm saying is that you can imagine a little cheetah investor going, \"I'd like to copy and imitate some other species's tricks to make my cheetahs faster\" and they're looking enviously at falcons, not at humans\nnot until very late in the game\n\n\n\n[Christiano][12:21]\nand the relevant question is whether the pre-AGI thing is helpful for automating the work that humans are doing while they tinker with robot designs\nthat seems like the actual world\nand the interesting claim is you saying \"nope, not very\"\n\n\n\n\n[Yudkowsky][12:22]\nI am again confused.  Does it matter to your model whether the pre-AGI thing is helpful for automating \"tinkering with robot designs\" or just profitable machine translation?  
Either seems like it induces equivalent amounts of investment.\nIf anything the latter induces much more investment.\n\n\n\n[Christiano][12:23]\nsure, I'm fine using \"tinkering with robot designs\" as a lower bound\nboth are fine\nthe point is I have no idea what you are talking about in the analogy\nwhat is analogous to what?\nI thought cheetahs were analogous to faster robots\n\n\n\n\n[Yudkowsky][12:23]\nfaster cheetahs are analogous to more profitable robots\n\n\n\n[Christiano][12:23]\nsure\nso you have some humans working on making more profitable robots, right?\nwho are tinkering with the robots, in a way analogous to natural selection tinkering with cheetahs?\n\n\n\n\n[Yudkowsky][12:24]\nI'm suggesting replacing the Natural Selection of Cheetahs with a new optimizer that has the Copy Competitor and Invest In Easily-Predictable Returns feature\n\n\n\n[Christiano][12:24]\nOK, then I don't understand what those are analogous to\nlike, what is analogous to the humans who are tinkering with robots, and what is analogous to the humans working on AGI?\n\n\n\n\n[Yudkowsky][12:24]\nand observing that, even this case, the owner of Cheetahs Inc. would not try to copy Humans Inc.\n\n\n\n[Christiano][12:25]\nhere's the analogy that makes sense to me\nnatural selection is working on making faster cheetahs = some humans tinkering away to make more profitable robots\nnatural selection is working on making smarter humans = some humans who are tinkering away to make more powerful AGI\nnatural selection doesn't try to copy humans because they suck at being fast = robot-makers don't try to copy AGI-makers because the AGIs aren't very profitable robots\n\n\n\n\n[Yudkowsky][12:26]\nwith you so far\n\n\n\n[Christiano][12:26]\neventually humans build cars once they get smart enough = eventually AGI makes more profitable robots once it gets smart enough\n\n\n\n\n[Yudkowsky][12:26]\nyup\n\n\n\n[Christiano][12:26]\ngreat, seems like we're on the same page then\n\n\n\n\n[Yudkowsky][12:26]\nand by this point it is LATE in the game\n\n\n\n[Christiano][12:27]\ngreat, with you still\n\n\n\n\n[Yudkowsky][12:27]\nbecause the smaller piles of G did not produce profitable robots\n\n\n\n[Christiano][12:27]\nbut there's a step here where you appear to go totally off the rails\n\n\n\n\n[Yudkowsky][12:27]\nor operate profitable robots\nsay on\n\n\n\n[Christiano][12:27]\ncan we just write out the sequence of AGIs, AGI(1), AGI(2), AGI(3)… in analogy with the sequence of human ancestors H(1), H(2), H(3)…?\n\n\n\n\n[Yudkowsky][12:28]\nIs the last member of the sequence H(n) the one that builds cars and then immediately destroys the world before anything that operates on Cheetah Inc's Owner's scale can react?\n\n\n\n[Christiano][12:28]\nsure\nI don't think of it as the last\nbut it's the last one that actually arises?\nmaybe let's call it the last, H(n)\ngreat\nand now it seems like you are imagining an analogous story, where AGI(n) takes over the world and maybe incidentally builds some more profitable robots along the way\n(building more profitable robots being easier than taking over the world, but not so much easier that AGI(n-1) could have done it unless we make our version numbers really close together, close enough that deploying AGI(n-1) is stupid)\n\n\n\n\n[Yudkowsky][12:31]\nif this plays out in the analogous way to human intelligence, AGI(n) becomes able to build more profitable robots 1 hour before it becomes able to take over the world; my worldview does not put that as the median estimate, but I do want to 
observe that this is what happened historically\n\n\n\n[Christiano][12:31]\nsure\n\n\n\n\n[Yudkowsky][12:32]\nok, then I think we're still on the same page as written so far\n\n\n\n[Christiano][12:32]\nso the question that's interesting in the real world is which AGI is useful for replacing humans in the design-better-robots task; is it 1 hour before the AGI that takes over the world, or 2 years, or what?\n\n\n\n\n[Yudkowsky][12:33]\nmy worldview tends to make a big ol' distinction between \"replace humans in the design-better-robots task\" and \"run as a better robot\", if they're not importantly distinct from your standpoint can we talk about the latter?\n\n\n\n[Christiano][12:33]\nthey seem importantly distinct\ntotally different even\nso I think we're still on the same page\n\n\n\n\n[Yudkowsky][12:34]\nok then, \"replacing humans at designing better robots\" sure as heck sounds to Eliezer like the world is about to end or has already ended\n\n\n\n[Christiano][12:34]\nmy whole point is that in the evolutionary analogy we are talking about \"run as a better robot\" rather than \"replace humans in the design-better-robots-task\"\nand indeed there is no analog to \"replace humans in the design-better-robots-task\"\nwhich is where all of the action and disagreement is\n\n\n\n\n[Yudkowsky][12:35][12:36]\nwell, yes, I was exactly trying to talk about when humans start running as better cheetahs\nand how that point is still very late in the game\n\n\n\n\nnot as late as when humans take over the job of making the thing that makes better cheetahs, aka humans start trying to make AGI, which is basically the fingersnap end of the world from the perspective of Cheetahs Inc.\n\n\n\n[Christiano][12:36]\nOK, but I don't care when humans are better cheetahs—in the real world, when AGIs are better robots. In the real world I care about when AGIs start replacing humans in the design-better-robots-task. I'm game to use evolution as an analogy to help answer that question (where I do agree that it's informative), but want to be clear what's actually at issue.\n\n\n\n\n[Yudkowsky][12:37]\nso, the thing I was trying to work up to, is that my model permits the world to end in a way where AGI doesn't get tons of investment because it has an insufficiently huge pile of G that it could run as a better robot.  people are instead investing in the equivalents of cheetahs.\nI don't understand why your model doesn't care when humans are better cheetahs.  AGIs running as more profitable robots is what induces the huge investments in AGI that your model requires to produce very close competition. ?\n\n\n\n[Christiano][12:38]\nit's a sufficient condition, but it's not the most robust one at all\nlike, I happen to think that in the real world AIs actually are going to be incredibly profitable robots, and that's part of my boring view about what AGI looks like\nBut the thing that's more robust is that the sub-taking-over-world AI is already really important, and receiving huge amounts of investment, as something that automates the R&D process. And it seems like the best guess given what we know now is that this process starts years before the singularity.\nFrom my perspective that's where most of the action is. And your views on that question seem related to your views on how e.g. 
AGI is a fundamentally different ballgame from making better robots (whereas I think the boring view is that they are closely related), but that's more like an upstream question about what you think AGI will look like, most relevant because I think it's going to lead you to make bad short-term predictions about what kinds of technologies will achieve what kinds of goals.\n\n\n\n\n[Yudkowsky][12:41]\nbut not all AIs are the same branch of the technology tree.  factory robotics are already really important and they are \"AI\" but, on my model, they're currently on the cheetah branch rather than the hominid branch of the tech tree; investments into better factory robotics are not directly investments into improving MuZero, though they may buy chips that MuZero also buys.\n\n\n\n[Christiano][12:42]\nYeah, I think you have a mistaken view of AI progress. But I still disagree with your bottom line even if I adopt (this part of) your view of AI progress.\nNamely, I think that the AGI line is mediocre before it is great, and the mediocre version is spectacularly valuable for accelerating R&D (mostly AGI R&D).\nThe way I end up sympathizing with your view is if I adopt both this view about the tech tree, + another equally-silly-seeming view about how close the AGI line is to fooming (or how inefficient the area will remain as we get close to fooming)\n\n\n\n \n17.4. Human generality and body manipulation\n \n\n[Yudkowsky][12:43]\nso metaphorically, you require that humans be doing Great at Various Things and being Super Profitable way before they develop agriculture; the rise of human intelligence cannot be a case in point of your model because the humans were too uncompetitive at most animal activities for unrealistically long (edit: compared to the AI case)\n\n\n\n[Christiano][12:44]\nI don't understand\nHuman brains are really great at basically everything as far as I can tell?\nlike it's not like other animals are better at manipulating their bodies\nwe crush them\n\n\n\n\n[Yudkowsky][12:44]\nif we've got weapons, yes\n\n\n\n[Christiano][12:44]\nhuman bodies are also pretty great, but they are not the greatest on every dimension\n\n\n\n\n[Yudkowsky][12:44]\nwrestling a chimpanzee without weapons is famously ill-advised\n\n\n\n[Christiano][12:44]\nno, I mean everywhere\nchimpanzees are practically the same as humans in the animal kingdom\nthey have almost as excellent a brain\n\n\n\n\n[Yudkowsky][12:45]\nas is attacking an elephant with your bare hands\n\n\n\n[Christiano][12:45]\nthat's not because of elephant brains\n\n\n\n\n[Yudkowsky][12:45]\nwell, yes, exactly\nyou need a big pile of G before it's profitable\nso big the game is practically over by then\n\n\n\n[Christiano][12:45]\nthis seems so confused\nbut that's exciting I guess\nlike, I'm saying that the brains to automate R&D\nare similar to the brains to be a good factory robot\nanalogously, I think the brains that humans use to do R&D\nare similar to the brains we use to manipulate our body absurdly well\nI do not think that our brains make us fast\nthey help a tiny bit but not much\nI do not think the physical actuators of the industrial robots will be that similar to the actuators of the robots that do R&D\nthe claim is that the problem of building the brain is pretty similar\njust as the problem of building a brain that can do science is pretty similar to the problem of building a brain that can operate a body really well\n(and indeed I'm claiming that human bodies kick ass relative to other animal bodies—there may be particular 
tasks other animal brains are pre-built to be great at, but (i) humans would be great at those too if we were under mild evolutionary pressure with our otherwise excellent brains, (ii) there are lots of more general tests of how good you are at operating a body and we will crush it at those tests)\n(and that's not something I know much about, so I could update as I learned more about how actually we just aren't that good at motor control or motion planning)\n\n\n\n\n[Yudkowsky][12:49]\nso on your model, we can introduce humans to a continent, forbid them any tool use, and they'll still wipe out all the large animals?\n\n\n\n[Christiano][12:49]\n(but damn we seem good to me)\nI don't understand why that would even plausibly follow\n\n\n\n\n[Yudkowsky][12:49]\nbecause brains are profitable early, even if they can't build weapons?\n\n\n\n[Christiano][12:49]\nI'm saying that if you put our brains in a big animal body\nwe would wipe out the big animals\nyes, I think brains are great\n\n\n\n\n[Yudkowsky][12:50]\nbecause we'd still have our late-game pile of G and we would build weapons\n\n\n\n[Christiano][12:50]\nno, I think a human in a big animal body, with brain adapted to operate that body instead of our own, would beat a big animal straightforwardly\nwithout using tools\n\n\n\n\n[Yudkowsky][12:51]\nthis is a strange viewpoint and I do wonder whether it is a crux of your view\n\n\n\n[Christiano][12:51]\nthis feels to me like it's more on the \"eliezer vs paul disagreement about the nature of AI\" rather than \"eliezer vs paul on civilizational inadequacy and continuity\", but enough changes on \"nature of AI\" would switch my view on the other question\n\n\n\n\n[Yudkowsky][12:51]\nlike, ceteris paribus maybe a human in an elephant's body beats an elephant after a burn-in practice period?  because we'd have a strict intelligence advantage?\n\n\n\n[Christiano][12:52]\npractice may or may not be enough\nbut if you port over the excellent human brain to the elephant body, then run evolution for a brief burn-in period to get all the kinks sorted out?\nelephants are pretty close to humans so it's less brutal than for some other animals (and also are elephants the best example w.r.t. the possibility of direct conflict?) but I totally expect us to win\n\n\n\n\n[Yudkowsky][12:53]\nI unfortunately need to go do other things in advance of an upcoming call, but I feel like disagreeing about the past is proving noticeably more interesting, confusing, and perhaps productive, than disagreeing about the future\n\n\n\n[Christiano][12:53]\nactually probably I just think practice is enough\nI think humans have way more dexterity, better locomotion, better navigation, better motion planning…\nsome of that is having bodies optimized for those things (esp. dexterity), but I also think most animals just don't have the brains for it, with elephants being one of the closest calls\nI'm a little bit scared of talking to zoologists or whoever the relevant experts are on this question, because I've talked to bird people a little bit and they often have very strong \"humans aren't special, animals are super cool\" instincts even in cases where that take is totally and obviously insane. 
But if we found someone reasonable in that area I'd be interested to get their take on this.\nI think this is pretty important for the particular claim \"Is AGI like other kinds of ML?\"; that definitely doesn't persuade me to be into fast takeoff on its own though it would be a clear way the world is more Eliezer-like than Paul-like\nI think I do further predict that people who know things about animal intelligence, and don't seem to have identifiably crazy views about any adjacent questions that indicate a weird pro-animal bias, will say that human brains are a lot better than other animal brains for dexterity/locomotion/similar physical tasks (and that the comparison isn't that close for e.g. comparing humans vs big cats).\nIncidentally, seems like DM folks did the same thing this year, presumably publishing now because they got scooped. Looks like they probably have a better algorithm but used harder environments instead of Atari. (They also evaluate the algorithm SPR+MuZero I mentioned which indeed gets one factor of 2x improvement over MuZero alone, roughly as you'd guess): https://arxiv.org/pdf/2111.01587.pdf\n\n\n\n\n[Barnes][13:45]\nMy DM friend says they tried it before they were focused on data efficiency and it didn't help in that regime, sounds like they ignored it for a while after that\n\n\n\n\n[Christiano: ]\n\n\n\n\n\n\n\n\n[Christiano][13:48]\nOverall the situation feels really boring to me. Not sure if DM having a highly similar unpublished result is more likely on my view than Eliezer's (and initially ignoring the method because they weren't focused on sample-efficiency), but at any rate I think it's not anywhere close to falsifying my view.\n\n\n\n \n18. Follow-ups to the Christiano/Yudkowsky conversation\n \n\n[Karnofsky][9:39]\nGoing to share a point of confusion about this latest exchange.\nIt started with Eliezer saying this:\n\nThing that (if true) strikes me as… straight-up falsifying Paul's view as applied to modern-day AI, at the frontier of the most AGI-ish part of it and where Deepmind put in substantial effort on their project? EfficientZero (allegedly) learns Atari in 100,000 frames. Caveat: I'm not having an easy time figuring out how many frames MuZero would've required to achieve the same performance level. MuZero was trained on 200,000,000 frames but reached what looks like an allegedly higher high; the EfficientZero paper compares their performance to MuZero on 100,000 frames, and claims theirs is much better than MuZero given only that many frames.\n\nSo at this point, I thought Eliezer's view was something like: \"EfficientZero represents a several-OM (or at least one-OM?) jump in efficiency, which should shock the hell out of Paul.\" The upper bound on the improvement is 2000x, so I figured he thought the corrected improvement would be some number of OMs.\nBut very shortly afterwards, Eliezer quotes Gwern's guess of a 4x improvement, and Paul then said:\n\nConcretely, this is a paper that adds a few techniques to improve over MuZero in a domain that (it appears) wasn't a significant focus of MuZero. I don't know how much it improves but I can believe gwern's estimates of 4x.\nI'd guess MuZero itself is a 2x improvement over the baseline from a year ago, which was maybe a 4x improvement over the algorithm from a year before that. 
If that's right, then no it's not mindblowing on my view to have 4x progress one year, 2x progress the next, and 4x progress the next.\n\nEliezer never seemed to push back on this 4x-2x-4x claim.\nWhat I thought would happen after the 4x estimate and 4x-2x-4x claim: Eliezer would've said \"Hmm, we should nail down whether we are talking about 4x-2x-4x or something more like 4x-2x-100x. If it's 4x-2x-4x, then I'll say 'never mind' re: my comment that this 'straight-up falsifies Paul's view.' At best this is just an iota of evidence or something.\"\nWhy isn't that what happened? Did Eliezer mean all along to be saying that a 4x jump on Atari sample efficiency would \"straight-up falsify Paul's view?\" Is a 4x jump the kind of thing Eliezer thinks is going to power a jumpy AI timeline?\n\n\n\n\n[Ngo: ]\n[Shah: ]\n\n\n\n\n\n\n\n\n[Yudkowsky][11:16]\nThis is a proper confusion and probably my fault; I also initially thought it was supposed to be 1-2 OOM and should've made it clearer that Gwern's 4x estimate was less of a direct falsification.\nI'm not yet confident Gwern's estimate is correct.  I just got a reply from my query to the paper's first author which reads:\n\nDear Eliezer: It's a good question. But due to the limits of resources and time, we haven't evaluated the sample efficiency towards different frames systematically. I think it's not a trivial question as the required time and resources are much expensive for the 200M frames setting, especially concerning the MCTS-based methods. Maybe you need about several days or longer to finish a run with GPUs in that setting. I hope my answer can help you. Thank you for your email.\n\nI replied asking if Gwern's 3.8x estimate sounds right to them.\nA 10x improvement could power what I think is a jumpy AI timeline.  I'm currently trying to draft a depiction of what I think an unrealistically dignified but computationally typical end-of-world would look like if it started in 2025, and my first draft of that had it starting with a new technique published by Google Brain that was around a 10x improvement in training speeds for very large networks at the cost of higher inference costs, but which turned out to be specially applicable to online learning.\nThat said, I think the 10x part isn't either a key concept or particularly likely, and it's much more likely that hell breaks loose when an innovation changes some particular step of the problem from \"can't realistically be done at all\" to \"can be done with a lot of computing power\", which was what I had being the real effect of that hypothetical Google Brain innovation when applied to online learning, and I will probably rewrite to reflect that.\n\n\n\n[Karnofsky][11:29]\nThat's helpful, thanks.\nRe: \"can't realistically be done at all\" to \"can be done with a lot of computing power\", cpl things:\n1. Do you think a 10x improvement in efficiency at some particular task could qualify as this? Could a smaller improvement?\n2. I thought you were pretty into the possibility of a jump from \"can't realistically be done at all\" to \"can be done with a small amount of computing power,\" eg some random ppl with a $1-10mm/y budget blowing past mtpl labs with >$1bb/y budgets. 
Is that wrong?\n\n\n\n\n[Yudkowsky][13:44]\n1 – yes and yes, my revised story for how the world ends looks like Google Brain publishing something that looks like only a 20% improvement but which is done in a way that lets it be adapted to make online learning by gradient descent \"work at all\" in DeepBrain's ongoing Living Zero project (not an actual name afaik)\n2 – that definitely remains very much allowed in principle, but I think it's not my current mainline probability for how the world's end plays out – although I feel hesitant / caught between conflicting heuristics here.\nI think I ended up much too conservative about timelines and early generalization speed because of arguing with Robin Hanson, and don't want to make a similar mistake here, but on the other hand a lot of the current interesting results have been from people spending huge compute (as wasn't the case to nearly the same degree in 2008) and if things happen on short timelines it seems reasonable to guess that the future will look that much like the present.  This is very much due to cognitive limitations of the researchers rather than a basic fact about computer science, but cognitive limitations are also facts and often stable ones.\n\n\n\n[Karnofsky][14:35]\nHm OK. I don't know what \"online learning by gradient descent\" means such that it doesn't work at all now (does \"work at all\" mean something like \"work with human-ish learning efficiency?\")\n\n\n\n\n[Yudkowsky][15:07]\nI mean, in context, it means \"works for Living Zero at the performance levels where it's running around accumulating knowledge\", which by hypothesis it wasn't until that point.\n\n\n\n[Karnofsky][15:12]\nHm. I am feeling pretty fuzzy on whether your story is centrally about:\n1. A <10x jump in efficiency at something important, leading pretty directly/straightforwardly to crazytown\n2. A 100x ish jump in efficiency at something important, which may at first \"look like\" a mere <10x jump in efficiency at something else\n#2 is generally how I've interpreted you and how the above sounds, but under #2 I feel like we should just have consensus that the Atari thing being 4x wouldn't be much of an update. Maybe we already do (it was a bit unclear to me from your msg)\n(And I totally agree that we haven't established the Atari thing is only 4x – what I'm saying is it feels like the conversation should've paused there)\n\n\n\n\n[Yudkowsky][15:13]\nThe Atari thing being 4x over 2 years is I think legit not an update because that's standard software improvement speed\nyou're correct that it should pause there\n\n\n\n[Karnofsky][15:14]  (Nov. 
5)\n\n\n\n\n\n[Yudkowsky] [15:24]\nI think that my central model is something like – there's a central thing to general intelligence that starts working when you get enough pieces together and they coalesce, which is why humans went down this evolutionary gradient by a lot before other species got 10% of the way there in terms of output; and then it takes a big pile of that thing to do big things, which is why humans didn't go faster than cheetahs until extremely late in the game.\nso my visualization of how the world starts to end is \"gear gets added and things start to happen, maybe slowly-by-my-standards at first such that humans keep on pushing it along rather than it being self-moving, but at some point starting to cumulate pretty quickly in the same way that humans cumulated pretty quickly once they got going\" rather than \"dial gets turned up 50%, things happen 50% faster, every year\".\n\n\n\n[Yudkowsky][15:16]  (Nov. 5, switching channels)\nas a quick clarification, I agree that if this is 4x sample efficiency over 2 years then that doesn't at all challenge Paul's view\n\n\n\n[Christiano][0:20]\nFWIW, I felt like the entire discussion of EfficientZero was a concrete example of my view making a number of more concentrated predictions than Eliezer that were then almost immediately validated. In particular, consider the following 3 events:\n\nThe quantitative effect size seems like it will turn out to be much smaller than Eliezer initially believed, much closer to being in line with previous progress.\nDeepMind had relatively similar results that got published immediately after our discussion, making it look like random people didn't pull ahead of DM after all.\nDeepMind appears not to have cared much about the metric in question, as evidenced by (i) Beth's comment above, which is basically what I said was probably going on, (ii) they barely even mention Atari sample-efficiency in their paper about similar methods.\n\nIf only 1 of these 3 things had happened, then I agree this would have been a challenge to my view that would make me update in Eliezer's direction. But that's only possible if Eliezer actually assigns a higher probability than me to <= 1 of these things happening, and hence a lower probability to >= 2 of them happening. So if we're playing a reasonable epistemic game, it seems like I need to collect some epistemic credit every time something looks boring to me.\n\n\n\n\n[Yudkowsky][15:30]\nI broadly agree; you win a Bayes point.  I think some of this (but not all!) was due to my tripping over my own feet and sort of rushing back with what looked like a Relevant Thing without contemplating the winner's curse of exciting news, the way that paper authors tend to frame things in more exciting rather than less exciting ways, etc.  But even if you set that aside, my underlying AI model said that was a thing which could happen (which is why I didn't have technically rather than sociologically triggered skepticism) and your model said it shouldn't happen, and it currently looks like it mostly didn't happen, so you win a Bayes point.\nNotes that some participants may deem obvious(?) 
but that I state expecting wider readership:\n\nJust like markets are almost entirely efficient (in the sense that, even when they're not efficient, you can only make a very small fraction of the money that could be made from the entire market if you owned a time machine), even sharp and jerky progress has to look almost entirely not so fast almost all the time if the Sun isn't right in the middle of going supernova.  So the notion that progress sometimes goes jerky and fast does have to be evaluated by a portfolio view over time.  In worlds where progress is jerky even before the End Days, Paul wins soft steady Bayes points in most weeks and then I win back more Bayes points once every year or two.\nWe still don't have a very good idea of how much longer you would need to train the previous algorithm to match the performance of the new algorithm, just an estimate by Gwern based off linearly extrapolating a graph in a paper.  But, also to be clear, not knowing something is not the same as expecting it to update dramatically, and you have to integrate over the distribution you've got.\nIt's fair to say, \"Hey, Eliezer, if you tripped over your own feet here, but only noticed that because Paul was around to call it, maybe you're tripping over your feet at other times when Paul isn't around to check your thoughts in detail\" – I don't want to minimize the Bayes point that Paul won either.\n\n\n\n\n[Christiano][16:29]\nAgreed that it's (i) not obvious how large the EfficientZero gain was, and in general it's not a settled question what happened, (ii) it's not that big an update, it needs to be part of a portfolio (but this is indicative of the kind of thing I'd want to put in the portfolio), (iii) it generally seems pro-social to flag potentially relevant stuff without the presumption that you are staking a lot on it.\n\n\n\n \n\nThe post Christiano and Yudkowsky on AI predictions and human intelligence appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Christiano and Yudkowsky on AI predictions and human intelligence", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=2", "id": "c04ea9e75f986ec87b5f5b91b2d42f71"} {"text": "February 2022 Newsletter\n\n\n\nAs of yesterday, we've released the final posts in the Late 2021 MIRI Conversations sequence, a collection of (relatively raw and unedited) AI strategy conversations:\n\nNgo's view on alignment difficulty\nNgo and Yudkowsky on scientific reasoning and pivotal acts\nChristiano and Yudkowsky on AI predictions and human intelligence\nShah and Yudkowsky on alignment failures\n\nEliezer Yudkowsky, Nate Soares, Paul Christiano, Richard Ngo, and Rohin Shah (and possibly other participants in the conversations) will be answering questions in an AMA this Wednesday; questions are currently open on LessWrong.\nOther MIRI updates\n\nScott Alexander gives his take on Eliezer's dialogue on biology-inspired AGI timelines: Biological Anchors: A Trick That Might Or Might Not Work.\nConcurrent with progress on math olympiad problems by OpenAI, Paul Christiano operationalizes an IMO challenge bet with Eliezer, reflecting their different views on the continuousness/predictability of AI progress and the length of AGI timelines.\n\nNews and links\n\nDeepMind's AlphaCode demonstrates good performance in programming competitions.\nBillionaire EA Sam Bankman-Fried announces the Future Fund, a philanthropic fund whose areas of interest include AI and \"loss of 
control\" scenarios.\nStuart Armstrong leaves the Future of Humanity Institute to found Aligned AI, a benefit corporation focusing on the problem of \"value extrapolation\".\n\n\nThe post February 2022 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "February 2022 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=2", "id": "7db0772a8d01b3d9132f9e8902f586f2"} {"text": "January 2022 Newsletter\n\n\nMIRI updates\n\nMIRI's $1.2 million Visible Thoughts Project bounty now has an FAQ, and an example of a successful partial run that you can use to inform your own runs.\nScott Alexander reviews the first part of the Yudkowsky/Ngo debate. See also Richard Ngo's reply, and Rohin Shah's review of several posts from the Late 2021 MIRI Conversations.\nFrom Evan Hubinger: How do we become confident in the safety of an ML system? and A positive case for how we might succeed at prosaic AI alignment (with discussion in the comments).\nThe ML Alignment Theory Scholars program, mentored by Evan Hubinger and run by SERI, has produced a series of distillations and expansions of prior alignment-relevant research.\nMIRI ran a small workshop this month on what makes some concepts better than others, motivated by the question of how revolutionary science (which is about discovering new questions to ask, new ontologies, and new concepts) works.\n\nNews and links\n\nDaniel Dewey makes his version of the case that future advances in deep learning pose a \"global risk\".\nBuck Shlegeris of Redwood Research discusses Worst-Case Thinking in AI Alignment.\nFrom Paul Christiano: Why I'm excited about Redwood Research's current project.\nPaul Christiano's Alignment Research Center is hiring researchers and research interns.\n\n\nThe post January 2022 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "January 2022 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=2", "id": "d1d48ae2ae4e9f530611265ca25e1446"} {"text": "December 2021 Newsletter\n\n\nMIRI is offering $200,000 to build a dataset of AI-dungeon-style writing annotated with the thoughts used in the writing process, and an additional $1,000,000 for scaling that dataset an additional 10x: the Visible Thoughts Project.\nAdditionally, MIRI is in the process of releasing a series of chat logs, the Late 2021 MIRI Conversations, featuring relatively unedited and raw conversations between Eliezer Yudkowsky, Richard Ngo, Paul Christiano, and a number of other AI x-risk researchers.\nAs background, we've also released an anonymous discussion with Eliezer Yudkowsky on AGI interventions (cf. Zvi Mowshowitz's summary) and Nate Soares' comments on Carlsmith's \"Is power-seeking AI an existential risk?\" (one of several public reviews of Carlsmith's report).\nThe logs so far:\n\nNgo and Yudkowsky on alignment difficulty — A pair of opening conversations asking how easy it is to avoid \"consequentialism\" in powerful AGI systems.\nNgo and Yudkowsky on AI capability gains — Richard and Eliezer continue their dialogue.\nYudkowsky and Christiano discuss \"Takeoff Speeds\" — Paul Christiano joins the conversation, and debates hard vs. 
soft takeoff with Eliezer.\nSoares, Tallinn, and Yudkowsky discuss AGI cognition — Jaan Tallinn and Nate Soares weigh in on the conversation so far.\nChristiano, Cotra, and Yudkowsky on AI progress — Paul and Eliezer begin a longer AGI forecasting discussion, joined by Ajeya Cotra.\nShulman and Yudkowsky on AI progress — Carl Shulman weighs in on the Paul/Eliezer/Ajeya conversation.\nMore Christiano, Cotra, and Yudkowsky on AI progress — A discussion of \"why should we expect early prototypes to be low-impact?\", and of concrete predictions.\nConversation on technology forecasting and gradualism — A larger-group discussion, following up on the Paul/Eliezer debate.\n\nEliezer additionally wrote a dialogue, Biology-Inspired AGI Timelines: The Trick That Never Works, which Holden Karnofsky responded to.\nNews and links\n\nHow To Get Into Independent Research On Alignment/Agency: John Wentworth gives an excellent overview of how to get started doing AI alignment research.\nA new summer fellowship, Principles of Intelligent Behavior in Biological and Social Systems, is seeking applicants to spend three months in 2022 working on AI alignment \"through studying analogies to many complex systems (evolution, brains, language, social structures…)\". Apply by January 16.\n\n\nThe post December 2021 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "December 2021 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=2", "id": "87de90390d2c2db909c0556d0d41651b"} {"text": "Ngo's view on alignment difficulty\n\n\n \nThis post features a write-up by Richard Ngo on his views, with inline comments.\n \nColor key:\n\n\n\n\n  Chat  \n  Google Doc content  \n  Inline comments  \n\n\n\n\n \n13. Follow-ups to the Ngo/Yudkowsky conversation\n \n13.1. Alignment difficulty debate: Richard Ngo's case\n \n \n\n[Ngo][9:31]  (Sep. 25)\nAs promised, here's a write-up of some thoughts from my end. In particular, since I've spent a lot of the debate poking Eliezer about his views, I've tried here to put forward more positive beliefs of my own in this doc (along with some more specific claims): [GDocs link]\n\n\n\n\n[Soares: ] \n\n\n\n\n\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nWe take as a starting observation that a number of \"grand challenges\" in AI have been solved by AIs that are very far from the level of generality which people expected would be needed. Chess, once considered to be the pinnacle of human reasoning, was solved by an algorithm that's essentially useless for real-world tasks. Go required more flexible learning algorithms, but policies which beat human performance are still nowhere near generalising to anything else; the same for StarCraft, DOTA, and the protein folding problem. Now it seems very plausible that AIs will even be able to pass (many versions of) the Turing Test while still being a long way from AGI.\n\n\n\n\n[Yudkowsky][11:26]  (Sep. 25 comment)\n\nNow it seems very plausible that AIs will even be able to pass (many versions of) the Turing Test while still being a long way from AGI.\n\nI remark:  Restricted versions of the Turing Test.  Unrestricted passing of the Turing Test happens after the world ends.  Consider how smart you'd have to be to pose as an AGI to an AGI; you'd need all the cognitive powers of an AGI as well as all of your human powers.\n\n\n\n[Ngo][11:24]  (Sep. 
29 comment)\nPerhaps we can quantify the Turing test by asking something like:\n\nWhat percentile of competence is the judge?\nWhat percentile of competence are the humans who the AI is meant to pass as?\nHow much effort does the judge put in (measured in, say, hours of strategic preparation)?\n\nDoes this framing seem reasonable to you? And if so, what are the highest numbers for each of these metrics that correspond to a Turing test which an AI could plausibly pass before the world ends?\n\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nI expect this trend to continue until after we have AIs which are superhuman at mathematical theorem-proving, programming, many other white-collar jobs, and many types of scientific research. It seems like Eliezer doesn't. I'll highlight two specific disagreements which seem to play into this.\n\n\n\n\n[Yudkowsky][11:28]  (Sep. 25 comment)\n\ndoesn't\n\nEh?  I'm pretty fine with something proving the Riemann Hypothesis before the world ends.  It came up during my recent debate with Paul, in fact.\nNot so fine with something designing nanomachinery that can be built by factories built by proteins.  They're legitimately different orders of problem, and it's no coincidence that the second one has a path to pivotal impact, and the first does not.\n\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nA first disagreement is related to Eliezer's characterisation of GPT-3 as a shallow pattern-memoriser. I think there's a continuous spectrum between pattern-memorisation and general intelligence. In order to memorise more and more patterns, you need to start understanding them at a high level of abstraction, draw inferences about parts of the patterns based on other parts, and so on. When those patterns are drawn from the real world, then this process leads to the gradual development of a world-model.\nThis position seems more consistent with the success of deep learning so far than Eliezer's position (although my advocacy of it loses points for being post-hoc; I was closer to Eliezer's position before the GPTs). It also predicts that deep learning will lead to agents which can reason about the world in increasingly impressive ways (although I don't have a strong position on the extent to which new architectures and algorithms will be required for that). I think that the spectrum from less to more intelligent animals (excluding humans) is a good example of what it looks like to gradually move from pattern-memorisation to increasingly sophisticated world-models and abstraction capabilities.\n\n\n\n\n[Yudkowsky][11:30]  (Sep. 25 comment)\n\nIn order to memorise more and more patterns, you need to start understanding them at a high level of abstraction, draw inferences about parts of the patterns based on other parts, and so on.\n\nCorrect.  You can believe this and not believe that exactly GPT-like architectures can keep going deeper until their overlap of a greater number of patterns achieves the same level of depth and generalization as human depth and generalization from fewer patterns, just like pre-transformer architectures ran into trouble in memorizing deeper patterns than the shallower ones those earlier systems could memorize.\n\n\n\n[Ngo]  (Sep. 
25 Google Doc)\nI expect that Eliezer won't claim that pattern-memorisation is unrelated to general intelligence, but will claim that a pattern-memoriser needs to undergo a sharp transition in its cognitive algorithms before it can reason reliably about novel domains (like open scientific problems) – with his main argument for that being the example of the sharp transition undergone by humans.\nHowever, it seems unlikely to me that humans underwent a major transition in our underlying cognitive algorithms since diverging from chimpanzees, because our brains are so similar to those of chimps, and because our evolution from chimps didn't take very long. This evidence suggests that we should favour explanations for our success which don't need to appeal to big algorithmic changes, if we have any such explanations; and I think we do. More specifically, I'd characterise the three key differences between humans and chimps as:\n\nHumans have bigger brains.\nHumans have a range of small adaptations primarily related to motivation and attention, such as infant focus on language and mimicry, that make us much better at cultural learning.\nHumans grow up in a rich cultural environment.\n\n\n\n\n\n[Ngo][9:13]  (Sep. 23 comment on earlier draft)\n\nbigger brains\n\nI recall a 3-4x difference; but this paper says 5-6x for frontal cortex: https://www.nature.com/articles/nn814\n\n\n\n\n[Tallinn][3:24]  (Sep. 26 comment)\n\nlanguage and mimicry\n\n\"apes are unable to ape sounds\" claims david deutsch in \"the beginning of infinity\"\n\n\n\n\n[Barnes][8:09]  (Sep. 23 comment on earlier draft)\n\n[Humans grow up in a rich cultural environment.]\n\nmuch richer cultural environment including deliberate teaching\n\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nI claim that the discontinuity between the capabilities of humans and chimps is mainly explained by the general intelligence of chimps not being aimed in the direction of learning the skills required for economically valuable tasks, which in turn is mainly due to chimps lacking the \"range of small adaptations\" mentioned above.\nMy argument is a more specific version of Paul's claim that chimp evolution was not primarily selecting for doing things like technological development. In particular, it was not selecting for them because no cumulative cultural environment existed while chimps were evolving, and selection for the application of general intelligence to technological development is much stronger in a cultural environment. (I claim that the cultural environment was so limited before humans mainly because cultural accumulation is very sensitive to transmission fidelity.)\nBy contrast, AIs will be trained in a cultural environment (including extensive language use) from the beginning, so this won't be a source of large gains for later systems.\n\n\n\n\n[Ngo][6:01]  (Sep. 22 comment on earlier draft)\n\nmore specific version of Paul's claim\n\nBased on some of Paul's recent comments, this may be what he intended all along; though I don't recall his original writings on takeoff speeds making this specific argument.\n\n\n\n\n[Shulman][14:23]  (Sep. 25 comment)\n\n(I claim that the cultural environment was so limited before humans mainly because cultural accumulation is very sensitive to transmission fidelity.)\n\nThere can be other areas with superlinear effects from repeated application of  a skill. 
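(A minimal illustrative sketch of the "very sensitive to transmission fidelity" / superlinear-returns point, with made-up numbers: if each step of a chain — a transmission of a cultural practice, or one step in a long derivation — succeeds with probability p, an n-step chain succeeds with probability p**n, so modest gains in per-step reliability compound superlinearly with chain length.)

# Sketch only; the per-step probabilities and chain lengths are invented for illustration.
def chain_success(p_step: float, n_steps: int) -> float:
    # Probability that every one of n independent steps succeeds.
    return p_step ** n_steps

for p in (0.90, 0.99, 0.999):
    print(p, [chain_success(p, n) for n in (10, 100, 1000)])
# At 1000 steps, p=0.99 gives ~4e-5 while p=0.999 gives ~0.37:
# a 10x cut in per-step error rate buys a several-thousand-fold
# gain in end-to-end success for long chains.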
There's reason to think that the most productive complex industries tend to have that character.\nMaking individual minds able to correctly execute long chains of reasoning by reducing per-step error rate could plausibly have very superlinear effects in programming, engineering, management, strategy, persuasion, etc. And you could have new forms of 'super-culture' that don't work with humans.\nhttps://ideas.repec.org/a/eee/jeborg/v85y2013icp1-10.htmlhttps://ideas.repec.org/a/eee/jeborg/v85y2013icp1-10.html\n\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nIf true, this argument would weigh against Eliezer's claims about agents which possess a core of general intelligence being able to easily apply that intelligence to a wide range of tasks. And I don't think that Eliezer has a compelling alternative explanation of the key cognitive differences between chimps and humans (the closest I've seen in his writings is the brainstorming at the end of this post).\nIf this is the case, I notice an analogy between Eliezer's argument against Kurzweil, and my argument against Eliezer. Eliezer attempted to put microfoundations underneath the trend line of Moore's law, which led to a different prediction than Kurzweil's straightforward extrapolation. Similarly, my proposed microfoundational explanation of the chimp-human gap gives rise to a different prediction than Eliezer's more straightforward, non-microfoundational extrapolation.\n\n\n\n\n[Yudkowsky][11:39]  (Sep. 25 comment)\n\nSimilarly, my proposed microfoundational explanation of the chimp-human gap gives rise to a different prediction than Eliezer's more straightforward, non-microfoundational extrapolation.\n\nEliezer does not use \"non-microfoundational extrapolations\" for very much of anything, but there are obvious reasons why the greater Earth does not benefit from me winning debates through convincingly and correctly listing all the particular capabilities you need to add over and above what GPToid architectures can achieve, in order to achieve AGI.  Nobody else with a good model of larger reality will publicly describe such things in a way they believe is correct.  I prefer not to argue convincingly but wrongly.  But, no, it is not Eliezer's way to sound confident about anything unless he thinks he has a more detailed picture of the microfoundations than the one you are currently using yourself.\n\n\n\n[Ngo][11:40]  (Sep. 29 comment)\nGood to know; apologies for the incorrect inference.\nGiven that this seems like a big sticking point in the debate overall, do you have any ideas about how to move forward while avoiding infohazards?\n\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nMy position makes some predictions about hypothetical cases:\n\nIf chimpanzees had the same motivational and attention-guiding adaptations towards cultural learning and cooperation that humans do, and were raised in equally culturally-rich environments, then they could become economically productive workers in a range of jobs (primarily as manual laborers, but plausibly also for operating machinery, etc).\n\nResults from chimps raised in human families, like Washoe, seem moderately impressive, although still very uncertain. There's probably a lot of bias towards positive findings – but on the other hand, it's only been done a handful of times, and I expect that more practice at it would lead to much better results.\nComparisons between humans and chimps which aren't raised in similar ways to humans are massively biased towards humans. 
For the purposes of evaluating general intelligence, comparisons between chimpanzees and feral children seem fairer (although it's very hard to know how much the latter were affected by non-linguistic childhoods as opposed to abuse or pre-existing disabilities).\n\n\nConsider a hypothetical species which has the same level of \"general intelligence\" that chimpanzees currently have, but is as well-adapted to the domains of abstract reasoning and technological development as chimpanzee behaviour is to the domain of physical survival (e.g. because they evolved in an artificial environment where their fitness was primarily determined by their intellectual contributions). I claim that this species would have superhuman scientific research capabilities, and would be able to make progress in novel areas of science (analogously to how chimpanzees can currently learn to navigate novel physical landscapes).\n\nInsofar as Eliezer doubts this, but does believe that this species could outperform a society of village idiots at scientific research, then he needs to explain why the village-idiot-to-Einstein gap is so significant in this context but not in others.\nHowever, this is a pretty weird thought experiment, and maybe doesn't add much to our existing intuitions about AIs. My main intention here is to point at how animal behaviour is really really well-adapted to physical environments, in a way which makes people wonder what it would be like to be really really well-adapted to intellectual environments.\n\n\nI claim that the difficulty of human-level oracle AGIs matching humans Consider an AI which has been trained only to answer questions, and is now human-level at doing so. I claim that the difficulty of this AI matching humans at a range of real-world tasks (without being specifically trained to do so) would be much closer to the difficulty of teaching chimps to do science, than the difficulty of teaching adult humans to do abstract reasoning about a new domain.\n\nThe analogy here is: chimps have reasonably general intelligence, but it's hard for them to apply it to science because they weren't trained to apply intelligence to that. Likewise, human-level oracle AGIs have general intelligence, but it'll be hard for them to apply it to influencing the world because they weren't trained to apply intelligence to that.\n\n\n\n\n\n\n\n[Barnes][8:21]  (Sep. 23 comment on earlier draft)\n\nvillage-idiot-to-Einstein gap\n\nI wonder to what extent you can model within-species intelligence differences partly just as something like hyperparameter search – if you have a billion humans with random variation in their neural/cognitive traits, the top human will be a lot better than average. Then you could say something like:\n\nhumans are the dumbest species you could have where the distribution of intelligence in each generation is sufficient for cultural accumulation\nthat by itself might not imply a big gap from chimps\nbut human society has much larger population, so the smartest individuals are much smarter\n\n\n\n\n\n[Ngo][9:05]  (Sep. 23 comment on earlier draft)\nI think Eliezer's response (which I'd agree with) would be that the cognitive difference between the best humans and normal humans is strongly constrained by the fact that we're all one species who can interbreed with each other. And so our cognitive variation can't be very big compared with inter-species variation (at the top end at least; although it could at the bottom end via things breaking).\n\n\n\n\n[Barnes][9:35]  (Sep. 
23 comment on earlier draft)\nI think that's not obviously true – it's definitely possible that there's a lot of random variation due to developmental variation etc. If that's the case then population size could create large within-species differences\n\n\n\n\n[Yudkowsky][11:46]  (Sep. 25 comment)\n\noracle AGIs\n\nRemind me of what this is?  Surely you don't just mean the AI that produces plans it doesn't implement itself, because that AI becomes an agent by adding an external switch that routes its outputs to a motor; it can hardly be much cognitively different from an agent.  Then what do you mean, \"oracle AGI\"?\n(People tend to produce shallow specs of what they mean by \"oracle\" that make no sense in my microfoundations, a la \"Just drive red cars but not blue cars!\", leading to my frequent reply, \"Sorry, still AGI-complete in terms of the machinery you have to build to do that.\")\n\n\n\n[Ngo][11:44]  (Sep. 29 comment)\nEdited to clarify what I meant in this context (and remove the word \"oracle\" altogether).\n\n\n\n\n[Yudkowsky][12:01]  (Sep. 29 comment)\nMy reply holds just as much to \"AIs that answer questions\"; what restricted question set do you imagine suffices to save the world without dangerously generalizing internal engines?\n\n\n\n[Barnes][8:15]  (Sep. 23 comment on earlier draft)\n\nThe analogy here is: chimps have reasonably general intelligence, but it's hard for them to apply it to science because they weren't trained to apply intelligence to that. Likewise, human-level oracle AGIs have general intelligence, but it'll be hard for them to apply it to influencing the world because they weren't trained to apply intelligence to that.\n\nthis is not intuitive to me; it seems pretty plausible that the subtasks of predicting the world and of influencing the world are much more similar than the subtasks of surviving in a chimp society are to the subtasks of doing science\n\n\n\n\n[Ngo][8:59]  (Sep. 23 comment on earlier draft)\nI think Eliezer's position is that all of these tasks are fairly similar if you have general intelligence. E.g. he argued that the difference between very good theorem-proving and influencing the world is significantly smaller than people expect. So even if you're right, I think his position is too strong for your claim to help him. (I expect him to say that I'm significantly overestimating the extent to which chimps are running general cognitive algorithms).\n\n\n\n\n[Barnes][9:33]  (Sep. 23 comment on earlier draft)\nI wasn't trying to defend his position, just disagreeing with you \n\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nMore specific details\nHere are three training regimes which I expect to contribute to AGI:\n\nSelf-supervised training – e.g. on internet text, code, books, videos, etc.\nTask-based RL – agents are rewarded (likely via human feedback, and some version of iterated amplification) for doing well on bounded tasks.\nOpen-ended RL – agents are rewarded for achieving long-term goals in rich environments.\n\n\n\n\n\n[Yudkowsky][11:56]  (Sep. 25 comment)\n\nbounded tasks\n\nThere's an interpretation of this I'd agree with, but all of the work is being carried by the boundedness of the tasks, little or none via the \"human feedback\" part which I shrug at, and none by the \"iterated amplification\" part since I consider that tech unlikely to exist before the world ends.\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nMost of my probability of catastrophe comes from AGIs trained primarily via open-ended RL. 
Although IA makes these scenarios less likely by making task-based RL more powerful, it doesn't seem to me that IA tackles the hardest case (of aligning agents trained via open-ended RL) head-on. But disaster from open-ended RL also seems a long way away – mainly because getting long-term real-world feedback is very slow, and I expect it to be hard to create sufficiently rich artificial environments. By that point I do expect the strategic landscape to be significantly different, because of the impact of task-based RL.\n\n\n\n\n[Yudkowsky][11:57]  (Sep. 25 comment)\n\na long way away\n\nOh, definitely, at the present rates of progress we've got years, plural.\nThe history of futurism says that even saying that tends to be unreliable in the general case (people keep saying it right up until the Big Thing actually happens) and also that it's rather a difficult form of knowledge to obtain more than a few years out.\n\n\n\n[Yudkowsky][12;01]  (Sep. 25 comment)\n\nhard to create sufficiently rich artificial environments\n\nDisagree; I don't think that making environments more difficult in a way that challenges the environment inside will prove to be a significant AI development bottleneck.  Making simulations easy enough for current AIs to do interesting things in them, but hard enough that the things they do are not completely trivial, takes some work relevant to current levels of AI intelligence.  I think that making those environments more tractably challenging for smarter AIs is not likely to be nearly a bottleneck in progress, compared to making the AIs smarter and able to solve the environment.  It's a one-way-hash, P-vs-NP style thing – not literally, just that general relationship between it taking a lower amount of effort to pose a problem such that solving it requires a higher amount of effort.\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nPerhaps the best way to pin down disagreements in our expectations about the effects of the strategic landscape is to identify some measures that could help to reduce AGI risk, and ask how seriously key decision-makers would need to take AGI risk for each measure to be plausible, and how powerful and competent they would need to be for that measure to make a significant difference. Actually, let's lump these metrics together into a measure of \"amount of competent power applied\". Some benchmarks, roughly in order (and focusing on the effort applied by the US):\n\nBanning chemical/biological weapons\nCOVID\n\nKey points: mRNA vaccines, lockdowns, mask mandates\n\n\nNuclear non-proliferation\n\nKey points: Nunn-Lugar Act, stuxnet, various treaties\n\n\nThe International Space Station\n\nCost to US: ~$75 billion\n\n\nClimate change\n\nUS expenditure: >$154 billion (but not very effectively)\n\n\nProject Apollo\n\nWikipedia says that Project Apollo \"was the largest commitment of resources ($156 billion in 2019 US dollars) ever made by any nation in peacetime. At its peak, the Apollo program employed 400,000 people and required the support of over 20,000 industrial firms and universities.\"\n\n\nWW1\nWW2\n\n\n\n\n\n[Yudkowsky][12:02]  (Sep. 25 comment)\n\nWW2\n\nThis level of effort starts to buy significant amounts of time.  This level will not be reached, nor approached, before the world ends.\n\n\n\n[Ngo]  (Sep. 
25 Google Doc)\nHere are some wild speculations (I just came up with this framework, and haven't thought about these claims very much):\n\nThe US and China preventing any other country from becoming a leader in AI requires about as much competent power as banning chemical/biological weapons.\nThe US and China enforcing a ban on AIs above a certain level of autonomy requires about as much competent power as the fight against climate change.\n\nIn this scenario, all the standard forces which make other types of technological development illegal have pushed towards making autonomous AGI illegal too.\n\n\nLaunching a good-faith joint US-China AGI project requires about as much competent power as launching Project Apollo.\n\nAccording to this article, Kennedy (and later Johnson) made several offers (some of which were public) of a joint US-USSR Moon mission, which Khrushchev reportedly came close to accepting. Of course this is a long way from actually doing a joint project (and it's not clear how reliable the source is), but it still surprised me a lot, given that I viewed the \"space race\" as basically a zero-sum prestige project. If your model predicted this, I'd be interested to hear why.\n\n\n\n\n\n\n\n[Yudkowsky][12:07]  (Sep. 25 comment)\n\nThe US and China preventing any other country from becoming a leader in AI requires about as much competent power as banning chemical/biological weapons.\n\nI believe this is wholly false.  On my model it requires closer to WW1 levels of effort.  I don't think you're going to get it without credible threats of military action leveled at previously allied countries.\nAI is easier and more profitable to build than chemical / biological weapons, and correspondingly harder to ban.  Existing GPU factories need to be shut down and existing GPU clusters need to be banned and no duplicate of them can be allowed to arise, across many profiting countries that were previously military allies of the United States, which – barring some vast shift in world popular and elite opinion against AI, which is also not going to happen – those countries would be extremely disinclined to sign, especially if the treaty terms permitted the USA and China to forge ahead.\nThe reason why chem weapons bans were much easier was that people did not like chem weapons.  They were awful.  There was a perceived common public interest in nobody having chem weapons.  It was understood popularly and by elites to be a Prisoner's Dilemma situation requiring enforcement to get to the Pareto optimum.  Nobody was profiting tons off the infrastructure that private parties could use to make chem weapons.\nAn AI ban is about as easy as banning advanced metal-forging techniques in current use so nobody can get ahead of the USA and China in making airplanes.  That would be HARD and likewise require credible threats of military action against former allies.\n\"AI ban is as easy as a chem weapons ban\" seems to me like politically crazy talk.  I'd expect a more politically habited person to confirm this.\n\n\n\n[Shulman][14:32]  (Sep. 25 comment)\nAI ban much, much harder than chemical weapons ban. Indeed chemical weapons were low military utility, that was central to the deal, and they have still been used subsequently.\n\nAn AI ban is about as easy as banning advanced metal-forging techniques in current use so nobody can get ahead of the USA and China in making airplanes. 
That would be HARD and likewise require credible threats of military action against former allies.\n\nIf large amounts of compute relative to today are needed (and presumably Eliezer rejects this), the fact that there is only a single global leading node chip supply chain makes it vastly easier than metal forging, which exists throughout the world and is vastly cheaper.\nSharing with allies (and at least embedding allies to monitor US compliance) also reduces the conflict side.\nOTOH, if compute requirements were super low then it gets a lot worse.\nAnd the biological weapons ban failed completely: the Soviets built an enormous bioweapons program, the largest ever, after agreeing to the ban, and the US couldn't even tell for sure they were doing so.\n\n\n\n\n[Yudkowsky][18:15]  (Oct. 4 comment)\nI've updated somewhat off of Carl Shulman's argument that there's only one chip supply chain which goes through eg a single manufacturer of lithography machines (ASML), which could maybe make a lock on AI chips possible with only WW1 levels of cooperation instead of WW2.\nThat said, I worry that, barring WW2 levels, this might not last very long if other countries started duplicating the supply chain, even if they had to go back one or two process nodes on the chips?  There's a difference between the proposition \"ASML has a lock on the lithography market right now\" and \"if aliens landed and seized ASML, Earth would forever after be unable to build another lithography plant\".  I mean, maybe that's just true because we lost technology and can't rebuild old bridges either, but it's at least less obvious.\nLaunching Tomahawk cruise missiles at any attempt anywhere to build a new ASML, is getting back into \"military threats against former military allies\" territory and hence what I termed WW2 levels of cooperation.\n\n\n\n[Shulman][18:30]  (Oct. 4 comment)\nChina has been trying for some time to build its own and has failed with tens of billions of dollars (but has captured some lagging node share), but would be substantially more likely to succeed with a trillion dollar investment. That said, it is hard to throw money at these things and the tons of tacit knowledge/culture/supply chain networks are tough to replicate. Also many ripoffs of the semiconductor subsidies have occurred. Getting more NASA/Boeing and less SpaceX is a plausible outcome even with huge investment.\nThey are trying to hire people away from the existing supply chain to take its expertise and building domestic skills with the lagging nodes.\n\n\n\n\n[Yudkowsky][19:14]  (Oct. 4 comment)\nDoes that same theory predict that if aliens land and grab some but not all of the current ASML personnel, Earth is thereby successfully taken hostage for years, because Earth has trouble rebuilding ASML, which had the irreproducible lineage of masters and apprentices dating back to the era of Lost Civilization?  Or would Earth be much better at this than China, on your model?\n\n\n\n[Shulman][19:31]  (Oct. 4 comment)\nI'll read that as including the many suppliers of ASML (one EUV machine has over 100,000 parts, many incredibly fancy or unique). It's just a matter of how many years it takes. 
I think Earth fails to rebuild that capacity in 2 years but succeeds in 10.\n\"A study this spring by Boston Consulting Group and the Semiconductor Industry Association estimated that creating a self-sufficient chip supply chain would take at least $1 trillion and sharply increase prices for chips and products made with them…The situation underscores the crucial role played by ASML, a once obscure company whose market value now exceeds $285 billion. It is \"the most important company you never heard of,\" said C.J. Muse, an analyst at Evercore ISI.\"\nhttps://www.nytimes.com/2021/07/04/technology/tech-cold-war-chips.html\n\n\n\n\n[Yudkowsky][19:59]  (Oct. 4 comment)\nNo in 2 years, yes in 10 years sounds reasonable to me for this hypothetical scenario, as far as I know in my limited knowledge.\n\n\n\n[Yudkowsky][12:10]  (Sep. 25 comment)\n\nLaunching a good-faith joint US-China AGI project requires about as much competent power as launching Project Apollo.\n\nIt's really weird, relative to my own model, that you put the item that the US and China can bilaterally decide to do all by themselves, without threats of military action against their former allies, as more difficult than the items that require conditions imposed on other developed countries that don't want them.\nPolitical coordination is hard.  No, seriously, it's hard.  It comes with a difficulty penalty that scales with the number of countries, how complete the buy-in has to be, and how much their elites and population don't want to do what you want them to do relative to how much elites and population agree that it needs doing (where this very rapidly goes to \"impossible\" or \"WW1/WW2\" as they don't particularly want to do your thing).\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nSo far I haven't talked about how much competent power I actually expect people to apply to AI governance. I don't think it's useful for Eliezer and me to debate this directly, since it's largely downstream from most of the other disagreements we've had. In particular, I model him as believing that there'll be very little competent power applied to prevent AI risk from governments and wider society, partly because he expects a faster takeoff than I do, and partly because he has a lower opinion of governmental competence than I do. But for the record, it seems likely to me that there'll be as much competent effort put into reducing AI risk by governments and wider society as there has been into fighting COVID; and plausibly (but not likely) as much as fighting climate change.\nOne key factor is my expectation that arguments about the importance of alignment will become much stronger as we discover more compelling examples of misalignment. I don't currently have strong opinions about how compelling the worst examples of misalignment before catastrophe are likely to be; but identifying and publicising them seems like a particularly effective form of advocacy, and one which we should prepare for in advance.\nThe predictable accumulation of easily-accessible evidence that AI risk is important is one example of a more general principle: that it's much easier to understand, publicise, and solve problems as those problems get closer and more concrete. 
This seems like a strong effect to me, and a key reason why so many predictions of doom throughout history have failed to come true, even when they seemed compelling at the time they were made.\nUpon reflection, however, I think that even taking this effect into account, the levels of competent power required for the interventions mentioned above are too high to justify the level of optimism about AI governance that I started our debate with. On the other hand, I found Eliezer's arguments about consequentialism less convincing than I expected. Overall I've updated that AI risk is higher than I previously believed; though I expect my views to be quite unsettled while I think more, and talk to more people, about specific governance interventions and scenarios.\n\n\n\n \n\nThe post Ngo's view on alignment difficulty appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Ngo’s view on alignment difficulty", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=2", "id": "fe27e93c795543170c4bd68484f20b72"} {"text": "Conversation on technology forecasting and gradualism\n\n\n \nThis post is a transcript of a multi-day discussion between Paul Christiano, Richard Ngo, Eliezer Yudkowsky, Rob Bensinger, Holden Karnofsky, Rohin Shah, Carl Shulman, Nate Soares, and Jaan Tallinn, following up on the Yudkowsky/Christiano debate in 1, 2, 3, and 4.\n \nColor key:\n\n\n\n\n Chat by Paul, Richard, and Eliezer \n Other chat \n\n\n\n\n \n12. Follow-ups to the Christiano/Yudkowsky conversation\n \n12.1. Bensinger and Shah on prototypes and technological forecasting\n \n\n[Bensinger][16:22] \nQuoth Paul:\n\nseems like you have to make the wright flyer much better before it's important, and that it becomes more like an industry as that happens, and that this is intimately related to why so few people were working on it\n\nIs this basically saying 'the Wright brothers didn't personally capture much value by inventing heavier-than-air flying machines, and this was foreseeable, which is why there wasn't a huge industry effort already underway to try to build such machines as fast as possible.' ?\nMy maybe-wrong model of Eliezer says here 'the Wright brothers knew a (Thielian) secret', while my maybe-wrong model of Paul instead says:\n\nThey didn't know a secret — it was obvious to tons of people that you could do something sorta like what the Wright brothers did and thereby invent airplanes; the Wright brothers just had unusual non-monetary goals that made them passionate to do a thing most people didn't care about.\nOr maybe it's better to say: they knew some specific secrets about physics/engineering, but only because other people correctly saw 'there are secrets to be found here, but they're stamp-collecting secrets of little economic value to me, so I won't bother to learn the secrets'. 
~Everyone knows where the treasure is located, and ~everyone knows the treasure won't make you rich.\n\n\n\n\n\n[Yudkowsky][17:24]\nMy model of Paul says there could be a secret, but only because the industry was tiny and the invention was nearly worthless directly.\n\n\n\n\n[Cotra: ]\n\n\n\n\n\n\n\n[Christiano][17:53]\nI mean, I think they knew a bit of stuff, but it generally takes a lot of stuff to make something valuable, and the more people have been looking around in an area the more confident you can be that it's going to take a lot of stuff to do much better, and it starts to look like an extremely strong regularity for big industries like ML or semiconductors\nit's pretty rare to find small ideas that don't take a bunch of work to have big impacts\nI don't know exactly what a thielian secret is (haven't read the reference and just have a vibe)\nstraightening it out a bit, I have 2 beliefs that combine disjunctively: (i) generally it takes a lot of work to do stuff, as a strong empirical fact about technology, (ii) generally if the returns are bigger there are more people working on it, as a slightly-less-strong fact about sociology\n\n\n\n\n[Bensinger][18:09]\nsecrets = important undiscovered information (or information that's been discovered but isn't widely known), that you can use to get an edge in something. https://www.lesswrong.com/posts/ReB7yoF22GuerNfhH/thiel-on-secrets-and-indefiniteness\nThere seems to be a Paul/Eliezer disagreement about how common these are in general. And maybe a disagreement about how much more efficiently humanity discovers and propagates secrets as you scale up the secret's value?\n\n\n\n\n[Yudkowsky][18:35]\nMany times it has taken much work to do stuff; there's further key assertions here about \"It takes $100 billion\" and \"Multiple parties will invest $10B first\" and \"$10B gets you a lot of benefit first because scaling is smooth and without really large thresholds\".\nEliezer is like \"ah, yes, sometimes it takes 20 or even 200 people to do stuff, but core researchers often don't scale well past 50, and there aren't always predecessors that could do a bunch of the same stuff\" even though Eliezer agrees with \"it often takes a lot of work to do stuff\". More premises are needed for the conclusion, that one alone does not distinguish Eliezer and Paul by enough.\n\n\n\n[Bensinger][20:03]\nMy guess is that everyone agrees with claims 1, 2, and 3 here (please let me know if I'm wrong!):\n1. The history of humanity looks less like Long Series of Cheat Codes World, and more like Well-Designed Game World.\nIn Long Series of Cheat Codes World, human history looks like this, over and over: Some guy found a cheat code that totally outclasses everyone else and makes him God or Emperor, until everyone else starts using the cheat code too (if the Emperor allows it). After which things are maybe normal for another 50 years, until a new Cheat Code arises that makes its first adopters invincible gods relative to the previous tech generation, and then the cycle repeats.\nIn Well-Designed Game World, you can sometimes eke out a small advantage, and the balance isn't perfect, but it's pretty good and the leveling-up tends to be gradual. A level 100 character totally outclasses a level 1 character, and some level transitions are a bigger deal than others, but there's no level that makes you a god relative to the people one level below you.\n2. General intelligence took over the world once. 
Someone who updated on that fact but otherwise hasn't thought much about the topic should not consider it 'bonkers' that machine general intelligence could take over the world too, even though they should still consider it 'bonkers' that eg a coffee startup could take over the world.\n(Because beverages have never taken over the world before, whereas general intelligence has; and because our inside-view models of coffee and of general intelligence make it a lot harder to imagine plausible mechanisms by which coffee could make someone emperor, kill all humans, etc., compared to general intelligence.)\n(In the game analogy, the situation is a bit like 'I've never found a crazy cheat code or exploit in this game, but I haven't ruled out that there is one, and I heard of a character once who did a lot of crazy stuff that's at least suggestive that she might have had a cheat code.')\n3. AGI is arising in a world where agents with science and civilization already exist, whereas humans didn't arise in such a world. This is one reason to think AGI might not take over the world, but it's not a strong enough consideration on its own to make the scenario 'bonkers' (because AGIs are likely to differ from humans in many respects, and it wouldn't obviously be bonkers if the first AGIs turned out to be qualitatively way smarter, cheaper to run, etc.).\n—\nIf folks agree with the above, then I'm confused about how one updates from the above epistemic state to 'bonkers'.\nIt was to a large extent physics facts that determined how easy it was to understand the feasibility of nukes without (say) decades of very niche specialized study. Likewise, it was physics facts that determined you need rare materials, many scientists, and a large engineering+infrastructure project to build a nuke. In a world where the physics of nukes resulted in it being some PhD's quiet 'nobody thinks this will work' project like Andrew Wiles secretly working on a proof of Fermat's Last Theorem for seven years, that would have happened.\nIf an alien came to me in 1800 and told me that totally new physics would let future humans build city-destroying superbombs, then I don't see why I should have considered it bonkers that it might be lone mad scientists rather than nations who built the first superbomb. 
The 'lone mad scientist' scenario sounds more conjunctive to me (assumes the mad scientist knows something that isn't widely known, AND has the ability to act on that knowledge without tons of resources), so I guess it should have gotten less probability, but maybe not dramatically less?\n'Mad scientist builds city-destroying weapon in basement' sounds wild to me, but I feel like almost all of the actual unlikeliness comes from the 'city-destroying weapons exist at all' part, and then the other parts only moderately lower the probability.\nLikewise, I feel like the prima-facie craziness of basement AGI mostly comes from 'generally intelligence is a crazy thing, it's wild that anything could be that high-impact', and a much smaller amount comes from 'it's wild that something important could happen in some person's basement'.\n—\nIt does structurally make sense to me that Paul might know things I don't about GPT-3 and/or humans that make it obvious to him that we roughly know the roadmap to AGI and it's this.\nIf the entire 'it's bonkers that some niche part of ML could crack open AGI in 2026 and reveal that GPT-3 (and the mainstream-in-2026 stuff) was on a very different part of the tech tree' view is coming from a detailed inside-view model of intelligence like this, then that immediately ends my confusion about the argument structure.\nI don't understand why you think you have the roadmap, and given a high-confidence roadmap I'm guessing I'd still put more probability than you on someone finding a very different, shorter path that works too. But the argument structure \"roadmap therefore bonkers\" makes sense to me.\nIf there are meant to be other arguments against 'high-impact AGI via niche ideas/techniques' that are strong enough to make it bonkers, then I remain confused about the argument structure and how it can carry that much weight.\nI can imagine an inside-view model of human cognition, GPT-3 cognition, etc. that tells you 'AGI coming from nowhere in 3 years is bonkers'; I can't imagine an ML-is-a-reasonably-efficient-market argument that does the same, because even a perfectly efficient market isn't omniscient and can still be surprised by undiscovered physics facts that tell you 'nukes are relatively easy to build' and 'the fastest path to nukes is relatively hard to figure out'.\n(Caveat: I'm using the 'basement nukes' and 'Fermat's last theorem' analogy because it helps clarify the principles involved, not because I think AGI will be that extreme on the spectrum.)\n\n\n\n\n[Yudkowsky: +1]\n\n\n\n\nOh, I also wouldn't be confused by a view like \"I think it's 25% likely we'll see a more Eliezer-ish world. But it sounds like Eliezer is, like, 90% confident that will happen, and that level of confidence (and/or the weak reasoning he's provided for that confidence) seems bonkers to me.\"\nThe thing I'd be confused by is e.g. \"ML is efficient-ish, therefore the out-of-the-blue-AGI scenario itself is bonkers and gets, like, 5% probability.\"\n\n\n\n\n\n[Shah][1:58]\n(I'm unclear on whether this is acceptable for this channel, please let me know if not)\n\nI can't imagine an ML-is-a-reasonably-efficient-market argument that does the same, because even a perfectly efficient market isn't omniscient and can still be surprised by undiscovered physics facts\n\nI think this seems right as a first pass.\nSuppose we then make the empirical observation that in tons and tons of other fields, it is extremely rare that people discover new facts that lead to immediate impact. 
(Set aside for now whether or not that's true; assume that it is.) Two ways you could react to this:\n1. Different fields are different fields. It's not like there's a common generative process that outputs a distribution of facts and how hard they are to find that is common across fields. Since there's no common generative process, facts about field X shouldn't be expected to transfer to make predictions about field Y.\n2. There's some latent reason, that we don't currently know, that makes it so that it is rare for newly discovered facts to lead to immediate impact.\nIt seems like you're saying that (2) is not a reasonable reaction (i.e. \"not a valid argument structure\"), and I don't know why. There are lots of things we don't know, is it really so bad to posit one more?\n(Once we agree on the argument structure, we should then talk about e.g. reasons why such a latent reason can't exist, or possible guesses as to what the latent reason is, etc, but fundamentally I feel generally okay with starting out with \"there's probably some reason for this empirical observation, and absent additional information, I should expect that reason to continue to hold\".)\n\n\n\n\n[Bensinger][3:15]\nI think 2 is a valid argument structure, but I didn't mention it because I'd be surprised if it had enough evidential weight (in this case) to produce an 'update to bonkers'. I'd love to hear more about this if anyone thinks I'm under-weighting this factor. (Or any others I left out!)\n\n\n\n\n[Shah][23:57]\nIdk if it gets all the way to \"bonkers\", but (2) seems pretty strong to me, and is how I would interpret Paul-style arguments on timelines/takeoff if I were taking on what-I-believe-to-be your framework\n\n\n\n\n[Bensinger][11:06]\nWell, I'd love to hear more about that!\nAnother way of getting at my intuition: I feel like a view that assigns very small probability to 'suddenly vastly superhuman AI, because something that high-impact hasn't happened before'\n(which still seems weird to me, because physics doesn't know what 'impact' is and I don't see what physical mechanism could forbid it that strongly and generally, short of simulation hypotheses)\n… would also assign very small probability in 1800 to 'given an alien prediction that totally new physics will let us build superbombs at least powerful enough to level cities, the superbomb in question will ignite the atmosphere or otherwise destroy the Earth'.\nBut this seems flatly wrong to me — if you buy that the bomb works by a totally different mechanism (and exploits a different physics regime) than eg gunpowder, then the output of the bomb is a physics question, and I don't see how we can concentrate our probability mass much without probing the relevant physics. The history of boat and building sizes is a negligible input to 'given a totally new kind of bomb that suddenly lets us (at least) destroy cities, what is the total destructive power of the bomb?'.\n\n\n\n\n[Yudkowsky: +1]\n\n\n\n\n(Obviously the bomb didn't destroy the Earth, and I wouldn't be surprised if there's some Bayesian evidence or method-for-picking-a-prior that could have validly helped you suspect as much in 1800? 
But it would be a suspicion, not a confident claim.)\n\n\n\n\n[Shah][1:45]\n\nwould also assign very small probability in 1800 to 'given an alien prediction that totally new physics will let us build superbombs at least powerful enough to level cities, the superbomb in question will ignite the atmosphere or otherwise destroy the Earth'\n\n(As phrased you also have to take into account the question of whether humans would deploy the resulting superbomb, but I'll ignore that effect for now.)\nI think this isn't exactly right. The \"totally new physics\" part seems important to update on.\nLet's suppose that, in the reference class we built of boat and building sizes, empirically nukes were the 1 technology out of 20 that had property X. (Maybe X is something like \"discontinuous jump in things humans care about\" or \"immediate large impact on the world\" or so on.) Then, I think in 1800 you assign ~5% to 'the first superbomb at least powerful enough to level cities will ignite the atmosphere or otherwise destroy the Earth'.\nOnce you know more details about how the bomb works, you should be able to update away from 5%. Specifically, \"entirely new physics\" is an important detail that causes you to update away from 5%. I wouldn't go as far as you in throwing out reference classes entirely at that point — there can still be unknown latent factors that apply at the level of physics — but I agree reference classes look harder to use in this case.\nWith AI, I start from ~5% and then I don't really see any particular detail for AI that I think I should strongly update on. My impression is that Eliezer thinks that \"general intelligence\" is a qualitatively different sort of thing than that-which-neural-nets-are-doing, and maybe that's what's analogous to \"entirely new physics\". I'm pretty unconvinced of this, but something in this genre feels quite crux-y for me.\nActually, I think I've lost the point of this analogy. What's the claim for AI that's analogous to\n\n'given an alien prediction that totally new physics will let us build superbombs at least powerful enough to level cities, the superbomb in question will ignite the atmosphere or otherwise destroy the Earth'\n\n?\nLike, it seems like this is saying \"We figure out how to build a new technology that does X. What's the chance it has side effect Y?\" Where X and Y are basically unrelated.\nI was previously interpreting the argument as \"if we know there's a new superbomb based on totally new physics, and we know that the first such superbomb is at least capable of leveling cities, what's the probability it would have enough destructive force to also destroy the world\", but upon rereading that doesn't actually seem to be what you were gesturing at.\n\n\n\n\n[Bensinger][3:08]\nI'm basically responding to this thing Ajeya wrote:\n\nI think Paul's view would say:\n\nThings certainly happen for the first time\nWhen they do, they happen at small scale in shitty prototypes, like the Wright Flyer or GPT-1 or AlphaGo or the Atari bots or whatever\nWhen they're making a big impact on the world, it's after a lot of investment and research, like commercial aircrafts in the decades after Kitty Hawk or like the investments people are in the middle of making now with AI that can assist with coding\n\n\nTo which my reply is: I agree that the first AGI systems will be shitty compared to later AGI systems. 
But Ajeya's Paul-argument seems to additionally require that AGI systems be relatively unimpressive at cognition compared to preceding AI systems that weren't AGI.\nIf this is because of some general law that things are shitty / low-impact when they \"happen for the first time\", then I don't understand what physical mechanism could produce such a general law that holds with such force.\nAs I see it, physics 'doesn't care' about human conceptions of impactfulness, and will instead produce AGI prototypes, aircraft prototypes, and nuke prototypes that have as much impact as is implied by the detailed case-specific workings of general intelligence, flight, and nuclear chain reactions respectively.\nWe could frame the analogy as:\n\n'If there's a year where AI goes from being unable to do competitive par-human reasoning in the hard sciences, to being able to do such reasoning, we should estimate the impact of the first such systems by drawing on our beliefs about par-human scientific reasoning itself.'\nLikewise: 'If there's a year where explosives go from being unable to destroy cities to being able to destroy cities, we should estimate the impact of the first such explosives by drawing on our beliefs about how (current or future) physics might allow a city to be destroyed, and what other effects or side-effects such a process might have. We should spend little or no time thinking about the impactfulness of the first steam engine or the first telescope.'\n\n\n\n\n\n[Shah][3:14]\nSeems like your argument is something like \"when there's a zero-to-one transition, then you have to make predictions based on reasoning about the technology itself\". I think in that case I'd say this thing from above:\n\nMy impression is that Eliezer thinks that \"general intelligence\" is a qualitatively different sort of thing than that-which-neural-nets-are-doing, and maybe that's what's analogous to \"entirely new physics\". I'm pretty unconvinced of this, but something in this genre feels quite crux-y for me.\n\n(Like, you wouldn't a priori expect anything special to happen once conventional bombs become big enough to demolish a football stadium for the first time. It's because nukes are based on \"totally new physics\" that you might expect unprecedented new impacts from nukes. What's the analogous thing for AGI? Why isn't AGI just regular AI but scaled up in a way that's pretty continuous?)\nI'm curious if you'd change your mind if you were convinced that AGI is just regular AI scaled up, with no qualitatively new methods — I expect you wouldn't but idk why\n\n\n\n\n[Bensinger][4:03]\nIn my own head, the way I think of 'AGI' is basically: \"Something happened that allows humans to do biochemistry, materials science, particle physics, etc., even though none of those things were present in our environment of evolutionary adaptedness. Eventually, AI will similarly be able to generalize to biochemistry, materials science, particle physics, etc. 
We can call that kind of AI 'AGI'.\"\nThere might be facts I'm unaware of that justify conclusions like 'AGI is mostly just a bigger version of current ML systems like GPT-3', and there might be facts that justify conclusions like 'AGI will be preceded by a long chain of predecessors, each slightly less general and slightly less capable than its successor'.\nBut if so, I'm assuming those will be facts about CS, human cognition, etc., not at all a list of a hundred facts like 'the first steam engine didn't take over the world', 'the first telescope didn't take over the world'…. Because the physics of brains doesn't care about those things, and because in discussing brains we're already in 'things that have been known to take over the world' territory.\n(I think that paying much attention at all to the technology-wide base rate for 'does this allow you to take over the world?', once you already know you're doing something like 'inventing a new human', doesn't really make sense at all? It sounds to me like going to a bookstore and then repeatedly worrying 'What if they don't have the book I'm looking for? Most stores don't sell books at all, so this one might not have the one I want.' If you know it's a book store, then you shouldn't be thinking at that level of generality at all; the base rate just goes out the window.)\n\n\n\n\n[Yudkowsky:] +1\n\n\n\n\nMy way of thinking about AGI is pretty different from saying AGI follows 'totally new mystery physics' — I'm explicitly anchoring to a known phenomenon, humans.\nThe analogous thing for nukes might be 'we're going to build a bomb that uses processes kind of like the ones found in the Sun in order to produce enough energy to destroy (at least) a city'.\n\n\n\n\n[Shah][0:44]\n\nThe analogous thing for nukes might be 'we're going to build a bomb that uses processes kind of like the ones found in the Sun in order to produce enough energy to destroy (at least) a city'.\n\n(And I assume the contentious claim is \"that bomb would then ignite the atmosphere, destroy the world, or otherwise have hugely more impact than just destroying a city\".)\nIn 1800, we say \"well, we'll probably just make existing fires / bombs bigger and bigger until they can destroy a city, so we shouldn't expect anything particularly novel or crazy to happen\", and assign (say) 5% to the claim.\nThere is a wrinkle: you said it was processes like the ones found in the Sun. Idk what the state of knowledge was like in 1800, but maybe they knew that the Sun couldn't be a conventional fire. If so, then they could update to a higher probability.\n(You could also infer that since someone bothered to mention \"processes like the ones found in the Sun\", those processes must be ones we don't know yet, which also allows you to make that update. I'm going to ignore that effect, but I'll note that this is one way in which the phrasing of the claim is incorrectly pushing you in the direction of \"assign higher probability\", and I think a similar thing happens for AI when saying \"processes like those in the human brain\".)\nWith AI I don't see why the human brain is a different kind of thing than (say) convnets. So I feel more inclined to just take the starting prior of 5%.\nPresumably you think that assigning 5% to the nukes claim in 1800 was incorrect, even if that perspective doesn't know that the Sun is not just a very big conventional fire. I'm not sure why this is. 
According to me this is just the natural thing to do because things are usually continuous and so in the absence of detailed knowledge that's what your prior should be. (If I had to justify this, I'd point to facts about bridges and buildings and materials science and so on.)\n\nthere might be facts that justify conclusions like 'AGI will be preceded by a long chain of slightly-less-general, slightly-less-capable predecessors'.\n\nThe frame of \"justify[ing] conclusions\" seems to ask for more confidence than I expect to get. Rather I feel like I'm setting an initial prior that could then be changed radically by engaging with details of the technology. And then I'm further saying that I don't see any particular details that should cause me to update away significantly (but they could arise in the future).\nFor example, suppose I have a random sentence generator, and I take the first well-formed claim it spits out. (I'm using a random sentence generator so that we don't update on the process by which the claim was generated.) This claim turns out to be \"Alice has a fake skeleton hidden inside her home\". Let's say we know nothing about Alice except that she is a real person somewhere in the US who has a home. You can still assign < 10% probability to the claim, and take 10:1 bets with people who don't know any additional details about Alice. Nonetheless, as you learn more about Alice, you could update towards higher probability, e.g. if you learn that she loves Halloween, that's a modest update; if you learn she runs a haunted house at Halloween every year, that's a large update; if you go to her house and see the fake skeleton you can update to ~100%. That's the sort of situation I feel like we're in with AI.\nIf you asked me what facts justify the conclusion that Alice probably doesn't have a fake skeleton hidden inside her house, I could only point to reference classes, and all the other people I've met who don't have such skeletons. This is not engaging with the details of Alice's situation, and I could similarly say \"if I wanted to know about Alice, surely I should spend most of my time learning about Alice, rather than looking at what Bob and Carol did\". Nonetheless, it is still correct to assign < 10% to the claim.\nIt really does seem to come down to — why is human-level intelligence such a special turning point that should receive special treatment? Just as you wouldn't give special treatment to \"the first time bridges were longer than 10m\", it doesn't seem obvious that there's anything all that special at the point where AIs reach human-level intelligence (at least for the topics we're discussing; there are obvious reasons that's an important point when talking about the economic impact of AI)\n\n\n\n\n[Tallinn][7:04]\nFWIW, my current 1-paragraph compression of the debate positions is something like:\ncatastrophists: when evolution was gradually improving hominid brains, suddenly something clicked – it stumbled upon the core of general reasoning – and hominids went from banana classifiers to spaceship builders. hence we should expect a similar (but much sharper, given the process speeds) discontinuity with AI.\ngradualists: no, there was no discontinuity with hominids per se; human brains merely reached a threshold that enabled cultural accumulation (and in a meaningful sense it was culture that built those spaceships).
similarly, we should not expect sudden discontinuities with AI per se, just accelerating (and possibly unfavorable to humans) cultural changes as human contributions will be automated away.\n—\none possible crux to explore is \"how thick is culture\": is it something that AGI will quickly decouple from (dropping directly to physics-based ontology instead) OR will culture remain AGI's main environment/ontology for at least a decade.\n\n\n\n\n[Ngo][11:18]\n\nFWIW, my current 1-paragraph compression of the debate positions is something like:\ncatastrophists: when evolution was gradually improving hominid brains, suddenly something clicked – it stumbled upon the core of general reasoning – and hominids went from banana classifiers to spaceship builders. hence we should expect a similar (but much sharper, given the process speeds) discontinuity with AI.\ngradualists: no, there was no discontinuity with hominids per se; human brains merely reached a threshold that enabled cultural accumulation (and in a meaningful sense it was culture that built those spaceships). similarly, we should not expect sudden discontinuities with AI per se, just accelerating (and possibly unfavorable to humans) cultural changes as human contributions will be automated away.\n—\none possible crux to explore is \"how thick is culture\": is it something that AGI will quickly decouple from (dropping directly to physics-based ontology instead) OR will culture remain AGI's main environment/ontology for at least a decade.\n\nClarification: in the sentence \"just accelerating (and possibly unfavorable to humans) cultural changes as human contributions will be automated away\", what work is \"cultural changes\" doing? Could we just say \"changes\" (including economic, cultural, etc) instead?\n\nIn my own head, the way I think of 'AGI' is basically: \"Something happened that allows humans to do biochemistry, materials science, particle physics, etc., even though none of those things were present in our environment of evolutionary adaptedness. Eventually, AI will similarly be able to generalize to biochemistry, materials science, particle physics, etc. We can call that kind of AI 'AGI'.\"\nThere might be facts I'm unaware of that justify conclusions like 'AGI is mostly just a bigger version of current ML systems like GPT-3', and there might be facts that justify conclusions like 'AGI will be preceded by a long chain of predecessors, each slightly less general and slightly less capable than its successor'.\nBut if so, I'm assuming those will be facts about CS, human cognition, etc., not at all a list of a hundred facts like 'the first steam engine didn't take over the world', 'the first telescope didn't take over the world'…. Because the physics of brains doesn't care about those things, and because in discussing brains we're already in 'things that have been known to take over the world' territory.\n(I think that paying much attention at all to the technology-wide base rate for 'does this allow you to take over the world?', once you already know you're doing something like 'inventing a new human', doesn't really make sense at all? It sounds to me like going to a bookstore and then repeatedly worrying 'What if they don't have the book I'm looking for? Most stores don't sell books at all, so this one might not have the one I want.'
If you know it's a book store, then you shouldn't be thinking at that level of generality at all; the base rate just goes out the window.)\n\nI'm broadly sympathetic to the idea that claims about AI cognition should be weighted more highly than claims about historical examples. But I think you're underrating historical examples. There are at least three ways those examples can be informative – by telling us about:\n1. Domain similarities\n2. Human effort and insight\n3. Human predictive biases\nYou're mainly arguing against 1, by saying that there are facts about physics, and facts about intelligence, and they're not very related to each other. This argument is fairly compelling to me (although it still seems plausible that there are deep similarities which we don't understand yet – e.g. the laws of statistics, which apply to many different domains).\nBut historical examples can also tell us about #2 – for instance, by giving evidence that great leaps of insight are rare, and so if there exists a path to AGI which doesn't require great leaps of insight, that path is more likely than one which does.\nAnd they can also tell us about #3 – for instance, by giving evidence that we usually overestimate the differences between old and new technologies, and so therefore those same biases might be relevant to our expectations about AGI.\n\n\n\n\n[Bensinger][12:31]\nIn the 'alien warns about nukes' example, my intuition is that 'great leaps of insight are rare' and 'a random person is likely to overestimate the importance of the first steam engines and telescopes' tell me practically nothing, compared to what even a small amount of high-uncertainty physics reasoning tells me.\nThe 'great leap of insight' part tells me ~nothing because even if there's an easy low-insight path to nukes and a hard high-insight path, I don't thereby know the explosive yield of a bomb on either path (either absolutely or relatively); it depends on how nukes work.\nLikewise, I don't think 'a random person is likely to overestimate the first steam engine' really helps with estimating the power of nuclear explosions. I could imagine a world where this bias exists and is so powerful and inescapable it ends up being a big weight on the scales, but I don't think we live in that world?\nI'm not even sure that a random person would overestimate the importance of prototypes in general. Probably, I guess? But my intuition is still that you're better off in 1800 focusing on physics calculations rather than the tug-of-war 'maybe X is cognitively biasing me in this way, no wait maybe Y is cognitively biasing me in this other way, no wait…'\nOur situation might not be analogous to the 1800-nukes scenario (e.g., maybe we know by observation that current ML systems are basically scaled-down humans). But if it is analogous, then I think the history-of-technology argument is not very useful here.\n\n\n\n\n[Tallinn][13:00]\nre \"cultural changes\": yeah, sorry, i meant \"culture\" in very general \"substrate of human society\" sense. 
\"cultural changes\" would then include things like changes in power structures and division of labour, but not things like \"diamondoid bacteria killing all humans in 1 second\" (that would be a change in humans, not in the culture)\n\n\n\n\n[Shah][13:09]\nI want to note that I agree with your (Rob's) latest response, but I continue to think most of the action is in whether AGI involves something analogous to \"totally new physics\", where I would guess \"no\" (and would do so particularly strongly for shorter timelines).\n(And I would still point to historical examples for \"many new technologies don't involve something analogous to 'totally new physics'\", and I'll note that Richard's #2 about human effort and insight still applies)\n\n\n\n \n12.2. Yudkowsky on Steve Jobs and gradualism\n \n\n[Yudkowsky][15:26]\nSo recently I was talking with various people about the question of why, for example, Steve Jobs could not find somebody else with UI taste 90% as good as his own, to take over Apple, even while being able to pay infinite money. A successful founder I was talking to was like, \"Yep, I sure would pay $100 million to hire somebody who could do 80% of what I can do, in fact, people have earned more than that for doing less.\"\nI wondered if OpenPhil was an exception to this rule, and people with more contact with OpenPhil seemed to think that OpenPhil did not have 80% of a Holden Karnofsky (besides Holden).\nAnd of course, what sparked this whole thought process in me, was that I'd staked all the effort I put into the Less Wrong sequences, into the belief that if I'd managed to bring myself into existence, then there ought to be lots of young near-Eliezers in Earth's personspace including some with more math talent or physical stamina not so unusually low, who could be started down the path to being Eliezer by being given a much larger dose of concentrated hints than I got, starting off the compounding cascade of skill formations that I saw as having been responsible for producing me, \"on purpose instead of by accident\".\nI see my gambit as having largely failed, just like the successful founder couldn't pay $100 million to find somebody 80% similar in capabilities to himself, and just like Steve Jobs could not find anyone to take over Apple for presumably much larger amounts of money and status and power. Nick Beckstead had some interesting stories about various ways that Steve Jobs had tried to locate successors (which I wasn't even aware of).\nI see a plausible generalization as being a \"Sparse World Hypothesis\": The shadow of an Earth with eight billion people, projected into some dimensions, is much sparser than plausible arguments might lead you to believe. Interesting people have few neighbors, even when their properties are collapsed and projected onto lower-dimensional tests of output production. The process of forming an interesting person passes through enough 0-1 critical thresholds that all have to be passed simultaneously in order to start a process of gaining compound interest in various skills, that they then cannot find other people who are 80% as good as what they do (never mind being 80% similar to them as people).\nI would expect human beings to start out much denser in a space of origins than AI projects, and for the thresholds and compounding cascades of our mental lives to be much less sharp than chimpanzee-human gaps.\nGradualism about humans sure sounds totally reasonable! 
It is in fact much more plausible-sounding a priori than the corresponding proposition about AI projects! I staked years of my own life on the incredibly reasonable-sounding theory that if one actual Eliezer existed then there should be lots of neighbors near myself that I could catalyze into existence by removing some of the accidental steps from the process that had accidentally produced me.\nBut it didn't work in real life because plausible-sounding gradualist arguments just… plain don't work in real life even though they sure sound plausible. I spent a lot of time arguing with Robin Hanson, who was more gradualist than I was, and was taken by surprise when reality itself was much less gradualist than I was.\nMy model has Paul or Carl coming back with some story about how, why, no, it is totally reasonable that Steve Jobs couldn't find a human who was 90% as good at a problem class as Steve Jobs to take over Apple for billions of dollars despite looking, and, why, no, this is not at all a falsified retroprediction of the same gradualist reasoning that says a leading AI project should be inside a dense space of AI projects that projects onto a dense space of capabilities such that it has near neighbors.\nIf so, I was not able to use this hypothetical model of selective gradualist reasoning to deduce in advance that replacements for myself would be sparse in the same sort of space and I'd end up unable to replace myself.\nI do not really believe that, without benefits of hindsight, the advance predictions of gradualism would differ between the two cases.\nI think if you don't peek at the answer book in advance, the same sort of person who finds it totally reasonable to expect successful AI projects to have close lesser earlier neighbors, would also find it totally reasonable to think that Steve Jobs definitely ought to be able to find somebody 90% as good to take over his job – and should actually be able to find somebody much better because Jobs gets to run a wider search and offer more incentive than when Jobs was wandering into early involvement in Apple.\nIt's completely reasonable-sounding! Totally plausible to a human ear! Reality disagrees. Jobs tried to find a successor, couldn't, and now the largest company in the world by market cap seems no longer capable of sending the iPhones back to the designers and asking them to do something important differently.\nThis is part of the story for why I put gradualism into a mental class of \"arguments that sound plausible and just fail in real life to be binding on reality; reality says 'so what' and goes off to do something else\".\n\n\n\n[Christiano][17:46]  (Sep. 28)\nIt feels to me like a common pattern is: I say that ML in particular, and most technologies in general, seem to improve quite gradually on metrics that people care about or track. You say that some kind of \"gradualism\" worldview predicts a bunch of other stuff (some claim about markets or about steve jobs or whatever that feels closely related on your view but not mine).
But it feels to me like there are just a ton of technologies, and a ton of AI benchmarks, and those are just much more analogous to \"future AI progress.\" I know that to you this feels like reference class tennis, but I think I legitimately don't understand what kind of approach to forecasting you are using that lets you just make (what I see as) the obvious boring prediction about all of the non-AGI technologies.\nPerhaps you are saying that symmetrically you don't understand what approach to forecasting I'm using, that would lead me to predict that technologies improve gradually yet people vary greatly in their abilities. To me it feels like the simplest thing in the world: I expect future technological progress in domain X to be like past progress in domain X, and future technological progress to be like past technological progress, and future market moves to be like past market moves, and future elections to be like past elections.\nAnd it seems like you must be doing something that ends up making almost the same predictions as that almost all the time, which is why you don't get incredibly surprised every single year by continuing boring and unsurprising progress in batteries or solar panels or robots or ML or computers or microscopes or whatever. Like it's fine if you say \"Yes, those areas have trend breaks sometimes\" but there are so many boring years that you must somehow be doing something like having the baseline \"this year is probably going to be boring.\"\nSuch that intuitively it feels to me like the disagreement between us must be in the part where AGI feels to me like it is similar to AI-to-date and feels to you like it is very different and better compared to evolution of life or humans.\nIt has to be the kind of argument that you can make about progress-of-AI-on-metrics-people-care-about, but not progress-of-other-technologies-on-metrics-people-care-about, otherwise it seems like you are getting hammered every boring year for every boring technology.\nI'm glad we have the disagreement on record where I expect ML progress to continue to get less jumpy as the field grows, and maybe the thing to do is just poke more at that since it is definitely a place where I gut level expect to win bayes points and so could legitimately change my mind on the \"which kinds of epistemic practices work better?\" question. But it feels like it's not the main action, the main action has got to be about you thinking that there is a really impactful change somewhere between {modern AI, lower animals} and {AGI, humans} that doesn't look like ongoing progress in AI.\nI think \"would GPT-3 + 5 person-years of engineering effort foom?\" feels closer to core to me.\n(That said, the way AI could be different need not feel like \"progress is lumpier,\" could totally be more like \"Progress is always kind of lumpy, which Paul calls 'pretty smooth' and Eliezer calls 'pretty lumpy' and doesn't lead to any disagreements; but Eliezer thinks AGI is different in that kind-of-lumpy progress leads to fast takeoff, while Paul thinks it just leads to kind-of-lumpy increases in the metrics people care about or track.\")\n\n\n\n\n[Yudkowsky][7:46]  (Sep. 29)\n\nI think \"would GPT-3 + 5 person-years of engineering effort foom?\" feels closer to core to me.\n\nI truly and legitimately cannot tell which side of this you think we should respectively be on. 
My guess is you're against GPT-3 fooming because it's too low-effort and a short timeline, even though I'm the one who thinks GPT-3 isn't on a smooth continuum with AGI??\nWith that said, the rest of this feels on-target to me; I sure do feel like {natural selection, humans, AGI} form an obvious set with each other, though even there the internal differences are too vast and the data too scarce for legit outside viewing.\n\nI truly and legitimately cannot tell which side of this you think we should respectively be on. My guess is you're against GPT-3 fooming because it's too low-effort and a short timeline, even though I'm the one who thinks GPT-3 isn't on a smooth continuum with AGI??\n\nI mean I obviously think you can foom starting from an empty Python file with 5 person-years of effort if you've got the Textbook From The Future; you wouldn't use the GPT code or model for anything in that, the Textbook says to throw it out and start over.\n\n\n\n[Christiano][9:45]  (Sep. 29)\nI think GPT-3 will foom given very little engineering effort, it will just be much slower than the human foom\nand then that timeline will get faster and faster over time\nit's also fair to say that it wouldn't foom because the computers would break before it figured out how to repair them (and it would run out of metal before it figured out how to mine it, etc.), depending on exactly how you define \"foom,\" but the point is that \"you can repair the computers faster than they break\" happens much before you can outrun human civilization\nso the relevant threshold you cross is the one where you are outrunning civilization\n(and my best guess about human evolution is pretty similar, it looks like humans are smart enough to foom over a few hundred thousand years, and that we were the ones to foom because that is also roughly how long it was taking evolution to meaningfully improve our cognition—if we foomed slower it would have instead been a smarter successor who overtook us, if we foomed faster it would have instead been a dumber predecessor, though this is much less of a sure-thing than the AI case because natural selection is not trying to make something that fooms)\nand regarding {natural selection, humans, AGI} the main question is why modern AI and homo erectus (or even chimps) aren't in the set\nit feels like the core disagreement is that I mostly see a difference in degree between the various animals, and between modern AI and future AI, a difference that is likely to be covered by gradual improvements that are pretty analogous to contemporary improvements, and so as the AI community making contemporary improvements grows I get more and more confident that TAI will be a giant industry rather than an innovation\n\n\n\n\n[Ngo][5:45]\nDo you have a source on Jobs having looked hard for a successor who wasn't Tim Cook?\nAlso, I don't have strong opinions about how well Apple is doing now, so I default to looking at the share price, which seems very healthy.\n(Although I note in advance that this doesn't feel like a particularly important point, roughly for the same reason that Paul mentioned: gradualism about Steve Jobs doesn't seem like a central example of the type of gradualism that informs beliefs about AI development.)\n\n\n\n\n[Yudkowsky][10:40]\nMy source is literally \"my memory of stuff that Nick Beckstead just said to me in person\", maybe he can say more if we invite him.\nI'm not quite sure what to do with the notion that \"gradualism about Steve Jobs\" is somehow less to be expected than gradualism 
about AGI projects. Humans are GIs. They are extremely similar to each other design-wise. There are a lot of humans, billions of them, many many many more humans than I expect AGI projects. Despite this the leading edge of human-GIs is sparse enough in the capability space that there is no 90%-of-Steve-Jobs that Jobs can locate, and there is no 90%-of-von-Neumann known to 20th century history. If we are not to take any evidence about this to A-GIs, then I do not understand the rules you're using to apply gradualism to some domains but not others.\nAnd to be explicit, a skeptic who doesn't find these divisions intuitive, might well ask, \"Is gradualism perhaps isomorphic to 'The coin always comes up heads on Heady occasions', where 'Heady' occasions are determined by an obscure intuitive method going through some complicated nonverbalizable steps one of which is unfortunately 'check whether the coin actually came up heads'?\"\n(As for my own theory, it's always been that AGIs are mostly like AGIs and not very much like humans or the airplane-manufacturing industry, and I do not, on my own account of things, appeal much to supposed outside viewing or base rates.)\n\n\n\n[Shulman][11:11]\nI think the way to apply it is to use observable data (drawn widely) and math.\nSteve Jobs does look like a (high) draw (selected for its height, in the sparsest tail of the CEO distribution) out of the economic and psychometric literature (using the same kind of approach I use in other areas like estimating effects of introducing slightly superhuman abilities on science, the genetics of height, or wealth distributions). You have roughly normal or log-normal distributions on some measures of ability (with fatter tails when there are some big factors present, e.g. super-tall people are enriched for normal common variants for height but are more frequent than a Gaussian estimated from the middle range because of some weird disease/hormonal large effects). And we have lots of empirical data about the thickness and gaps there. Then you have a couple effects that can make returns in wealth/output created larger.\nYou get amplification from winner-take-all markets, IT, and scale that let higher ability add value to more places. This is the same effect that lets top modern musicians make so much money. Better CEOs get allocated to bigger companies because multiplicative management decisions are worth more in big companies. Software engineering becomes more valuable as the market for software grows.\nWealth effects are amplified by multiplicative growth (noise in a given period multiplies wealth for the rest of the series, and systematic biases from abilities can grow exponentially or superexponentially over a lifetime), and there are some versions of that in gaining expensive-to-acquire human capital (like fame for Hollywood actors, or experience using incredibly expensive machinery or companies).\nAnd we can read off the distributions of income, wealth, market share, lead time in innovations, scientometrics, etc.\nThat sort of data lead you to expect cutting edge tech to be months to a few years ahead of followers, winner-take-all tech markets to a few leading firms and often a clearly dominant one (but not driving an expectation of being able to safely rest on laurels for years while others innovate without a moat like network effects). 
That's one of my longstanding arguments with Robin Hanson, that his model has more even capabilities and market share for AGI/WBE than typically observed (he says that AGI software will have to be more diverse requiring more specialized companies, to contribute so much GDP).\nIt is tough to sample for extreme values on multiple traits at once, superexponentially tough as you go out or have more criteria. CEOs of big companies are smarter than average, taller than average, have better social skills on average, but you can't find people who are near the top on several of those.\nhttps://www.hbs.edu/ris/Publication%20Files/16-044_9c05278e-9d11-4315-a744-de008edf4d80.pdf\nCorrelations between the things help, but it's tough. E.g. if you have thousands of people in a class on a measure of cognitive skill, and you select on only partially correlated matters of personality, interest, motivation, prior experience, etc, the math says it gets thin and you'll find different combos (and today we see more representation of different profiles of abilities, including rare and valuable ones, in this community)\nI think the bigger update for me from trying to expand high-quality save the world efforts has been on the funny personality traits/habits of mind that need to be selected and their scarcity.\n\n\n\n\n[Karnofsky][11:30]\nA cpl comments, without commitment to respond to responses:\n1. Something in the zone of \"context / experience / obsession\" seems important for explaining the Steve Jobs type thing. It seems to me that people who enter an area early tend to maintain an edge even over more talented people who enter later – examples are not just founder/CEO types but also early employees of some companies who are more experienced with higher-level stuff (and often know the history of how they got there) better than later-entering people.\n2. I'm not sure if I am just rephrasing something Carl or Paul has said, but something that bugs me a lot about the Rob/Eliezer arguments is that I feel like if I accept >5% probability for the kind of jump they're talking about, I don't have a great understanding of how I avoid giving >5% to a kajillion other claims from various startups that they're about to revolutionize their industry, in ways that seem inside-view plausible and seem to equally \"depend on facts about some physical domain rather than facts about reference classes.\"\nThe thing that actually most comes to mind here is Thiel – he has been a phenomenal investor financially, but he has also invested by now in a lot of \"atoms\" startups with big stories about what they might do, and I don't think any have come close to reaching those visions (though they have sometimes made $ by doing something orders of magnitude less exciting).\nIf a big crux here is \"whether Thielian secrets exist\" this track record could be significant.\nI think I might update if I had a cleaner sense of how I could take on this kind of \"Well, if it is just a fact about physics that I have no idea about, it can't be that unlikely\" view without then betting on a lot of other inside-view-plausible breakthroughs that haven't happened. Right now all I can say to imitate this lens is \"General intelligence is 'different'\"\nI don't feel the same way about \"AI might take over the world\" – I feel like I have good reasons this applies to AI and not a bunch of other stuff\n\n\n\n\n[Soares][11:11]\nOk, a few notes from me (feel free to ignore):\n1. 
It seems to me like the convo here is half attempting-to-crux and half attempting-to-distill-out-a-bet. I'm interested in focusing explicitly on cruxing for the time being, for whatever that's worth. (It seems to me like y'all're already trending in that direction.)\n2. It seems to me that one big revealed difference between the Eliezerverse and the Paulverse is something like:\n\nIn the Paulverse, we already have basically all the fundamental insights we need for AGI, and now it's just a matter of painstaking scaling.\nIn the Eliezerverse, there are large insights yet missing (and once they're found we have plenty of reason to expect things to go quickly).\n\nFor instance, in Eliezerverse they say \"The Wright flyer didn't need to have historical precedents, it was allowed to just start flying. Similarly, the AI systems of tomorrow are allowed to just start GIing without historical precedent.\", and in the Paulverse they say \"The analog of the Wright flyer has already happened, it was Alexnet, we are now in the phase analogous to the slow grinding transition from human flight to commercially viable human flight.\"\n(This seems to me like basically what Ajeya articulated upthread.)\n3. It seems to me that another revealed intuition-difference is in the difficulty that people have operating each other's models. This is evidenced by, eg, Eliezer/Rob saying things like \"I don't know how to operate the gradualness model without making a bunch of bad predictions about Steve Jobs\", and Paul/Holden responding with things like \"I don't know how to operate the secrets-exist model without making a bunch of bad predictions about material startups\".\nI'm not sure whether this is a shallower or deeper disagreement than (2). I'd be interested in further attempts to dig into the questions of how to operate the models, in hopes that the disagreement looks interestingly different once both parties can at least operate the other model.\n\n\nThe post Conversation on technology forecasting and gradualism appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Conversation on technology forecasting and gradualism", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=2", "id": "4ead8eb9f707d113fc5168b813ed7b84"} {"text": "More Christiano, Cotra, and Yudkowsky on AI progress\n\n\n \nThis post is a transcript of a discussion between Paul Christiano, Ajeya Cotra, and Eliezer Yudkowsky (with some comments from Rob Bensinger, Richard Ngo, and Carl Shulman), continuing from 1, 2, and 3.\n \nColor key:\n Chat by Paul and Eliezer \n Other chat \n \n10.2. Prototypes, historical perspectives, and betting\n \n\n[Bensinger][4:25]\nI feel confused about the role \"innovations are almost always low-impact\" plays in slow-takeoff-ish views.\nSuppose I think that there's some reachable algorithm that's different from current approaches, and can do par-human scientific reasoning without requiring tons of compute.\nThe existence or nonexistence of such an algorithm is just a fact about the physical world. If I imagine one universe where such an algorithm exists, and another where it doesn't, I don't see why I should expect that one of those worlds has more discontinuous change in GWP, ship sizes, bridge lengths, explosive yields, etc. (outside of any discontinuities caused by the advent of humans and the advent of AGI)?
What do these CS facts have to do with the other facts?\nBut AI Impacts seems to think there's an important connection, and a large number of facts of the form 'steamships aren't like nukes' seem to undergird a lot of Paul's confidence that the scenario I described —\n(\"there's some reachable algorithm that's different from current approaches, and can do par-human scientific reasoning without requiring tons of compute.\")\n— is crazy talk. (Unless I'm misunderstanding. As seems actually pretty likely to me!)\n(E.g., Paul says \"To me your model just seems crazy, and you are saying it predicts crazy stuff at the end but no crazy stuff beforehand\", and one of the threads of the timelines conversation has been Paul asking stuff like \"do you want to give any example other than nuclear weapons of technologies with the kind of discontinuous impact you are describing?\".)\nPossibilities that came to mind for me:\n1. The argument is 'reality keeps surprising us with how continuous everything else is, so we seem to have a cognitive bias favoring discontinuity, so we should have a skeptical prior about our ability to think our way to 'X is discontinuous' since our brains are apparently too broken to do that well?\n(But to get from 1 to 'discontinuity models are batshit' we surely need something more probability-mass-concentrating than just a bias argument?)\n2. The commonality between steamship sizes, bridge sizes, etc. and AGI is something like 'how tractable is the world?'. A highly tractable world, one whose principles are easy to understand and leverage, will tend to have more world-shatteringly huge historical breakthroughs in various problems, and will tend to see a larger impact from the advent of humans and the advent of AGI.\nOur world looks much less tractable, so even if there's a secret sauce to building AGI, we should expect the resultant AGI to be a lot less impactful.\n\n\n\n\n[Ngo][5:06]\nI endorse #2 (although I think more weakly than Paul does) and would also add #3: another commonality is something like \"how competitive is innovation?\"\n\n\n\n\n[Shulman][8:22]\n@RobBensinger It's showing us a fact about the vast space of ideas and technologies we've already explored that they are not so concentrated and lumpy that the law of large numbers doesn't work well as a first approximation in a world with thousands or millions of people contributing. And that specifically includes past computer science innovation.\nSo the 'we find a secret sauce algorithm that causes a massive unprecedented performance jump, without crappier predecessors' is a 'separate, additional miracle' at exactly the same time as the intelligence explosion is getting going. You can get hyperbolic acceleration from increasing feedbacks from AI to AI hardware and software, including crazy scale-up at the end, as part of a default model. But adding on to it that AGI is hit via an extremely large performance jump of a type that is very rare, takes a big probability penalty.\nAnd the history of human brains doesn't seem to provide strong evidence of a fundamental software innovation, vs hardware innovation and gradual increases in selection applied to cognition/communication/culture.\nThe fact that, e.g. 
AIs are mastering so much math and language while still wielding vastly infrahuman brain-equivalents, and crossing human competence in many domains (where there was ongoing effort) over decades is significant evidence for something smoother than the development of modern humans and their culture.\nThat leaves me not expecting a simultaneous unusual massive human concentrated algorithmic leap with AGI, although I expect wildly accelerating progress from increasing feedbacks at that time. Crossing a given milestone is disproportionately likely to happen in the face of an unusually friendly part/jump of a tech tree (like AlexNet/the neural networks->GPU transition) but still mostly not, and likely not from an algorithmic change unprecedented in computer science.\nhttps://aiimpacts.org/?s=cross+\n\n\n\n\n[Yudkowsky][11:26][11:37]\n\nThe existence or nonexistence of such an algorithm is just a fact about the physical world. If I imagine one universe where such an algorithm exists, and another where it doesn't, I don't see why I should expect that one of those worlds has more discontinuous change in GWP, ship sizes, bridge lengths, explosive yields, etc. (outside of any discontinuities caused by the advent of humans and the advent of AGI)? What do these CS facts have to do with the other facts?\n\nI want to flag strong agreement with this. I am not talking about change in ship sizes because that is relevant in any visible way on my model; I'm talking about it in hopes that I can somehow unravel Carl and Paul's model, which talks a whole lot about this being Relevant even though that continues to not seem correlated to me across possible worlds.\nI think a lot in terms of \"does this style of thinking seem to have any ability to bind to reality\"? A lot of styles of thinking in futurism just don't.\nI imagine Carl and Paul as standing near the dawn of hominids asking, \"Okay, let's try to measure how often previous adaptations resulted in simultaneous fitness improvements across a wide range of environmental challenges\" or \"what's the previous record on an organism becoming more able to survive in a different temperature range over a 100-year period\" or \"can we look at the variance between species in how high they fly and calculate how surprising it would be for a species to make it out of the atmosphere\"\nAnd all of reality is standing somewhere else, going on ahead to do its own thing.\nNow maybe this is not the Carl and Paul viewpoint but if so I don't understand how not.
It's not that viewpoint plus a much narrower view of relevance, because AI Impacts got sent out to measure bridge sizes.\nI go ahead and talk about these subjects, in part because maybe I can figure out some way to unravel the viewpoint on its own terms, in part because maybe Carl and Paul can show that they have a style of thinking that works in its own right and that I don't understand, and in part because people like Paul's nonconcrete cheerful writing better and prefer to live there mentally and I have to engage on their terms because they sure won't engage on mine.\n\n\n\n\nBut I do not actually think that bridge lengths or atomic weapons have anything to do with this.\nCarl and Paul may be doing something sophisticated but wordless, where they fit a sophisticated but wordless universal model of technological permittivity to bridge lengths, then have a wordless model of cognitive scaling in the back of their minds, then get a different prediction of Final Days behavior, then come back to me and say, \"Well, if you've got such a different prediction of Final Days behavior, can you show me some really large bridges?\"\nBut this is not spelled out in the writing – which, I do emphasize, is a social observation that would be predicted regardless, because other people have not invested a ton of character points in the ability to spell things out, and a supersupermajority would just plain lack the writing talent for it.\nAnd what other EAs reading it are thinking, I expect, is plain old Robin-Hanson-style reference class tennis of \"Why would you expect intelligence to scale differently from bridges, where are all the big bridges?\"\n\n\n\n[Cotra][11:36][11:40]\n(Just want to interject that Carl has higher P(doom) than Paul and has also critiqued Paul for not being more concrete, and I doubt that this is the source of the common disagreements that Paul/Carl both have with Eliezer)\n\n\n\n\n\nFrom my perspective the thing the AI impacts investigation is asking is something like \"When people are putting lots of resources into improving some technology, how often is it the case that someone can find a cool innovation that improves things a lot relative to the baseline?\" I think that your response to that is something like \"Sure, if the broad AI market were efficient and everyone were investigating the right lines of research, then AI progress might be smooth, but AGI would have also been developed way sooner. We can't safely assume that AGI is like an industry where lots of people are pushing toward the same thing\"\nBut it's not assuming a great structural similarity between bridges and AI, except that they're both things that humans are trying hard to find ways to improve\n\n\n\n\n[Yudkowsky][11:42]\nI can imagine writing responses like that, if I was engaging on somebody else's terms. 
As with Eliezer-2012's engagement with Pat Modesto against the careful proof that HPMOR cannot possibly become one of the measurably most popular fanfictions, I would never think anything like that inside my own brain.\nMaybe I just need to do a thing that I have not done before, and set my little $6000 Roth IRA to track a bunch of investments that Carl and/or Paul tell me to make, so that my brain will actually track the results, and I will actually get a chance to see this weird style of reasoning produce amazing results.\n\n\n\n[Bensinger][11:44]\n\nSure, if the broad AI market were efficient and everyone were investigating the right lines of research, then AI progress might be smooth\n\nPresumably also \"'AI progress' subsumes many different kinds of cognition, we don't currently have baby AGIs, and when we do figure out how to build AGI the very beginning of the curve (the Wright flyer moment, or something very shortly after) will correspond to a huge capability increase.\"\n\n\n\n\n[Yudkowsky][11:46]\nI think there's some much larger scale in which it's worth mentioning that on my own terms of engagement I do not naturally think like this. I don't feel like you could get Great Insight by figuring out what the predecessor technologies must have been of the Wright Flyer, finding industries that were making use of them, and then saying Behold the Heralds of the Wright Flyer. It's not a style of thought binding upon reality.\nThey built the Wright Flyer. It flew. Previous stuff didn't fly. It happens. Even if you yell a lot at reality and try to force it into an order, that's still what your actual experience of the surprising Future will be like, you'll just be more surprised by it.\nLike you can super want Technologies to be Heralded by Predecessors which were Also Profitable but on my native viewpoint this is, like, somebody with a historical axe to grind, going back and trying to make all the history books read like this, when I have no experience of people who were alive at the time making gloriously correct futuristic predictions using this kind of thinking.\n\n\n\n[Cotra][11:53]\nI think Paul's view would say:\n\nThings certainly happen for the first time\nWhen they do, they happen at small scale in shitty prototypes, like the Wright Flyer or GPT-1 or AlphaGo or the Atari bots or whatever\nWhen they're making a big impact on the world, it's after a lot of investment and research, like commercial aircrafts in the decades after Kitty Hawk or like the investments people are in the middle of making now with AI that can assist with coding\n\nPaul's view says that the Kitty Hawk moment already happened for the kind of AI that will be super transformative and could kill us all, and like the historical Kitty Hawk moment, it was not immediately a huge deal\n\n\n\n\n[Yudkowsky][11:56]\nThere is, I think, a really basic difference of thinking here, which is that on my view, AGI erupting is just a Thing That Happens and not part of a Historical Worldview or a Great Trend.\nHuman intelligence wasn't part of a grand story reflected in all parts of the ecology, it just happened in a particular species.\nNow afterwards, of course, you can go back and draw all kinds of Grand Trends into which this Thing Happening was perfectly and beautifully fitted, and yet, it does not seem to me that people have a very good track record of thereby predicting in advance what surprising news story they will see next – with some rare, narrow-superforecasting-technique exceptions, like the Things chart on a steady 
graph and we know solidly what a threshold on that graph corresponds to and that threshold is not too far away compared to the previous length of the chart.\nOne day the Wright Flyer flew. Anybody in the future with benefit of hindsight, who wanted to, could fit that into a grand story about flying, industry, travel, technology, whatever; if they've been on the ground at the time, they would not have thereby had much luck predicting the Wright Flyer. It can be fit into a grand story but on the ground it's just a thing that happened. It had some prior causes but it was not thereby constrained to fit into a storyline in which it was the plot climax of those prior causes.\nMy worldview sure does permit there to be predecessor technologies and for them to have some kind of impact and for some company to make a profit, but it is not nearly as interested in that stuff, on a very basic level, because it does not think that the AGI Thing Happening is the plot climax of a story about the Previous Stuff Happening.\n\n\n\n[Cotra][12:01]\nThe fact that you express this kind of view about AGI erupting one day is why I thought your thing in IEM was saying there was a major algorithmic innovation from chimps to humans, that humans were qualitatively and not just quantitatively better than chimps and this was not because of their larger brain size primarily. But I'm confused because up thread in the discussion of evolution you were emphasizing much more that there was an innovation between dinosaurs and primates, not that there was an innovation between chimps and humans, and you seemed more open to the chimp/human diff being quantitative and brain-size driven than I had thought you'd be. But being open to the chimp-human diff being quantitative/brain-size-driven suggests to me that you should be more open than you are to AGI being developed by slow grinding on the same shit, instead of erupting without much precedent?\n\n\n\n\n[Yudkowsky][12:01]\nI think you're confusing a meta-level viewpoint with an object-level viewpoint.\nThe Wright Flyer does not need to be made out of completely different materials from all previous travel devices, in order for the Wright Flyer to be a Thing That Happened One Day which wasn't the plot climax of a grand story about Travel and which people at the time could not have gotten very far in advance-predicting by reasoning about which materials were being used in which conveyances and whether those conveyances looked like they'd be about to start flying.\nIt is the very viewpoint to which I am objecting, which keeps on asking me, metaphorically speaking, to explain how the Wright Flyer could have been made of completely different materials in order for it to be allowed to be so discontinuous with the rest of the Travel story of which it is part.\nOn my viewpoint they're just different stories so the Wright Flyer is allowed to be its own thing even though it is not made out of an unprecedented new kind of steel that floats.\n\n\n\n[Cotra][12:06]\nThe claim I'm making is that Paul's view predicts a lag and a lot of investment between the first flight and aircraft making a big impact on the travel industry, and predicts that the first flight wouldn't have immediately made a big impact on the travel industry. 
In other words Kitty Hawk isn't a discontinuity in the Paul view because the metrics he'd expect to be continuous are the ones that large numbers of people are trying hard to optimize, like cost per mile traveled or whatnot, not metrics that almost nobody is trying to optimize, like \"height flown.\"\nIn other words, it sounds like you're saying:\n\nKitty Hawk is analogous to AGI erupting\nPrevious history of travel is analogous to pre-AGI history of AI\n\nWhile Paul is saying:\n\nKitty Hawk is analogous to e.g. AlexNet\nLater history of aircraft is analogous to the post-AlexNet story of AI which we're in the middle of living, and will continue on to make huge Singularity-causing impacts on the world\n\n\n\n\n\n[Yudkowsky][12:09]\nWell, unfortunately, Paul and I both seem to believe that our models follow from observing the present-day world, rather than being incompatible with it, and so when we demand of each other that we produce some surprising bold prediction about the present-day world, we both tend to end up disappointed.\nI would like, of course, for Paul's surprisingly narrow vision of a world governed by tightly bound stories and predictable trends, to produce some concrete bold prediction of the next few years which no ordinary superforecaster would produce, but Paul is not under the impression that his own worldview is similarly strange and narrow, and so has some difficulty in answering this request.\n\n\n\n[Cotra][12:09]\nBut Paul offered to bet with you about literally any quantity you choose?\n\n\n\n\n[Yudkowsky][12:10]\nI did assume that required an actual disagreement, eg, I cannot just go look up something superforecasters are very confident about and then demand Paul to bet against it.\n\n\n\n[Cotra][12:12]\nIt still sounds to me like \"take a basket of N performance metrics, bet that the model size to perf trend will break upward in > K of them within e.g. 2 or 3 years\" should sound good to you, I'm confused why that didn't. If it does and it's just about the legwork then I think we could get someone to come up with the benchmarks and stuff for you\nOr maybe the same thing but >K of them will break downward, whatever\nWe could bet about the human perception of sense in language models, for example\n\n\n\n\n[Yudkowsky][12:14]\nI am nervous about Paul's definition of \"break\" and the actual probabilities to be assigned. You see, both Paul and I think our worldview is a very normal one that matches current reality quite well, so when we are estimating parameters like these, Paul is liable to do it empirically, and I am also liable to do it empirically as my own baseline, and if I point to a trend over time in how long it takes to go from par-human to superhuman performance decreasing, Imaginary Paul says \"Ah, yes, what a fine trend, I will bet that things follow this trend\" and Eliezer says \"No that is MY trend, you don't get to follow it, you have to predict that par-human to superhuman time will be constant\" and Paul is like \"lol no I get to be a superforecaster and follow trends\" and we fail to bet.\nMaybe I'm wrong in having mentally played the game out ahead that far, for it is, after all, very hard to predict the Future, but that's where I'd foresee it failing.\n\n\n\n[Cotra][12:16]\nI don't think you need to bet about calendar times from par-human to super-human, and any meta-trend in that quantity. 
It sounds like Paul is saying \"I'll basically trust the model size to perf trends and predict a 10x bigger model from the same architecture family will get the perf the trends predict,\" and you're pushing back against that saying e.g. that humans won't find GPT-4 to be subjectively more coherent than GPT-3 and that Paul is neglecting that there could be major innovations in the future that bring down the FLOP/s to get a certain perf by a lot and bend the scaling laws. So why not bet that Paul won't be as accurate as he thinks he is by following the scaling laws?\n\n\n\n\n[Bensinger][12:17]\n\nI think Paul's view would say:\n\nThings certainly happen for the first time\nWhen they do, they happen at small scale in shitty prototypes, like the Wright Flyer or GPT-1 or AlphaGo or the Atari bots or whatever\nWhen they're making a big impact on the world, it's after a lot of investment and research, like commercial aircrafts in the decades after Kitty Hawk or like the investments people are in the middle of making now with AI that can assist with coding\n\nPaul's view says that the Kitty Hawk moment already happened for the kind of AI that will be super transformative and could kill us all, and like the historical Kitty Hawk moment, it was not immediately a huge deal\n\n\"When they do, they happen at small scale in shitty prototypes, like the Wright Flyer or GPT-1 or AlphaGo or the Atari bots or whatever\"\nHow shitty the prototype is should depend (to a very large extent) on the physical properties of the tech. So I don't find it confusing (though I currently disagree) when someone says \"I looked at a bunch of GPT-3 behavior and it's cognitively sophisticated enough that I think it's doing basically what humans are doing, just at a smaller scale. The qualitative cognition I can see going on is just that impressive, taking into account the kinds of stuff I think human brains are doing.\"\nWhat I find confusing is, like, treating ten thousand examples of non-AI, non-cognitive-tech continuities (nukes, building heights, etc.) as though they're anything but a tiny update about 'will AGI be high-impact' — compared to the size of updates like 'look at how smart and high-impact humans were' and perhaps 'look at how smart-in-the-relevant-ways GPT-3 is'.\nLike, impactfulness is not a simple physical property, so there's not much reason for different kinds of tech to have similar scales of impact (or similar scales of impact n years after the first prototype). Mainly I'm not sure to what extent we disagree about this, vs. this just being me misunderstanding the role of the 'most things aren't high-impact' argument.\n(And yeah, a random historical technology drawn from a hat will be pretty low-impact. 
But that base rate also doesn't seem to me like it has much evidential relevance anymore when I update about what specific tech we're discussing.)\n\n\n\n\n[Cotra][12:18]\nThe question is not \"will AGI be high impact\" — Paul agrees it will, and for any FOOM quantity (like crossing a chimp-to-human-sized gap in a day or whatever) he agrees that will happen eventually too.\nThe technologies studies in the dataset spanned a wide range in their peak impact on society, and they're not being used to forecast the peak impact of mature AI tech\n\n\n\n\n[Bensinger][12:19]\nYeah, I'm specifically confused about how we know that the AGI Wright Flyer and its first successors are low-impact, from looking at how low-impact other technologies are (if that is in fact a meaningful-sized update on your view)\nNot drawing a comparison about the overall impactfulness of AI / AGI (e.g., over fifteen years)\n\n\n\n\n[Yudkowsky][12:21]\n\n[So why not bet that Paul won't be as accurate as he thinks he is by following the scaling laws?]\n\nI'm pessimistic about us being able to settle on the terms of a bet like that (and even more so about being able to bet against Carl on it) but in broad principle I agree. The trouble is that if a trend is benchmarkable, I believe more in the trend continuing at least on the next particular time, not least because I believe in people Goodharting benchmarks.\nI expect a human sense of intelligence to be harder to fool (even taking into account that it's being targeted to a nonzero extent) but I also expect that to be much harder to measure and bet upon than the Goodhartable metrics. And I think our actual disagreement is more visible over portfolios of benchmarks breaking upward over time, but I also expect that if you ask Paul and myself to quantify our predictions, we both go, \"Oh, my theory is the one that fits ordinary reality so obviously I will go look at superforecastery trends over ordinary reality to predict this specifically\" and I am like, \"No, Paul, if you'd had to predict that without looking at the data, your worldview would've predicted trends breaking down less often\" and Paul is like \"But Eliezer, shouldn't you be predicting much more upward divergence than this.\"\nAgain, perhaps I'm being overly gloomy.\n\n\n\n[Cotra][12:23]\nI think we should try to find ML predictions where you defer to superforecasters and Paul disagrees, since he said he would bet against superforecasters in ML\n\n\n\n\n[Yudkowsky][12:24]\nI am also probably noticeably gloomier and less eager to bet because the whole fight is taking place on grounds that Paul thinks is important and part of a connected story that continuously describes ordinary reality, and that I think is a strange place where I can't particularly see how Paul's reasoning style works. So I'd want to bet against Paul's overly narrow predictions by using ordinary superforecasting, and Paul would like to make his predictions using ordinary superforecasting.\nI am, indeed, more interested in a place where Paul wants to bet against superforecasters. I am not guaranteeing up front I'll bet with them because superforecasters did not call AlphaGo correctly and I do not think Paul has zero actual domain expertise. 
But Paul is allowed to pick up generic epistemic credit including from me by beating superforecasters because that credit counts toward believing a style of thought is even working literally at all; separately from the question of whether Paul's superforecaster-defying prediction also looks like a place where I'd predict in some opposite direction.\nDefinitely, places where Paul disagrees with superforecasters are much more interesting places to mine for bets.\nI am happy to hear about those.\n\n\n\n[Cotra][12:27]\nI think what Paul was saying last night is you find superforecasters betting on some benchmark performance, and he just figures out which side he'd take (and he expects in most/all superforecaster predictions that he would not be deferential, there's a side he would take)\n\n\n\n \n10.3. Predictions and betting (continued)\n \n\n[Christiano][12:29]\nnot really following along with the conversation, but my desire to bet about \"whatever you want\" was driven in significant part by frustration with Eliezer repeatedly saying things like \"people like Paul get surprised by reality\" and me thinking that's nonsense\n\n\n\n\n[Yudkowsky][12:29]\nSo the Yudkowskian viewpoint is something like… trends in particular technologies held fixed, will often break down; trends in Goodhartable metrics, will often stay on track but come decoupled from their real meat; trends across multiple technologies, will experience occasional upward breaks when new algorithms on the level of Transformers come out. For me to bet against superforecasters I have to see superforecasters saying something different, which I do not at this time actually know to be the case. For me to bet against Paul betting against superforecasters, the different thing Paul says has to be different from my own direction of disagreement with superforecasters.\n\n\n\n[Christiano][12:30]\nI still think that if you want to say \"this sort of reasoning is garbage empirically\" then you ought to be willing to bet about something. 
If we are just saying \"we agree about all of the empirics, it's just that somehow we have different predictions about AGI\" then that's fine and symmetrical.\n\n\n\n\n[Yudkowsky][12:30]\nI have been trying to revise that towards a more nvc \"when I try to operate this style of thought myself, it seems to do a bad job of retrofitting and I don't understand how it says X but not Y\".\n\n\n\n[Christiano][12:30]\neven then presumably if you think it's garbage you should be able to point to some particular future predictions where it would be garbage?\nif you used it\nand then I can either say \"no, I don't think that's a valid application for reason X\" or \"sure, I'm happy to bet\"\nand it's possible you can't find any places where it sticks its neck out in practice (even in your version), but then I'm again just rejecting the claim that it's empirically ruled out\n\n\n\n\n[Yudkowsky][12:31]\nI also think that we'd have an easier time betting if, like, neither of us could look at graphs over time, but we were at least told the values in 2010 and 2011 to anchor our estimates over one year, or something like that.\nThough we also need to not have a bunch of existing knowledge of the domain which is hard.\n\n\n\n[Christiano][12:32]\nI think this might be derailing some broader point, but I am provisionally mostly ignoring your point \"this doesn't work in practice\" if we can't find places where we actually foresee disagreements\n(which is fine, I don't think it's core to your argument)\n\n\n\n\n[Yudkowsky][12:33]\nPaul, you've previously said that you're happy to bet against ML superforecasts. That sounds promising. What are examples of those? Also I must flee to lunch and am already feeling sort of burned and harried; it's possible I should not ignore the default doomedness of trying to field questions from multiple sources.\n\n\n\n[Christiano][12:33]\nI don't know if superforecasters make public bets on ML topics, I was saying I'm happy to bet on ML topics and if your strategy is \"look up what superforecasters say\" that's fine and doesn't change my willingness to bet\nI think this is probably not as promising as either (i) dig in on the arguments that are most in dispute (seemed to be some juicier stuff earlier though I'm just focusing on work today) , or (ii) just talking generally about what we expect to see in the next 5 years so that we can at least get more of a vibe looking back\n\n\n\n\n[Shulman][12:35]\nYou can bet on the Metaculus AI Tournament forecasts.\nhttps://www.metaculus.com/ai-progress-tournament/\n\n\n\n\n[Yudkowsky][13:13]\nI worry that trying to jump straight ahead to Let's Bet is being too ambitious too early on a cognitively difficult problem of localizing disagreements.\nOur prophecies of the End Times's modal final days seem legit different; my impulse would be to try to work that backwards, first, in an intuitive sense of \"well which prophesied world would this experience feel more like living in?\", and try to dig deeper there before deciding that our disagreements have crystallized into short-term easily-observable bets.\nWe both, weirdly enough, feel that our current viewpoints are doing a great job of permitting the present-day world, even if, presumably, we both think the other's worldview would've done worse at predicting that world in advance. This cannot be resolved in an instant by standard techniques known to me. 
Let's try working back from the End Times instead.\nI have already stuck out my neck a little and said that, as we start to go past $50B invested in a model, we are starting to live at least a little more in what feels like the Paulverse, not because my model prohibits this, but because, or so I think, Paul's model more narrowly predicts it.\nIt does seem like the sort of generically weird big thing that could happen, to me, even before the End Times, there are corporations that could just decide to do that; I am hedging around this exactly because it does feel to my gut like that is a kind of headline I could read one day and have it still be years before the world ended, so I may need to be stingy with those credibility points inside of what I expect to be reality.\nBut if we get up to $10T to train a model, that is much more strongly Paulverse; it's not that this falsifies the Eliezerverse considered in isolation, but it is much more narrowly characteristic of the Words of Paul coming to pass; it feels much more to my gut that, in agreeing to this, I am not giving away Bayes points inside my own mainline.\nIf ordinary salaries for ordinary fairly-good programmers get up to $20M/year, this is not prohibited by my AI models per se; but it sure sounds like the world becoming less ordinary than I expected it to stay, and like it is part of Paul's Prophecy much more strongly than it is part of Eliezer's Prophecy.\nThat's two ways that I could concede a great victory to the Paulverse. They both have the disadvantages (from my perspective) that the Paulverse, though it must be drawing probability mass from somewhere in order to stake it there, is legitimately not – so far as I know – forced to claim that these things happen anytime soon. So they are ways for the Paulverse to win, but not ways for the Eliezerverse to win.\nThat I have said even this much, I claim, puts Paul in at least a little tiny bit of debt to me epistemic-good-behavior-wise; he should be able to describe events which would start to make him worry he was living in the Eliezerverse, even if his model did not narrowly rule them out, and even if those events had not been predicted by the Eliezerverse to occur within a narrowly prophesied date such that they would not thereby form a bet the Eliezerverse could clearly lose as well as win.\nI have not had much luck in trying to guess what the real Paul will say about issues like this one. My last attempt was to say, \"Well, what shouldn't happen, besides the End Times themselves, before world GDP has doubled over a four-year period?\" And Paul gave what seems to me like an overly valid reply, which, iirc and without looking it up, was along the lines of, \"well, nothing that would double world GDP in a 1-year period\".\nWhen I say this is overly valid, I mean that it follows too strongly from Paul's premises, and he should be looking for something less strong than that on which to make a beginning discovery of disagreement – maybe something which Paul's premises don't strongly forbid to him, but which nonetheless looks more like the Eliezerverse or like it would be relatively more strongly predicted by Eliezer's Prophecy.\nI do not model Paul as eagerly or strongly agreeing with, say, \"The Riemann Hypothesis should not be machine-proven\" or \"The ABC Conjecture should not be machine-proven\" before world GDP has doubled. 
It is only on Eliezer's view that proving the Riemann Hypothesis is about as much of a related or unrelated story to AGI, as are particular benchmarks of GDP.\nOn Paul's view as I am trying to understand and operate it, this benchmark may be correlated with AGI in time in the sense that most planets wouldn't do it during the Middle Ages before they had any computers, but it is not part of the story of AGI, it is not part of Paul's Prophecy; because it doesn't make a huge amount of money and increase GDP and get a huge ton of money flowing into investments in useful AI.\n(From Eliezer's perspective, you could tell a story about how a stunning machine proof of the Riemann Hypothesis got Bezos to invest $50 billion in training a successor model and that was how the world ended, and that would be a just-as-plausible model as some particular economic progress story, of how Stuff Happened Because Other Stuff Happened; it sounds like the story of OpenAI or of Deepmind's early Atari demo, which is to say, it sounds to Eliezer like history. Whereas on Eliezer!Paul's view, that's much more of a weird coincidence because it involves Bezos's unforced decision rather than the economic story of which AGI is capstone, or so it seems to me trying to operate Paul's view.)\nAnd yet Paul might still, I hope, be able to find something like \"The Riemann Hypothesis is machine-proven\", which even though it is not very much of an interesting part of his own Prophecy because it's not part of the economic storyline, sounds to him like the sort of thing that the Eliezerverse thinks happens as you get close to AGI, which the Eliezerverse says is allowed to start happening way before world GDP would double in 4 years; and as it happens I'd agree with that characterization of the Eliezerverse.\nSo Paul might say, \"Well, my model doesn't particularly forbid that the Riemann Hypothesis gets machine-proven before world GDP has doubled in 4 years or even started to discernibly break above trend by much; but that does sound more like we are living in the Eliezerverse than in the Paulverse.\"\nI am not demanding this particular bet because it seems to me that the Riemann Hypothesis may well prove to be unfairly targetable for current ML techniques while they are still separated from AGI by great algorithmic gaps. But if on the other hand Paul thinks that, I dunno, superhuman performance on stuff like the Riemann Hypothesis does tend to be more correlated with economically productive stuff because it's all roughly the same kind of capability, and lol never mind this \"algorithmic gap\" stuff, then maybe Paul is willing to pick that example; which is all the better for me because I do suspect it might decouple from the AI of the End, and so I think I have a substantial chance of winning and being able to say \"SEE!\" to the assembled EAs while there's still a year or two left on the timeline.\nI'd love to have credibility points on that timeline, if Paul doesn't feel as strong an anticipation of needing them.\n\n\n\n[Christiano][15:43]\n1/3 that RH has an automated proof before sustained 7%/year GWP growth?\nI think the clearest indicator is that we have AI that ought to be able to e.g. 
run the fully automated factory-building factory (not automating mines or fabs, just the robotic manufacturing and construction), but it's not being deployed or is being deployed with very mild economic impacts\nanother indicator is that we have AI systems that can fully replace human programmers (or other giant wins), but total investment in improving them is still small\nanother indicator is a DeepMind demo that actually creates a lot of value (e.g. 10x larger than DeepMind's R&D costs? or even comparable to DeepMind's cumulative R&D costs if you do the accounting really carefully and I definitely believe it and it wasn't replaceable by Brain), it seems like on your model things should \"break upwards\" and in mine that just doesn't happen that much\nsounds like you may have >90% on automated proof of RH before a few years of 7%/year growth driven by AI? so that would give a pretty significant odds ratio either way\nI think \"stack more layers gets stuck but a clever idea makes crazy stuff happen\" is generally going to be evidence for your view\nThat said, I'd mostly reject AlphaGo as an example, because it's just plugging in neural networks to existing go algorithms in almost the most straightforward way and the bells and whistles don't really matter. But if AlphaZero worked and AlphaGo didn't, and the system accomplished something impressive/important (like proving RH, or being significantly better at self-contained programming tasks), then that would be a surprise.\nAnd I'd reject LSTM -> transformer or MoE as an example because the quantitative effect size isn't that big.\nBut if something like that made the difference between \"this algorithm wasn't scaling before, and now it's scaling,\" then I'd be surprised.\nAnd the size of jump that surprises me is shrinking over time. So in a few years even getting the equivalent of a factor of 4 jump from some clever innovation would be very surprising to me.\n\n\n\n\n[Yudkowsky][17:44]\n\nsounds like you may have >90% on automated proof of RH before a few years of 7%/year growth driven by AI? so that would give a pretty significant odds ratio either way\n\nI emphasize that this is mostly about no on the GDP growth before the world ending, rather than yes on the RH proof, i.e., I am not 90% on RH before the end of the world at all. Not sure I'm over 50% on it happening before the end of the world at all.\nShould it be a consequence of easier earlier problems than full AGI? Yes, on my mainline model; but also on my model, it's a particular thing and maybe the particular people and factions doing stuff don't get around to that particular thing.\nI guess if I stare hard at my brain it goes 'ehhhh maybe 65% if timelines are relatively long and 40% if it's like the next 5 years', because the faster stuff happens, the less likely anyone is to get around to proving RH in particular or announcing that they've done so if they did.\nAnd if the econ threshold is set as low as 7%/yr, I start to worry about that happening in longer-term scenarios, just because world GDP has never been moving at a fixed rate over a log chart. the \"driven by AI\" part sounds very hard to evaluate. I want, I dunno, some other superforecaster or Carl to put a 90% credible bound on 'when world GDP growth hits 7% assuming little economically relevant progress in AI' before I start betting at 80%, let alone 90%, on what should happen before then. 
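[Editor's note: a toy illustration of the 'Bayes points' / 'credibility points' bookkeeping invoked throughout this exchange, using round stand-ins near the numbers floated above (Christiano's 1/3, Yudkowsky's rough 65%). The participants attach somewhat different conditions to those numbers, and the log-scoring convention here is the editor's, not part of any agreed bet.]

```python
# Editor's toy illustration of log-score ('Bayes points') accounting for a
# single binary proposition such as 'automated proof of RH comes before
# sustained 7%/yr GWP growth'. Probabilities are rough stand-ins, not bet terms.
import math

def bayes_points(p_a, p_b, event_happened):
    # Positive => forecaster A gains credibility relative to forecaster B.
    pa = p_a if event_happened else 1.0 - p_a
    pb = p_b if event_happened else 1.0 - p_b
    return math.log(pa / pb)

p_yudkowsky, p_christiano = 0.65, 1.0 / 3.0
print(bayes_points(p_yudkowsky, p_christiano, True))    # ~ +0.67 nats to Yudkowsky
print(bayes_points(p_yudkowsky, p_christiano, False))   # ~ -0.64 nats (Christiano gains)
```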
I don't have that credible bound already loaded and I'm not specialized in it.\nI'm wondering if we're jumping ahead of ourselves by trying to make a nice formal Bayesian bet, as prestigious as that might be. I mean, your 1/3 was probably important for you to say, as it is higher than I might have hoped, and I'd ask you if you really mean for that to be an upper bound on your probability or if that's your actual probability.\nBut, more than that, I'm wondering if, in the same vague language I used before, you're okay with saying a little more weakly, \"RH proven before big AI-driven growth in world GDP, sounds more Eliezerverse than Paulverse.\"\nIt could be that this is just not actually true because you do not think that RH is coupled to econ stuff in the Paul Prophecy one way or another, and my own declarations above do not have the Eliezerverse saying it enough more strongly than that. If you don't actually see this as a distinguishing Eliezerverse thing, if it wouldn't actually make you say \"Oh no maybe I'm in the Eliezerverse\", then such are the epistemic facts.\n\nAnd the size of jump that surprises me is shrinking over time. So in a few years even getting the equivalent of a factor of 4 jump from some clever innovation would be very surprising to me.\n\nThis sounds potentially more promising to me – seems highly Eliezerverse, highly non-Paul-verse according to you, and its negation seems highly oops-maybe-I'm-in-the-Paulverse to me too. How many years is a few? How large a jump is shocking if it happens tomorrow?\n\n\n \n11. September 24 conversation\n \n11.1. Predictions and betting (continued 2)\n \n\n[Christiano][13:15]\nI think RH is not that surprising, it's not at all clear to me where \"do formal math\" sits on the \"useful stuff AI could do\" spectrum, I guess naively I'd put it somewhere \"in the middle\" (though the analogy to board games makes it seem a bit lower, and there is a kind of obvious approach to doing this that seems to be working reasonably well so that also makes it seem lower), and 7% GDP growth is relatively close to the end (ETA: by \"close to the end\" I don't mean super close to the end, just far enough along that there's plenty of time for RH first)\nI do think that performance jumps are maybe more dispositive, but I'm afraid that it's basically going to go like this: there won't be metrics that people are tracking that jump up, but you'll point to new applications that people hadn't considered before, and I'll say \"but those new applications aren't that valuable\" whereas to you they will look more analogous to a world-ending AGI coming out from the blue\nlike for AGZ I'll be like \"well it's not really above the deep learning trend if you run it backwards\" and you'll be like \"but no one was measuring it before! you can't make up the trend in retrospect!\" and I'll be like \"OK, but the reason no one was measuring it before was that it was worse than traditional go algorithms until like 2 years ago and the upside is not large enough that you should expect a huge development effort for a small edge\"\n\n\n\n\n[Yudkowsky][13:43]\n\"factor of 4 jump from some clever innovation\" – can you say more about that part?\n\n\n\n[Christiano][13:53]\nlike I'm surprised if a clever innovation does more good than spending 4x more compute\n\n\n\n\n[Yudkowsky][15:04]\nI worry that I'm misunderstanding this assertion because, as it stands, it sounds extremely likely that I'd win. Would transformers vs. 
CNNs/RNNs have won this the year that the transformers paper came out?\n\n\n\n[Christiano][15:07]\nI'm saying that it gets harder over time, don't expect wins as big as transformers\nI think even transformers probably wouldn't make this cut though?\ncertainly not vs CNNs\nvs RNNs I think the comparison I'd be using to operationalize it is translation, as measured in the original paper\nthey do make this cut for translation, looks like the number is like 100 >> 4\n100x for english-german, more like 10x for english-french, those are the two benchmarks they cite\nbut both more than 4x\nI'm saying I don't expect ongoing wins that big\nI think the key ambiguity is probably going to be about what makes a measurement established/hard-to-improve\n\n\n\n\n[Yudkowsky][15:21]\nthis sounds like a potentially important point of differentiation; I do expect more wins that big.\nthe main thing that I imagine might make a big difference to your worldview, but not mine, is if the first demo of the big win only works slightly better (although that might also be because they were able to afford much less compute than the big players, which I think your worldview would see as a redeeming factor for my worldview?) but a couple of years later might be 4x or 10x as effective per unit compute (albeit that other innovations would've been added on by then to make the first innovation work properly, which I think on your worldview is like The Point or something)\nclarification: by \"transformers vs CNNs\" I don't mean transformers on ImageNet, I mean transformers vs. contemporary CNNs, RNNs, or both, being used on text problems.\nI'm also feeling a bit confused because eg Standard Naive Kurzweilian Accelerationism makes a big deal about the graphs keeping on track because technologies hop new modes as needed. what distinguishes your worldview from saying that no further innovations are needed for AGI or will give a big compute benefit along the way? is it that any single idea may only ever produce a smaller-than-4X benefit? is it permitted that a single idea plus 6 months of engineering fiddly details produce a 4X benefit?\nall this aside, \"don't expect wins as big as transformers\" continues to sound to me like a very promising point for differentiating Prophecies.\n\n\n\n[Christiano][15:50]\nI think the relevant feature of the innovation is that the work to find it is small relative to the work that went into the problem to date (though there may be other work on other avenues)\n\n\n\n\n[Yudkowsky][15:52]\nin, like, a local sense, or a global sense? if there's 100 startups searching for ideas collectively with $10B of funding, and one of them has an idea that's 10x more efficient per unit compute on billion-dollar problems, is that \"a small amount of work\" because it was only a $100M startup, or collectively an appropriate amount of work?\n\n\n\n[Christiano][15:53]\nI'm calling that an innovation because it's a small amount of work\n\n\n\n\n[Yudkowsky][15:54]\n(maybe it would be also productive if you pointed to more historical events like Transformers and said 'that shouldn't happen again', because I didn't realize there was anything you thought was like that. 
AlphaFold 2?)\n\n\n\n[Christiano][15:54]\nlike, it's not just a claim about EMH, it's also a claim about the nature of progress\nI think AlphaFold counts and is probably if anything a bigger multiplier, it's just uncertainty over how many people actually worked on the baselines\n\n\n\n\n[Yudkowsky][15:54]\nwhen should we see headlines like those subside?\n\n\n\n[Christiano][15:55]\nI mean, I think they are steadily subsiding\nas areas grow\n\n\n\n\n[Yudkowsky][15:55]\nhave they already begun to subside relative to 2016, on your view?\n(guess that was ninjaed)\n\n\n\n[Christiano][15:55]\nI would be surprised to see a 10x today on machine translation\n\n\n\n\n[Yudkowsky][15:55]\nwhere that's 10x the compute required to get the same result?\n\n\n\n[Christiano][15:55]\nthough not so surprised that we can avoid talking about probabilities\nyeah\nor to make it more surprising, old sota with 10x less compute\n\n\n\n\n[Yudkowsky][15:56]\nyeah I was about to worry that people wouldn't bother spending 10x the cost of a large model to settle our bet\n\n\n\n[Christiano][15:56]\nI'm more surprised if they get the old performance with 10x less compute though, so that way around is better on all fronts\n\n\n\n\n[Yudkowsky][15:57]\none reads papers claiming this all the time, though?\n\n\n\n[Christiano][15:57]\nlike, this view also leads me to predict that if I look at the actual amount of manpower that went into alphafold, it's going to be pretty big relative to the other people submitting to that protein folding benchmark\n\n\n\n\n[Yudkowsky][15:57]\nthough typically for the sota of 2 years ago\n\n\n\n[Christiano][15:58]\nnot plausible claims on problems people care about\nI think the comparison is to contemporary benchmarks from one of the 99 other startups who didn't find the bright idea\nthat's the relevant thing on your view, right?\n\n\n\n\n[Yudkowsky][15:59]\nI would expect AlphaFold and AlphaFold 2 to involve… maybe 20 Deep Learning researchers, and for 1-3 less impressive DL researchers to have been the previous limit, if the field even tried that much; I would not be the least surprised if DM spent 1000x the compute on AlphaFold 2, but I'd be very surprised if the 1-3 large research team could spend that 1000x compute and get anywhere near AlphaFold 2 results.\n\n\n\n[Christiano][15:59]\nand then I'm predicting that number is already <10 for machine translation and falling (maybe I shouldn't talk about machine translation or at least not commit to numbers given that I know very little about it, but whatever that's my estimate), and for other domains it will be <10 by the time they get as crowded as machine translation, and for transformative tasks they will be <2\nisn't there an open-source replication of alphafold?\nwe could bet about its performance relative to the original\n\n\n\n\n[Yudkowsky][16:00]\nit is enormously easier to do what's already been done\n\n\n\n[Christiano][16:00]\nI agree\n\n\n\n\n[Yudkowsky][16:00]\nI believe the open-source replication was by people who were told roughly what Deepmind had done, possibly more than roughly\non the Yudkowskian view, those 1-3 previous researchers just would not have thought of doing things the way Deepmind did them\n\n\n\n[Christiano][16:01]\nanyway, my guess is generally that if you are big relative to previous efforts in the area you can make giant improvements, if you are small relative to previous efforts you might get lucky (or just be much smarter) but that gets increasingly unlikely as the field gets bigger\nlike alexnet and 
transformers are big wins by groups who are small relative to the rest of the field, but transformers are much smaller than alexnet and future developments will continue to shrink\n\n\n\n\n[Yudkowsky][16:02]\nbut if you're the same size as previous efforts and don't have 100x the compute, you shouldn't be able to get huge improvements in the Paulverse?\n\n\n\n[Christiano][16:03]\nI mean, if you are the same size as all the prior effort put together?\nI'm not surprised if you can totally dominate in that case, especially if prior efforts aren't well-coordinated\nand for things that are done by hobbyists, I wouldn't be surprised if you can be a bit bigger than an individual hobbyist and dominate\n\n\n\n\n[Yudkowsky][16:03]\nI'm thinking something like, if Deepmind comes out with an innovation such that it duplicates old SOTA on machine translation with 1/10th compute, that still violates the Paulverse because Deepmind is not Paul!Big compared to all MTL efforts\nthough I am not sure myself how seriously Earth is taking MTL in the first place\n\n\n\n[Christiano][16:04]\nyeah, I think if DeepMind beats Google Brain by 10x compute next year on translation, that's a significant strike against Paul\n\n\n\n\n[Yudkowsky][16:05]\nI know that Google offers it for free, I expect they at least have 50 mediocre AI people working on it, I don't know whether or not they have 20 excellent AI people working on it and if they've ever tried training a 200B parameter non-MoE model on it\n\n\n\n[Christiano][16:05]\nI think not that seriously, but more seriously than 2016 and than anything else where you are seeing big swings\nand so I'm less surprised than for TAI, but still surprised\n\n\n\n\n[Yudkowsky][16:06]\nI am feeling increasingly optimistic that we have some notion of what it means to not be within the Paulverse! I am not feeling that we have solved the problem of having enough signs that enough of them will appear to tell EA how to notice which universe it is inside many years before the actual End Times, but I sure do feel like we are making progress!\nthings that have happened in the past that you feel shouldn't happen again are great places to poke for Eliezer-disagreements!\n\n\n\n[Christiano][16:07]\nI definitely think there's a big disagreement here about what to expect for pre-end-of-days ML\nbut lots of concerns about details like what domains are crowded enough to be surprising and how to do comparisons\nI mean, to be clear, I think the transformer paper having giant gains is also evidence against paulverse\nit's just that there are really a lot of datapoints, and some of them definitely go against paul's view\nto me it feels like the relevant thing for making the end-of-days forecast is something like \"how much of the progress comes from 'innovations' that are relatively unpredictable and/or driven by groups that are relatively small, vs scaleup and 'business as usual' progress in small pieces?\"\n\n\n\n \n11.2. Performance leap scenario\n \n\n[Yudkowsky][16:09]\nmy heuristics tell me to try wargaming out a particular scenario so we can determine in advance which key questions Paul asks\nin 2023, Deepmind releases an MTL program which is suuuper impressive. 
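[Editor's note: a small sketch, with made-up constants, of the compute-equivalence arithmetic behind comparisons like 'duplicates old SOTA with 1/10th compute' or Christiano's 'more good than spending 4x more compute' threshold: how far along the incumbent method's fitted scaling curve you would have to go to match the new method's result. This is the accounting the wargamed 2023 scenario being set up here turns on.]

```python
# Editor's illustrative sketch (all constants made up). Assume the incumbent
# method follows a fitted power law: loss ~= a * compute**(-b).
def compute_equivalent_gain(a, b, new_compute, new_loss):
    # Compute the incumbent curve would need to reach new_loss,
    # divided by the compute the new method actually used.
    incumbent_compute_needed = (a / new_loss) ** (1.0 / b)
    return incumbent_compute_needed / new_compute

# Hypothetical fit: incumbent loss = 1e4 * C**-0.15, which predicts ~7.1
# at a budget of 1e21 FLOP; a new architecture instead reaches loss 6.0 there.
print(compute_equivalent_gain(a=1e4, b=0.15, new_compute=1e21, new_loss=6.0))
# ~ 3.0, i.e. roughly a 3x compute-equivalent win -- under the 4x bar
# discussed above, though the same arithmetic applies to 10x or 12x claims.
```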
everyone who reads the MTL of, say, a foreign novel, or uses it to conduct a text chat with a contractor in Indonesia, is like, \"They've basically got it, this is about as good as a human and only makes minor and easily corrected errors.\"\n\n\n\n[Christiano][16:12]\nI mostly want to know how good Google's translation is at that time; and if DeepMind's product is expensive or only shows gains for long texts, I want to know whether there is actually an economic niche for it that is large relative to the R&D cost.\nlike I'm not sure whether anyone works at all on long-text translation, and I'm not sure if it would actually make Google $ to work on it\ngreat text chat with contractor in indonesia almost certainly meets that bar though\n\n\n\n\n[Yudkowsky][16:14]\nfurthermore, Eliezer and Paul publicized their debate sufficiently to some internal Deepmind people who spoke to the right other people at Deepmind, that Deepmind showed a graph of loss vs. previous-SOTA methods, and Deepmind's graph shows that their thing crosses the previous-SOTA line while having used 12x less compute for inference training.\n(note that this is less… salient?… on the Eliezerverse per se, than it is as an important issue and surprise on the Paulverse, so I am less confident about part.)\na nitpicker would note that previous-SOTA metric they used is however from 1 year previously and the new model also uses Sideways Batch Regularization which the 1-year-previous SOTA graph didn't use. on the other hand, they got 12x rather than 10x improvement so there was some error margin there.\n\n\n\n[Christiano][16:15]\nI'm OK if they don't have the benchmark graph as long as they have some evaluation that other people were trying at, I think real-time chat probably qualifies\n\n\n\n\n[Yudkowsky][16:15]\nbut then it's harder to measure the 10x\n\n\n\n[Christiano][16:15]\nalso I'm saying 10x less training compute, not inference (but 10x less inference compute is harder)\nyes\n\n\n\n\n[Yudkowsky][16:15]\nor to know that Deepmind didn't just use a bunch more compute\n\n\n\n[Christiano][16:15]\nin practice it seems almost certain that it's going to be harder to evaluate\nthough I agree there are really clean versions where they actually measured a benchmark other people work on and can compare training compute directly\n(like in the transformer paper)\n\n\n\n\n[Yudkowsky][16:16]\nliterally a pessimal typo, I meant to specify training vs. inference and somehow managed to type \"inference\" instead\n\n\n\n[Christiano][16:16]\nI'm more surprised by the clean version\n\n\n\n\n[Yudkowsky][16:17]\nI literally don't know what you'd be surprised by in the unclean version\nwas GPT-2 beating the field hard enough that it would have been surprising if they'd only used similar amounts of training compute\n?\nand how would somebody else judge that for a new system?\n\n\n\n[Christiano][16:17]\nI'd want to look at either human evals or logprob, I think probably not? 
but it's possible it was\n\n\n\n\n[Yudkowsky][16:19]\nbtw I also feel like the Eliezer model is more surprised and impressed by \"they beat the old model with 10x less compute\" than by \"the old model can't catch up to the new model with 10x more compute\"\nthe Eliezerverse thinks in terms of techniques that saturate\nsuch that you have to find new techniques for new training to go on helping\n\n\n\n[Christiano][16:19]\nit's definitely way harder to win at the old task with 10x less compute\n\n\n\n\n[Yudkowsky][16:19]\nbut for expensive models it seems really genuinely unlikely to me that anyone will give us this data!\n\n\n\n[Christiano][16:19]\nI think it's usually the case that if you scale up far enough past previous sota, you will be able to find tons of techniques needed to make it work at the new scale\nbut I'm expecting it to be less of a big deal because all experiments will be roughly at the frontier of what is feasible\nand so the new thing won't be able to afford to go 10x bigger\nunlike today when we are scaling up spending so fast\nbut this does make it harder for the next few years at least, which is maybe the key period\n(it makes it hard if we are both close enough to the edge that \"10x cheaper to get old results\" seems unlikely but \"getting new results that couldn't be achieved with 10x more compute and old method\" seems likely)\nwhat I basically expect is to (i) roughly know how much performance you get from making models 10x bigger, (ii) roughly know how much someone beat the competition, and then you can compare the numbers\n\n\n\n\n[Yudkowsky][16:22]\nwell, you could say, not in a big bet-winning sense, but in a mild trend sense, that if the next few years are full of \"they spent 100x more on compute in this domain and got much better results\" announcements, that is business as usual for the last few years and perfectly on track for the Paulverse; while the Eliezerverse permits but does not mandate that we will also see occasional announcements about brilliant new techniques, from some field where somebody already scaled up to the big models big compute, producing more impressive results than the previous big compute.\n\n\n\n[Christiano][16:23]\n(but \"performance from making models 10x bigger\" depends a lot on exactly how big they were and whether you are in a regime with unfavorable scaling)\n\n\n\n\n[Yudkowsky][16:23]\nso the Eliezerverse must be putting at least a little less probability mass on business-as-before Paulverse\n\n\n\n[Christiano][16:24]\nI am also expecting a general scale up in ML training runs over time, though it's plausible that you also expect that until the end of days and just expect a much earlier end of days\n\n\n\n\n[Yudkowsky][16:24]\nI mean, why wouldn't they?\nif they're purchasing more per unit of compute, they will quite often spend more on total compute (Jevons Paradox)\n\n\n\n[Christiano][16:25]\nthat's going to kill the \"they spent 100x more compute\" announcements soon enough\nlike, that's easy when \"100x more\" means $1M, it's a bit hard when \"100x more\" means $100M, it's not going to happen except on the most important tasks when \"100x more\" means $10B\n\n\n\n\n[Yudkowsky][16:26]\nthe Eliezerverse is full of weird things that somebody could apply ML to, and doesn't have that many professionals who will wander down completely unwalked roads; and so is much more friendly to announcements that \"we tried putting a lot of work and compute into protein folding, since nobody ever tried doing that seriously with protein 
folding before, look what came out\" continuing for the next decade if the Earth lasts that long\n\n\n\n[Christiano][16:27]\nI'm not surprised by announcements like protein folding, it's not that the world overall gets more and more hostile to big wins, it's that any industry gets more and more hostile as it gets bigger (or across industries, they get more and more hostile as the stakes grow)\n\n\n\n\n[Yudkowsky][16:28]\nwell, the Eliezerverse has more weird novel profitable things, because it has more weirdness; and more weird novel profitable things, because it has fewer people diligently going around trying all the things that will sound obvious in retrospect; but it also has fewer weird novel profitable things, because it has fewer novel things that are allowed to be profitable.\n\n\n\n[Christiano][16:29]\n(I mean, the protein folding thing is a datapoint against my view, but it's not that much evidence and it's not getting bigger over time)\nyeah, but doesn't your view expect more innovations for any given problem?\nlike, it's not just that you think the universe of weird profitable applications is larger, you also think AI progress is just more driven by innovations, right?\notherwise it feels like the whole game is about whether you think that AI-automating-AI-progress is a weird application or something that people will try on\n\n\n\n\n[Yudkowsky][16:30]\nthe Eliezerverse is more strident about there being lots and lots more stuff like \"ReLUs\" and \"batch normalization\" and \"transformers\" in the design space in principle, and less strident about whether current people are being paid to spend all day looking for them rather than putting their efforts someplace with a nice predictable payoff.\n\n\n\n[Christiano][16:31]\nyeah, but then don't you see big wins from the next transformers?\nand you think those just keep happening even as fields mature\n\n\n\n\n[Yudkowsky][16:31]\nit's much more permitted in the Eliezerverse than in the Paulverse\n\n\n\n[Christiano][16:31]\nor you mean that they might slow down because people stop working on them?\n\n\n\n\n[Yudkowsky][16:32]\nthis civilization has mental problems that I do not understand well enough to predict, when it comes to figuring out how they'll affect the field of AI as it scales\nthat said, I don't see us getting to AGI on Stack More Layers.\nthere may perhaps be a bunch of stacked layers in an AGI but there will be more ideas to it than that.\nsuch that it would require far, far more than 10X compute to get the same results with a GPT-like architecture if that was literally possible\n\n\n\n[Christiano][16:33]\nit seems clear that it will be more than 10x relative to GPT\nI guess I don't know what GPT-like architecture means, but from what you say it seems like normal progress would result in a non-GPT-like architecture\nso I don't think I'm disagreeing with that\n\n\n\n\n[Yudkowsky][16:34]\nI also don't think we're getting there by accumulating a ton of shallow insights; I expect it takes at least one more big one, maybe 2-4 big ones.\n\n\n\n[Christiano][16:34]\ndo you think transformers are a big insight?\n(is adding soft attention to LSTMs a big insight?)\n\n\n\n\n[Yudkowsky][16:34]\nhard to deliver a verdict of history there\nno\n\n\n\n[Christiano][16:35]\n(I think the intellectual history of transformers is a lot like \"take the LSTM out of the LSTM with attention\")\n\n\n\n\n[Yudkowsky][16:35]\n\"how to train deep gradient descent without activations and gradients blowing up or dying out\" was a big 
insight\n\n\n\n[Christiano][16:36]\nthat really really seems like the accumulation of small insights\n\n\n\n\n[Yudkowsky][16:36]\nthough the history of that big insight is legit complicated\n\n\n\n[Christiano][16:36]\nlike, residual connections are the single biggest thing\nand relus also help\nand batch normalization helps\nand attention is better than lstms\n\n\n\n\n[Yudkowsky][16:36]\nand the inits help (like xavier)\n\n\n\n[Christiano][16:36]\nyou could also call that the accumulation of big insights, but the point is that it's an accumulation of a lot of stuff\nmostly developed in different places\n\n\n\n\n[Yudkowsky][16:37]\nbut on the Yudkowskian view the biggest insight of all was the one waaaay back at the beginning where they were initing by literally unrolling Restricted Boltzmann Machines\nand people began to say: hey if we do this the activations and gradients don't blow up or die out\nit is not a history that strongly distinguishes the Paulverse from Eliezerverse, because that insight took time to manifest\nit was not, as I recall, the first thing that people said about RBM-unrolling\nand there were many little or not-really-so-little inventions that sustained the insight to deeper and deeper nets\nand those little inventions did not correspond to huge capability jumps immediately in the hands of their inventors, with, I think, the possible exception of transformers\nthough also I think back then people just didn't do as much SoTA-measuring-and-comparing\n\n\n\n[Christiano][16:40]\n(I think transformers are a significantly smaller jump than previous improvements)\nalso a thing we could guess about though\n\n\n\n\n[Yudkowsky][16:40]\nright, but did the people who demoed the improvements demo them as big capability jumps?\nharder to do when you don't have a big old well funded field with lots of eyes on SoTA claims\nthey weren't dense in SoTA, I think?\nanyways, there has not, so far as I know, been an insight of similar size to that last one, since then\n\n\n\n[Christiano][16:42]\nalso 10-100x is still actually surprising to me for transformers\nso I guess lesson learned\n\n\n\n\n[Yudkowsky][16:43]\nI think if you literally took pre-transformer SoTA, and the transformer paper plus the minimum of later innovations required to make transformers scale at all, then as you tried scaling stuff to GPT-1 scale, the old stuff would probably just flatly not work or asymptote?\n\n\n\n[Christiano][16:44]\nin general if you take anything developed at scale X and try to scale it way past X I think it won't work\nor like, it will work much worse than something that continues to get tweaked\n\n\n\n\n[Yudkowsky][16:44]\nI'm not sure I understand what you mean if you mean \"10x-100x on transformers actually happened and therefore actually surprised me\"\n\n\n\n[Christiano][16:44]\nyeah, I mean that given everything I know I am surprised that transformers were as large as a 100x improvement on translation\nin that paper\n\n\n\n\n[Yudkowsky][16:45]\nthough it may not help my own case, I remark that my generic heuristics say to have an assistant go poke a bit at that claim and see if your noticed confusion is because you are being more confused by fiction than by reality.\n\n\n\n[Christiano][16:45]\nyeah, I am definitely interested to understand a bit better what's up there\nbut tentatively I'm sticking to my guns on the original prediction\nif you have random 10-20 person teams getting 100x speedups versus prior sota\nas we approach TAI\nthat's so far from 
paulverse\n\n\n\n\n[Yudkowsky][16:46]\nlike, not about this case specially, just sheer reflex from \"this assertion in a science paper is surprising\" to \"go poke at it\". many unsurprising and hence unpoked assertions will also be false, of course, but the surprising ones even more so on average.\n\n\n\n[Christiano][16:48]\nanyway, seems like a good approach to finding a concrete disagreement\nand even looking back at this conversation would be a start for diagnosing who is more right in hindsight\nmain thing is to say how quickly and in what industries I'm how surprised\n\n\n\n\n[Yudkowsky][16:49]\nI suspect you want to attach conditions to that surprise? Like, the domain must be sufficiently explored OR sufficiently economically important, because Paulverse also predicts(?) that as of a few years (3?? 2??? 15????) all the economically important stuff will have been poked with lots of compute already.\nand if there's economically important domains where nobody's tried throwing $50M at a model yet, that also sounds like not-the-Paulverse?\n\n\n\n[Christiano][16:50]\nI think the economically important prediction doesn't really need that much of \"within a few years\"\nlike the total stakes have just been low to date\nnone of the deep learning labs are that close to paying for themselves\nso we're not in the regime where \"economic niche > R&D budget\"\nwe are still in the paulverse-consistent regime where investment is driven by the hope of future wins\nthough paul is surprised that R&D budgets aren't more larger than the economic value\n\n\n\n\n[Yudkowsky][16:51]\nwell, it's a bit of a shame from the Eliezer viewpoint that the Paulverse can't be falsifiable yet, then, considering that in the Eliezerverse it is allowed (but not mandated) for the world to end while most DL labs haven't paid for themselves.\nalbeit I'm not sure that's true of the present world?\nDM had that thing about \"we just rejiggered cooling the server rooms for Google and paid back 1/3 of their investment in us\" and that was years ago.\n\n\n\n[Christiano][16:52]\nI'll register considerable skepticism\n\n\n\n\n[Yudkowsky][16:53]\nI don't claim deep knowledge.\nBut if the imminence, and hence strength and falsifiability, of Paulverse assertions, depend on how much money all the deep learning labs are making, that seems like something we could ask OpenPhil to measure?\n\n\n\n[Christiano][16:55]\nit seems easier to just talk about ML tasks that people work on\nit seems really hard to arbitrate the \"all the important niches are invested in\" stuff in a way that's correlated with takeoff\nwhereas the \"we should be making a big chunk of our progress from insights\" seems like it's easier\nthough I understand that your view could be disjunctive, of either \"AI will have hidden secrets that yield great intelligence,\" or \"there are hidden secret applications that yield incredible profit\"\n(sorry that statement is crude / not very faithful)\nshould follow up on this in the future, off for now though\n\n\n\n\n[Yudkowsky][16:58]\n\n\n\n \n\nThe post More Christiano, Cotra, and Yudkowsky on AI progress appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "More Christiano, Cotra, and Yudkowsky on AI progress", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=3", "id": "3ca952907cbd74e03fcc9e20431b6893"} {"text": "Shulman and Yudkowsky on AI progress\n\n\n \nThis post is a transcript of a discussion between Carl Shulman 
and Eliezer Yudkowsky, following up on a conversation with Paul Christiano and Ajeya Cotra.\n \nColor key:\n\n\n\n\n Chat by Carl and Eliezer \n Other chat \n\n\n\n\n \n9.14. Carl Shulman's predictions\n \n\n[Shulman][20:30]\nI'll interject some points re the earlier discussion about how animal data relates to the 'AI scaling to AGI' thesis.\n1. In humans it's claimed the IQ-job success correlation varies by job. For a scientist or doctor it might be 0.6+, for a low complexity job more like 0.4, or more like 0.2 for simple repetitive manual labor. That presumably goes down a lot for animals with less in the way of hands, or ones focused on low-density foods like baleen whales or grazers. If it's 0.1 for animals like orcas or elephants, or 0.05, then there's 4-10x less fitness return to smarts.\n2. But they outmass humans by more than 4-10x. Elephants 40x, orca 60x+. Metabolically (20 watts divided by BMR of the animal) the gap is somewhat smaller though, because of metabolic scaling laws (energy scales with roughly the 3/4 or maybe 2/3 power of body mass, so total energy budgets grow more slowly than body mass does).\nhttps://en.wikipedia.org/wiki/Kleiber%27s_law\nIf dinosaurs were poikilotherms, that's a 10x difference in energy budget vs a mammal of the same size, although there is debate about their metabolism.\n3. If we're looking for an innovation in birds and primates, there's some evidence of 'hardware' innovation rather than 'software.' Herculano-Houzel reports in The Human Advantage (summarizing much prior neuron-counting work) different observational scaling laws for neuron number with brain mass for different animal lineages.\n\nWe were particularly interested in cellular scaling differences that might have arisen in primates. If the same rules relating numbers of neurons to brain size in rodents (6)\nThe brain of the capuchin monkey, for instance, weighing 52 g, contains >3× more neurons in the cerebral cortex and ≈2× more neurons in the cerebellum than the larger brain of the capybara, weighing 76 g.\n\n[Editor's Note: Quote source is \"Cellular scaling rules for primate brains.\"]\nIn rodents brain mass increases with neuron count as n^1.6, whereas it's close to linear (n^1.1) in primates. For cortex neurons vs. cortex mass the exponents are 1.7 and 1.0. In general birds and primates are outliers in neuron scaling with brain mass.\nNote also that bigger brains with lower neuron density have longer communication times from one side of the brain to the other. So primates and birds can have faster clock speeds for integrated thought than a large elephant or whale with similar neuron count.\n4. Elephants have brain mass ~2.5x human, and 3x neurons, but 98% of those are in the cerebellum (vs 80% or less in most animals; these are generally the tiniest neurons and seem to do a bunch of fine motor control). Human cerebral cortex has 3x the neurons of the elephant cortex (which has twice the mass). The giant cerebellum seems to be for controlling the very complex trunk.\nhttps://nautil.us/issue/35/boundaries/the-paradox-of-the-elephant-brain\nBlue whales get close to human neuron counts with much larger brains.\nhttps://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons\n5. As Paul mentioned, human brain volume correlation with measures of cognitive function after correcting for measurement error on the cognitive side is in the vicinity of 0.3-0.4 (might go a bit higher after controlling for non-functional brain volume variation, lower from removing confounds). 
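[Editor's note: a worked version of the metabolic-scaling arithmetic in point 2 above, assuming a ~20 W brain-scale budget, Kleiber's law with a 3/4 exponent, and round illustrative figures for human BMR and the mass ratio.]

```python
# Editor's worked example for point 2 above. Under Kleiber's law
# (BMR ~ mass**0.75) a ~40x heavier animal has only ~16x the energy budget,
# so a fixed ~20 W brain-sized allocation is a less extreme ask, relative to
# the budget, than the raw mass ratio suggests. Figures are rough.
BRAIN_WATTS = 20.0
human_bmr = 80.0                        # W, roughly 1700 kcal/day
elephant_bmr = human_bmr * 40 ** 0.75   # Kleiber scaling for ~40x the mass
print(BRAIN_WATTS / human_bmr)          # ~0.25: ~25% of human BMR
print(BRAIN_WATTS / elephant_bmr)       # ~0.016: same budget is ~1.6% for the elephant
```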
The genetic correlation with cognitive function in this study is 0.24:\nhttps://www.nature.com/articles/s41467-020-19378-5\nSo it accounts for a minority of genetic influences on cognitive ability. We'd also expect a bunch of genetic variance that's basically disruptive mutations in mutation-selection balance (e.g. schizophrenia seems to be a result of that, with schizophrenia alleles under negative selection, but a big mutational target, with the standing burden set by the level of fitness penalty for it; in niches with less return to cognition the mutational surface will be cleaned up less frequently and have more standing junk).\nOther sources of genetic variance might include allocation of attention/learning (curiosity and thinking about abstractions vs immediate sensory processing/alertness), length of childhood/learning phase, motivation to engage in chains of thought, etc.\nOverall I think there's some question about how to account for the full genetic variance, but mapping it onto the ML experience with model size, experience and reward functions being key looks compatible with the biological evidence. I lean towards it, although it's not cleanly and conclusively shown.\nRegarding economic impact of AGI, I do not buy the 'regulation strangles all big GDP boosts' story.\nThe BEA breaks down US GDP by industry here (page 11):\nhttps://www.bea.gov/sites/default/files/2021-06/gdp1q21_3rd_1.pdf\nAs I work through sectors and the rollout of past automation I see opportunities for large-scale rollout that is not heavily blocked by regulation. Manufacturing is still trillions of dollars, and robotic factories are permitted and produced under current law, with the limits being more about which tasks the robots work for at low enough cost (e.g. this stopped Tesla plans for more completely robotic factories). Also worth noting manufacturing is mobile and new factories are sited in friendly jurisdictions.\nSoftware to control agricultural machinery and food processing is also permitted.\nWarehouses are also low-regulation environments with logistics worth hundreds of billions of dollars. See Amazon's robot-heavy warehouses limited by robotics software.\nDriving is hundreds of billions of dollars, and Tesla has been permitted to use Autopilot, and there has been a lot of regulator enthusiasm for permitting self-driving cars with humanlike accident rates. Waymo still hasn't reached that it seems and is lowering costs.\nRestaurants/grocery stores/hotels are around a trillion dollars. Replacing humans in vision/voice tasks to take orders, track inventory (Amazon Go style), etc is worth hundreds of billions there and mostly permitted. 
Robotics cheap enough to replace low-wage labor there would also be valuable (although a lower priority than high-wage work if compute and development costs are similar).\nSoftware is close to a half trillion dollars and the internals of software development are almost wholly unregulated.\nFinance is over a trillion dollars, with room for AI in sales and management.\nSales and marketing are big and fairly unregulated.\nIn highly regulated and licensed professions like healthcare and legal services, you can still see a licensee mechanically administer the advice of the machine, amplifying their reach and productivity.\nEven in housing/construction there's still great profits to be made by improving the efficiency of what construction is allowed (a sector worth hundreds of billions).\nIf you're talking about legions of super charismatic AI chatbots, they could be doing sales, coaching human manual labor to effectively upskill it, and providing the variety of activities discussed above. They're enough to more than double GDP, even with strong Baumol effects/cost disease, I'd say.\nAlthough of course if you have AIs that can do so much the wages of AI and hardware researchers will be super high, and so a lot of that will go into the intelligence explosion, while before that various weaknesses that prevent full automation of AI research will also mess up activity in these other sectors to varying degrees.\nRe discontinuity and progress curves, I think Paul is right. AI Impacts went to a lot of effort assembling datasets looking for big jumps on progress plots, and indeed nukes are an extremely high percentile for discontinuity, and were developed by the biggest spending power (yes other powers could have bet more on nukes, but didn't, and that was related to the US having more to spend and putting more in many bets), with the big gains in military power per $ coming with the hydrogen bomb and over the next decade.\nhttps://aiimpacts.org/category/takeoff-speed/continuity-of-progress/discontinuous-progress-investigation/\nFor measurable hardware and software progress (Elo in games, loss on defined benchmarks), you have quite continuous hardware progress, and software progress that is on the same ballpark, and not drastically jumpy (like 10 year gains in 1), moreso as you get to metrics used by bigger markets/industries.\nI also agree with Paul's description of the prior Go trend, and how DeepMind increased $ spent on Go software enormously. That analysis was a big part of why I bet on AlphaGo winning against Lee Sedol at the time (the rest being extrapolation from the Fan Hui version and models of DeepMind's process for deciding when to try a match).\n\n\n\n\n[Yudkowsky][21:38]\nI'm curious about how much you think these opinions have been arrived at independently by yourself, Paul, and the rest of the OpenPhil complex?\n\n\n\n[Cotra][21:44]\nLittle of Open Phil's opinions are independent of Carl, the source of all opinions\n\n\n\n\n[Yudkowsky: ]\n[Ngo: ]\n\n\n\n\n\n\n\n\n\n[Shulman][21:44]\nI did the brain evolution stuff a long time ago independently. Paul has heard my points on that front, and came up with some parts independently. 
I wouldn't attribute that to anyone else in that 'complex.'\nOn the share of the economy those are my independent views.\nOn discontinuities, that was my impression before, but the additional AI Impacts data collection narrowed my credences.\nTBC on the brain stuff I had the same evolutionary concern as you, which was I investigated those explanations and they still are not fully satisfying (without more micro-level data opening the black box of non-brain volume genetic variance and evolution over time).\n\n\n\n\n[Yudkowsky][21:50]\nso… when I imagine trying to deploy this style of thought myself to predict the recent past without benefit of hindsight, it returns a lot of errors. perhaps this is because I do not know how to use this style of thought, but.\nfor example, I feel like if I was GPT-continuing your reasoning from the great opportunities still available in the world economy, in early 2020, it would output text like:\n\"There are many possible regulatory regimes in the world, some of which would permit rapid construction of mRNA-vaccine factories well in advance of FDA approval. Given the overall urgency of the pandemic some of those extra-USA vaccines would be sold to individuals or a few countries like Israel willing to pay high prices for them, which would provide evidence of efficacy and break the usual impulse towards regulatory uniformity among developed countries, not to mention the existence of less developed countries who could potentially pay smaller but significant amounts for vaccines. The FDA doesn't seem likely to actively ban testing; they might under a Democratic regime, but Trump is already somewhat ideologically prejudiced against the FDA and would go along with the probable advice of his advisors, or just his personal impulse, to override any FDA actions that seemed liable to prevent tests and vaccines from making the problem just go away.\"\n\n\n\n[Shulman][21:59]\nPharmaceuticals is a top 10% regulated sector, which is seeing many startups trying to apply AI to drug design (which has faced no regulatory barriers), which fits into the ordinary observed output of the sector. Your story is about regulation failing to improve relative to normal more than it in fact did (which is a dramatic shift, although abysmal relative to what would be reasonable).\nThat said, I did lose a 50-50 bet on US control of the pandemic under Trump (although I also correctly bet that vaccine approval and deployment would be historically unprecedently fast and successful due to the high demand).\n\n\n\n\n[Yudkowsky][22:02]\nit's not impossible that Carl/Paul-style reasoning about the future – near future, or indefinitely later future? 
– would start to sound more reasonable to me if you tried writing out a modal-average concrete scenario that was full of the same disasters found in history books and recent news\nlike, maybe if hypothetically I knew how to operate this style of thinking, I would know how to add disasters automatically and adjust estimates for them; so you don't need to say that to Paul, who also hypothetically knows\nbut I do not know how to operate this style of thinking, so I look at your description of the world economy and it seems like an endless list of cheerfully optimistic ingredients and the recipe doesn't say how many teaspoons of disaster to add or how long to cook it or how it affects the final taste\n\n\n\n[Shulman][22:06]\nLike when you look at historical GDP stats and AI progress they are made up of a normal rate of insanity and screwups.\n\n\n\n\n[Ngo: ]\n\n\n\n\n\n\n\n\n[Yudkowsky][22:07]\non my view of reality, I'm the one who expects business-as-usual in GDP until shortly before the world ends, if indeed business-as-usual-in-GDP changes at all, and you have an optimistic recipe for Not That which doesn't come with an example execution containing typical disasters?\n\n\n\n[Shulman][22:07]\nThings like failing to rush through neural network scaling over the past decade to the point of financial limitation on model size, insanity on AI safety, anti-AI regulation being driven by social media's role in politics.\n\n\n\n\n[Yudkowsky][22:09]\nfailing to deploy 99% robotic cars to new cities using fences and electronic gates\n\n\n\n[Shulman][22:09]\nHistorical growth has new technologies and stupid stuff messing it up.\n\n\n\n\n[Yudkowsky][22:09]\nso many things one could imagine doing with current tech, and yet, they are not done, anywhere on Earth\n\n\n\n[Shulman][22:09]\nAI is going to be incredibly powerful tech, and after a historically typical haircut it's still a lot bigger.\n\n\n\n\n[Yudkowsky][22:09]\nso some of this seems obviously driven by longer timelines in general\ndo you have things which, if they start to happen soonish and in advance of world GDP having significantly broken upward 3 years before then, cause you to say \"oh no I'm in the Eliezerverse\"?\n\n\n\n[Shulman][22:12]\nYou may be confusing my views and Paul's.\n\n\n\n\n[Yudkowsky][22:12]\n\"AI is going to be incredibly powerful tech\" sounds like long timelines to me, though?\n\n\n\n[Shulman][22:13]\nNo.\n\n\n\n\n[Yudkowsky][22:13]\nlike, \"incredibly powerful tech for longer than 6 months which has time to enter the economy\"\nif it's \"incredibly powerful tech\" in the sense of immediately killing everybody then of course we agree, but that didn't seem to be the context\n\n\n\n[Shulman][22:15]\nI think broadly human-level AGI means intelligence explosion/end of the world in less than a year, but tons of economic value is likely to leak out before that from the combination of worse general intelligence with AI advantages like huge experience.\n\n\n\n\n[Yudkowsky][22:15]\nmy worldview permits but does not mandate a bunch of weirdly powerful shit that people can do a couple of years before the end, because that would sound like a typically messy and chaotic history-book scenario especially if it failed to help us in any way\n\n\n\n[Shulman][22:15]\nAnd the economic impact is increasing superlinearly (as later on AI can better manage its own introduction and not be held back by human complementarities on both the production side and introduction side).\n\n\n\n\n[Yudkowsky][22:16]\nmy worldview also permits but does not 
mandate that you get up to the chimp level, chimps are not very valuable, and once you can do fully AGI thought it compounds very quickly\nit feels to me like the Paul view wants something narrower than that, a specific story about a great economic boom, and it sounds like the Carl view wants something that from my perspective seems similarly narrow\nwhich is why I keep asking \"can you perhaps be specific about what would count as Not That and thereby point to the Eliezerverse\"\n\n\n\n[Shulman][22:18]\nWe're in the Eliezerverse with huge kinks in loss graphs on automated programming/Putnam problems.\nNot from scaling up inputs but from a local discovery that is much bigger in impact than the sorts of jumps we observe from things like Transformers.\n\n\n\n\n[Yudkowsky][22:19]\n…my model of Paul didn't agree with that being a prophecy-distinguishing sign to first order (to second order, my model of Paul agrees with Carl for reasons unbeknownst to me)\nI don't think you need something very much bigger than Transformers to get sharp loss drops?\n\n\n\n[Shulman][22:19]\nnot the only disagreement\nbut that is a claim you seem to advance that seems bogus on our respective reads of the data on software advances\n\n\n\n\n[Yudkowsky][22:21]\nbut, sure, \"huge kinks in loss graphs on automated programming / Putnam problems\" sounds like something that is, if not mandated on my model, much more likely than it is in the Paulverse. though I am a bit surprised because I would not have expected Paul to be okay betting on that.\nlike, I thought it was an Eliezer-view unshared by Paul that this was a sign of the Eliezerverse.\nbut okeydokey if confirmed\nto be clear I do not mean to predict those kinks in the next 3 years specifically\nthey grow in probability on my model as we approach the End Times\n\n\n\n[Shulman][22:24]\nI also predict that AI chip usage is going to keep growing at enormous rates, and that the buyers will be getting net economic value out of them. The market is pricing NVDA (up more than 50x since 2014) at more than twice Intel because of the incredible growth rate, and it requires more crazy growth to justify the valuation (but still short of singularity). Although NVDA may be toppled by other producers.\nSimilarly for increasing spending on model size (although slower than when model costs were <$1M).\n\n\n\n\n[Yudkowsky][22:27]\nrelatively more plausible on my view, first because it's arguably already happening (which makes it easier to predict) and second because that can happen with profitable uses of AI chips which hover around on the economic fringes instead of feeding into core production cycles (waifutech)\nit is easy to imagine massive AI chip usage in a world which rejects economic optimism and stays economically sad while engaging in massive AI chip usage\nso, more plausible\n\n\n\n[Shulman][22:28]\nWhat's with the silly waifu example? That's small relative to the actual big tech company applications (where they quickly roll it into their software/web services or internal processes, which is not blocked by regulation and uses their internal expertise). 
Super chatbots would be used as salespeople, counselors, non-waifu entertainment.\nIt seems randomly off from existing reality.\n\n\n\n\n[Yudkowsky][22:29]\nseems more… optimistic, Kurzweilian?… to suppose that the tech gets used correctly the way a sane person would hope it would be used\n\n\n\n[Shulman][22:29]\nLike this is actual current use.\nHollywood and videogames alone are much bigger than anime, software is bigger than that, Amazon/Walmart logistics is bigger.\n\n\n\n\n[Yudkowsky][22:31]\nCompanies using super chatbots to replace customer service they already hated and previously outsourced, with a further drop in quality, is permitted by the Dark and Gloomy Attempt To Realistically Continue History model\nI am on board with wondering if we'll see sufficiently advanced videogame AI, but I'd point out that, again, that doesn't cycle core production loops harder\n\n\n\n[Shulman][22:33]\nOK, using an example of allowable economic activity that obviously is shaving off more than an order of magnitude on potential market is just misleading compared to something like FAANGSx10.\n\n\n\n\n[Yudkowsky][22:34]\nso, like, if I was looking for places that would break upward, I would be like \"universal translators that finally work\"\nbut I was also like that when GPT-2 came out and it hasn't happened even though you would think GPT-2 indicated we could get enough real understanding inside a neural network that you'd think, cognition-wise, it would suffice to do pretty good translation\nthere are huge current economic gradients pointing to the industrialization of places that, you might think, could benefit a lot from universal seamless translation\n\n\n\n[Shulman][22:36]\nCurrent translation industry is tens of billions, English learning bigger.\n\n\n\n\n[Yudkowsky][22:36]\nAmazon logistics are an interesting point, but there's the question of how much economic benefit is produced by automating all of it at once, Amazon cannot ship 10x as much stuff if their warehouse costs go down by 10x.\n\n\n\n[Shulman][22:37]\nDefinitely hundreds of billions of dollars of annual value created from that, e.g. by easing global outsourcing.\n\n\n\n\n[Yudkowsky][22:37]\nif one is looking for places where huge economic currents could be produced, AI taking down what was previously a basic labor market barrier, would sound as plausible to me as many other things\n\n\n\n[Shulman][22:37]\nAmazon has increased sales faster than it lowered logistics costs, there's still a ton of market share to take.\n\n\n\n\n[Yudkowsky][22:37]\nI am able to generate cheerful scenarios, eg if I need them for an SF short story set in the near future where billions of people are using AI tech on a daily basis and this has generated trillions in economic value\n\n\n\n[Shulman][22:38]\nBedtime for me though.\n\n\n\n\n[Yudkowsky][22:39]\nI don't feel like particular cheerful scenarios like that have very much of a track record of coming true. I would not be shocked if the next GPT-jump permits that tech, and I would then not be shocked if use of AI translation actually did scale a lot. I would be much more impressed, with Earth having gone well for once and better than I expected, if that actually produced significantly more labor mobility and contributed to world GDP.\nI just don't actively, >50% expect things going right like that. It seems to me that more often in real life, things do not go right like that, even if it seems quite easy to imagine them going right.\ngood night!\n\n\n \n10. September 22 conversation\n \n10.1. 
Scaling laws\n \n\n[Shah][3:05]\nMy attempt at a reframing:\nPlaces of agreement:\n\nTrend extrapolation / things done by superforecasters seem like the right way to get a first-pass answer\nSignificant intuition has to go into exactly which trends to extrapolate and why (e.g. should GDP/GWP be extrapolated as \"continue to grow at 3% per year\" or as \"growth rate continues to increase leading to singularity\")\nIt is possible to foresee deviations in trends based on qualitative changes in underlying drivers. In the Paul view, this often looks like switching from one trend to another. (For example: instead of \"continue to grow at 3%\" you notice that feedback loops imply hyperbolic growth, and then you look further back in time and notice that that's the trend on a longer timescale. Or alternatively, you realize that you can't just extrapolate AI progress because you can't keep doubling money invested every few months, and so you start looking at trends in money invested and build a simple model based on that, which you still describe as \"basically trend extrapolation\".)\n\nPlaces of disagreement:\n\nEliezer / Nate: There is an underlying driver of impact on the world which we might call \"general cognition\" or \"intelligence\" or \"consequentialism\" or \"the-thing-spotlighted-by-coherence-arguments\", and the zero-to-one transition for that underlying driver will go from \"not present at all\" to \"at or above human-level\", without something in between. Rats, dogs and chimps might be impressive in some ways but they do not have this underlying driver of impact; the zero-to-one transition happened between chimps and humans.\nPaul (might be closer to my views, idk): There isn't this underlying driver (or, depending on definitions, the zero-to-one transition happens well before human-level intelligence / impact). There are just more and more general heuristics, and correspondingly higher and higher impact. The case with evolution is unusually fast because the more general heuristics weren't actually that useful.\n\nTo the extent this is accurate, it doesn't seem like you really get to make a bet that resolves before the end times, since you agree on basically everything until the point at which Eliezer predicts that you get the zero-to-one transition on the underlying driver of impact. I think all else equal you probably predict that Eliezer has shorter timelines to the end times than Paul (and that's where you get things like \"Eliezer predicts you don't have factory-generating factories before the end times whereas Paul does\"). (Of course, all else is not equal.)\n\n\n\n\n[Bensinger][3:36]\n\nbut you know enough to have strong timing predictions, e.g. your bet with caplan\n\nEliezer said in Jan 2017 that the Caplan bet was kind of a joke: https://www.econlib.org/archives/2017/01/my_end-of-the-w.html/#comment-166919. 
Albeit \"I suppose one might draw conclusions from the fact that, when I was humorously imagining what sort of benefit I could get from exploiting this amazing phenomenon, my System 1 thought that having the world not end before 2030 seemed like the most I could reasonably ask.\"\n\n\n\n\n[Cotra][10:01]\n@RobBensinger sounds like the joke is that he thinks timelines are even shorter, which strengthens my claim about strong timing predictions?\nNow that we clarified up-thread that Eliezer's position is not that there was a giant algorithmic innovation in between chimps and humans, but rather that there was some innovation in between dinosaurs and some primate or bird that allowed the primate/bird lines to scale better, I'm now confused about why it still seems like Eliezer expects a major innovation in the future that leads to deep/general intelligence. If the evidence we have is that evolution had some innovation like this, why not think that the invention of neural nets in the 60s or the invention of backprop in the 80s or whatever was the corresponding innovation in AI development? Why put it in the future? (Unless I'm misunderstanding and Eliezer doesn't really place very high probability on \"AGI is bottlenecked by an insight that lets us figure out how to get the deep intelligence instead of the shallow one\"?)\nAlso if Eliezer would count transformers and so on as the kind of big innovation that would lead to AGI, then I'm not sure we disagree. I feel like that sort of thing is factored into the software progress trends used to extrapolate progress, so projecting those forward folds in expectations of future transformers\nBut it seems like Eliezer still expects one or a few innovations that are much larger in impact than the transformer?\nI'm also curious what Eliezer thinks of the claim \"extrapolating trends automatically folds in the world's inadequacy and stupidness because the past trend was built from everything happening in the world including the inadequacy\"\n\n\n\n\n[Yudkowsky][10:24]\nAjeya asked before, and I see I didn't answer:\n\nwhat about hardware/software R&D wages? will they get up to $20m/yr for good ppl?\n\nIf you mean the best/luckiest people, they're already there. 
If you mean that say Mike Blume starts getting paid $20m/yr base salary, then I cheerfully say that I'm willing to call that a narrower prediction of the Paulverse than of the Eliezerverse.\n\nwill someone train a 10T param model before end days?\n\nWell, of course, because now it's a headline figure and Goodhart's Law applies, and the Earlier point where this happens is where somebody trains a useless 10T param model using some much cheaper training method like MoE just to be the first to get the headline where they say they did that, if indeed that hasn't happened already.\nBut even apart from that, a 10T param model sure sounds lots like a steady stream of headlines we've already seen, even for cases where it was doing something useful like GPT-3, so I would not feel surprised by more headlines like this.\nI will, however, be alarmed (not surprised) relatively more by ability improvements, than headline figure improvements, because I am not very impressed by 10T param models per se.\nIn fact I will probably be more surprised by ability improvements after hearing the 10T figure, than my model of Paul will claim to be, because my model of Paul much more associates 10T figures with capability increases.\nThough I don't understand why this prediction success isn't more than counterbalanced by an implied sequence of earlier failures in which Paul's model permitted much more impressive things to happen from 1T Goodharted-headline models, that didn't actually happen, that I expected to not happen – eg the current regime with MoE headlines – so that by the time that an impressive 10T model comes along and Imaginary Paul says 'Ah yes I claim this for a success', Eliezer's reply is 'I don't understand the aspect of your theory which supposedly told you in advance that this 10T model would scale capabilities, but not all the previous 10T models or the current pointless-headline 20T models where that would be a prediction failure. From my perspective, people eventually scaled capabilities, and param-scaling techniques happened to be getting more powerful at the same time, and so of course the Earliest tech development to be impressive was one that included lots of params. It's not a coincidence, but it's also not a triumph for the param-driven theory per se, because the news stories look similar AFAICT in a timeline where it's 60% algorithms and 40% params.\"\n\n\n\n[Cotra][10:35]\nMoEs have very different scaling properties, for one thing they run on way fewer FLOP/s (which is just as if not more important than params, though we use params as a shorthand when we're talking about \"typical\" models which tend to have small constant FLOP/param ratios). If there's a model with a similar architecture to the ones we have scaling laws about now, then at 10T params I'd expect it to have the performance that the scaling laws would expect it to have\nMaybe something to bet about there. Would you say 10T param GPT-N would perform worse than the scaling law extraps would predict?\nIt seems like if we just look at a ton of scaling laws and see where they predict benchmark perf to get, then you could either bet on an upward or downward trend break and there could be a bet?\nAlso, if \"large models that aren't that impressive\" is a ding against Paul's view, why isn't GPT-3 being so much better than GPT-2 which in turn was better than GPT-1 with little fundamental architecture changes not a plus? It seems like you often cite GPT-3 as evidence for your view\nBut Paul (and Dario) at the time predicted it'd work. 
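(Roughly what "predicted it'd work" cashes out to: the pre-GPT-3 fits in Kaplan et al. 2020, https://arxiv.org/abs/2001.08361, have a simple power-law form. The sketch below uses constants quoted from memory, so treat the exact numbers as illustrative rather than authoritative.)

# Approximate Kaplan et al. (2020) loss-versus-parameters power law:
#   L(N) ~ (N_c / N) ** alpha_N
# with alpha_N ~ 0.076 and N_c ~ 8.8e13 non-embedding parameters
# (constants recalled from the paper; illustrative only).

def predicted_loss(n_params, alpha_n=0.076, n_c=8.8e13):
    """Predicted cross-entropy (nats/token) at a given parameter count."""
    return (n_c / n_params) ** alpha_n

for name, n_params in [("GPT-2 scale (~1.5e9 params)", 1.5e9),
                       ("GPT-3 scale (~1.75e11 params)", 1.75e11)]:
    print(f"{name}: predicted loss ~{predicted_loss(n_params):.2f} nats/token")

# The same curve, fit on much smaller models, evaluated at GPT-3's scale.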
The scaling laws work was before GPT-3 and prospectively predicted GPT-3's perf\n\n\n\n\n[Yudkowsky][10:55]\nI guess I should've mentioned that I knew MoEs ran on many fewer FLOP/s because others may not know I know that; it's an obvious charitable-Paul-interpretation but I feel like there's multiple of those and I don't know which, if any, Paul wants to claim as obvious-not-just-in-retrospect.\nLike, ok, sure people talk about model size. But maybe we really want to talk about gradient descent training ops; oh, wait, actually we meant to talk about gradient descent training ops with a penalty figure for ops that use lower precision, but nowhere near a 50% penalty for 16-bit instead of 32-bit; well, no, really the obvious metric is the one in which the value of a training op scales logarithmically with the total computational depth of the gradient descent (I'm making this up, it's not an actual standard anywhere), and that's why this alternate model that does a ton of gradient descent ops while making less use of the actual limiting resource of inter-GPU bandwidth is not as effective as you'd predict from the raw headline figure about gradient descent ops. And of course we don't want to count ops that are just recomputing a gradient checkpoint, ha ha, that would be silly.\nIt's not impossible to figure out these adjustments in advance.\nBut part of me also worries that – though this is more true of other EAs who will read this, than Paul or Carl, whose skills I do respect to some degree – that if you ran an MoE model with many fewer gradient descent ops, and it did do something impressive with 10T params that way, people would promptly do a happy dance and say \"yay scaling\" not \"oh wait huh that was not how I thought param scaling worked\". After all, somebody originally said \"10T\", so clearly they were right!\nAnd even with respect to Carl or Paul I worry about looking back and making \"obvious\" adjustments and thinking that a theory sure has been working out fine so far.\nTo be clear, I do consider GPT-3 as noticeable evidence for Dario's view and for Paul's view. The degree to which it worked well was more narrowly a prediction of those models than mine.\nThing about narrow predictions like that, if GPT-4 does not scale impressively, the theory loses significantly more Bayes points than it previously gained.\nSaying \"this previously observed trend is very strong and will surely continue\" will quite often let you pick up a few pennies in front of the steamroller, because not uncommonly, trends do continue, but then they stop and you lose more Bayes points than you previously gained.\nI do think of Carl and Paul as being better than this.\nBut I also think of the average EA reading them as being fooled by this.\n\n\n\n[Shulman][11:09]\nThe scaling laws experiments held architecture fixed, and that's the basis of the prediction that GPT-3 will be along the same line that held over previous OOM, most definitely not switch to MoE/Switch Transformer with way less resources.\n\n\n\n\n[Cotra: ]\n\n\n\n\n\n\n\n\n[Yudkowsky][11:10]\nYou can redraw your graphs afterwards so that a variant version of Moore's Law continued apace, but back in 2000, everyone sure was impressed with CPU GHz going up year after year and computers getting tangibly faster, and that version of Moore's Law sure did not continue. 
Maybe some people were savvier and redrew the graphs as soon as the physical obstacles became visible, but of course, other people had predicted the end of Moore's Law years and years before then. Maybe if superforecasters had been around in 2000 we would have found that they all sorted it out successfully, maybe not.\nSo, GPT-3 was $12m to train. In May 2022 it will be 2 years since GPT-3 came out. It feels to me like the Paulian view as I know how to operate it, says that GPT-3 has now got some revenue and exhibited applications like Codex, and was on a clear trend line of promise, so somebody ought to be willing to invest $120m in training GPT-4, and then we get 4x algorithmic speedups and cost improvements since then (iirc Paul said 2x/yr above? though I can't remember if that was his viewpoint or mine?) so GPT-4 should have 40x 'oomph' in some sense, and what that translates to in terms of intuitive impact ability, I don't know.\n\n\n\n[Shulman][11:18]\nThe OAI paper had 16 months (and is probably a bit low because in the earlier data people weren't optimizing for hardware efficiency much): https://openai.com/blog/ai-and-efficiency/\n\nso GPT-4 should have 40x 'oomph' in some sense, and what that translates to in terms of intuitive impact ability, I don't know.\n\nProjecting this: https://arxiv.org/abs/2001.08361\n\n\n\n\n[Yudkowsky][11:19]\n30x then. I would not be terribly surprised to find that results on benchmarks continue according to graph, and yet, GPT-4 somehow does not seem very much smarter than GPT-3 in conversation.\n\n\n\n[Shulman][11:20]\nThere are also graphs of the human impressions of sense against those benchmarks and they are well correlated. I expect that to continue too.\n\n\n\n\n[Cotra: ]\n\n\n\n\n\n\n\n\n[Yudkowsky][11:21]\nStuff coming uncorrelated that way, sounds like some of the history I lived through, where people managed to make the graphs of Moore's Law seem to look steady by rejiggering the axes, and yet, between 1990 and 2000 home computers got a whole lot faster, and between 2010 and 2020 they did not.\nThis is obviously more likely (from my perspective) to break down anywhere between GPT-3 and GPT-6, than between GPT-3 and GPT-4.\nIs this also part of the Carl/Paul worldview? Because I implicitly parse a lot of the arguments as assuming a necessary premise which says, \"No, this continues on until doomsday and I know it Kurzweil-style.\"\n\n\n\n[Shulman][11:23]\nYeah I expect trend changes to happen, more as you go further out, and especially more when you see other things running into barriers or contradictions. 
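(Spelling out the "40x"/"30x" arithmetic from the exchange above, and what a multiplier like that means if pushed through the compute scaling law Carl links: a rough sketch, with the dollar figures and the 16-month doubling taken from the conversation and the exponent quoted from memory.)

# Back-of-the-envelope "effective compute" multiplier for a hypothetical
# GPT-4 trained about two years after GPT-3, using the figures above.

gpt3_training_cost = 12e6       # dollars, figure quoted above
assumed_gpt4_budget = 120e6     # dollars, the assumed 10x spend above
years_elapsed = 2.0
algo_doubling_years = 16 / 12   # ~16-month algorithmic-efficiency doubling

spend_multiplier = assumed_gpt4_budget / gpt3_training_cost       # 10x
algo_multiplier = 2 ** (years_elapsed / algo_doubling_years)      # ~2.8x
effective_multiplier = spend_multiplier * algo_multiplier         # ~28x, the "30x" above

# Pushed through a Kaplan-style compute law, L(C) ~ C ** (-alpha_C) with
# alpha_C ~ 0.05 (constant recalled from the paper, illustrative), a ~30x
# jump in effective compute buys only a modest fractional loss reduction.
alpha_c = 0.05
loss_ratio = effective_multiplier ** (-alpha_c)                   # ~0.85

print(f"effective compute multiplier: ~{effective_multiplier:.0f}x")
print(f"predicted loss ratio, GPT-4 over GPT-3: ~{loss_ratio:.2f}")

The same ~30x shows up either as a large headline multiplier or as a roughly 15% loss reduction; the disagreement above is partly over which of those numbers tracks "intuitive impact".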
Re language models there is some of that coming up with different scaling laws colliding when the models get good enough to extract almost all the info per character (unless you reconfigure to use more info-dense data).\n\n\n\n\n[Yudkowsky][11:23]\nWhere \"this\" is the Yudkowskian \"the graphs are fragile and just break down one day, and their meanings are even more fragile and break down earlier\".\n\n\n\n[Shulman][11:25]\nScaling laws working over 8 or 9 OOM makes me pretty confident of the next couple, not confident about 10 further OOM out.\n\n\n\n \n\nThe post Shulman and Yudkowsky on AI progress appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Shulman and Yudkowsky on AI progress", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=3", "id": "2644d6cf9cd7c15921ee355bf85a2788"} {"text": "Biology-Inspired AGI Timelines: The Trick That Never Works\n\n– 1988 –\nHans Moravec:  Behold my book Mind Children.  Within, I project that, in 2010 or thereabouts, we shall achieve strong AI.  I am not calling it \"Artificial General Intelligence\" because this term will not be coined for another 15 years or so.\nEliezer (who is not actually on the record as saying this, because the real Eliezer is, in this scenario, 8 years old; this version of Eliezer has all the meta-heuristics of Eliezer from 2021, but none of that Eliezer's anachronistic knowledge):  Really?  That sounds like a very difficult prediction to make correctly, since it is about the future, which is famously hard to predict.\nImaginary Moravec:  Sounds like a fully general counterargument to me.\nEliezer:  Well, it is, indeed, a fully general counterargument against futurism.  Successfully predicting the unimaginably far future – that is, more than 2 or 3 years out, or sometimes less – is something that human beings seem to be quite bad at, by and large.\nMoravec:  I predict that, 4 years from this day, in 1992, the Sun will rise in the east.\nEliezer: Okay, let me qualify that.  Humans seem to be quite bad at predicting the future whenever we need to predict anything at all new and unfamiliar, rather than the Sun continuing to rise every morning until it finally gets eaten.  I'm not saying it's impossible to ever validly predict something novel!  Why, even if that was impossible, how could I know it for sure?  By extrapolating from my own personal inability to make predictions like that?  Maybe I'm just bad at it myself.  But any time somebody claims that some particular novel aspect of the far future is predictable, they justly have a significant burden of prior skepticism to overcome.\nMore broadly, we should not expect a good futurist to give us a generally good picture of the future.  We should expect a great futurist to single out a few rare narrow aspects of the future which are, somehow, exceptions to the usual rule about the future not being very predictable.\nI do agree with you, for example, that we shall at some point see Artificial General Intelligence.  This seems like a rare predictable fact about the future, even though it is about a novel thing which has not happened before: we keep trying to crack this problem, we make progress albeit slowly, the problem must be solvable in principle because human brains solve it, eventually it will be solved; this is not a logical necessity, but it sure seems like the way to bet.  
\"AGI eventually\" is predictable in a way that it is not predictable that, e.g., the nation of Japan, presently upon the rise, will achieve economic dominance over the next decades – to name something else that present-day storytellers of 1988 are talking about.\nBut timing the novel development correctly?  That is almost never done, not until things are 2 years out, and often not even then.  Nuclear weapons were called, but not nuclear weapons in 1945; heavier-than-air flight was called, but not flight in 1903.  In both cases, people said two years earlier that it wouldn't be done for 50 years – or said, decades too early, that it'd be done shortly.  There's a difference between worrying that we may eventually get a serious global pandemic, worrying that eventually a lab accident may lead to a global pandemic, and forecasting that a global pandemic will start in November of 2019.\n\nMoravec:  You should read my book, my friend, into which I have put much effort.  In particular – though it may sound impossible to forecast, to the likes of yourself – I have carefully examined a graph of computing power in single chips and the most powerful supercomputers over time.  This graph looks surprisingly regular!  Now, of course not all trends can continue forever; but I have considered the arguments that Moore's Law will break down, and found them unconvincing.  My book spends several chapters discussing the particular reasons and technologies by which we might expect this graph to not break down, and continue, such that humanity will have, by 2010 or so, supercomputers which can perform 10 trillion operations per second.*\nOh, and also my book spends a chapter discussing the retina, the part of the brain whose computations we understand in the most detail, in order to estimate how much computing power the human brain is using, arriving at a figure of 10^13 ops/sec.  This neuroscience and computer science may be a bit hard for the layperson to follow, but I assure you that I am in fact an experienced hands-on practitioner in robotics and computer vision.\nSo, as you can see, we should first get strong AI somewhere around 2010.  I may be off by an order of magnitude in one figure or another; but even if I've made two errors in the same direction, that only shifts the estimate by 7 years or so.\n(*)  Moravec just about nailed this part; the actual year was 2008.\nEliezer:  I sure would be amused if we did in fact get strong AI somewhere around 2010, which, for all I know at this point in this hypothetical conversation, could totally happen!  Reversed stupidity is not intelligence, after all, and just because that is a completely broken justification for predicting 2010 doesn't mean that it cannot happen that way.\nMoravec:  Really now.  Would you care to enlighten me as to how I reasoned so wrongly?\nEliezer:  Among the reasons why the Future is so hard to predict, in general, is that the sort of answers we want tend to be the products of lines of causality with multiple steps and multiple inputs.  Even when we can guess a single fact that plays some role in producing the Future – which is not of itself all that rare – usually the answer the storyteller wants depends on more facts than that single fact.  Our ignorance of any one of those other facts can be enough to torpedo our whole line of reasoning – in practice, not just as a matter of possibilities.  
You could say that the art of exceptions to Futurism being impossible, consists in finding those rare things that you can predict despite being almost entirely ignorant of most concrete inputs into the concrete scenario.  Like predicting that AGI will happen at some point, despite not knowing the design for it, or who will make it, or how.\nMy own contribution to the Moore's Law literature consists of Moore's Law of Mad Science:  \"Every 18 months, the minimum IQ required to destroy the Earth drops by 1 point.\"  Even if this serious-joke was an absolutely true law, and aliens told us it was absolutely true, we'd still have no ability whatsoever to predict thereby when the Earth would be destroyed, because we'd have no idea what that minimum IQ was right now or at any future time.  We would know that in general the Earth had a serious problem that needed to be addressed, because we'd know in general that destroying the Earth kept on getting easier every year; but we would not be able to time when that would become an imminent emergency, until we'd seen enough specifics that the crisis was already upon us.\nIn the case of your prediction about strong AI in 2010, I might put it as follows:  The timing of AGI could be seen as a product of three factors, one of which you can try to extrapolate from existing graphs, and two of which you don't know at all.  Ignorance of any one of them is enough to invalidate the whole prediction.\nThese three factors are:\n\nThe availability of computing power over time, which may be quantified, and appears steady when graphed;\nThe rate of progress in knowledge of cognitive science and algorithms over time, which is much harder to quantify;\nA function that is a latent background parameter, for the amount of computing power required to create AGI as a function of any particular level of knowledge about cognition; and about this we know almost nothing.\n\nOr to rephrase:  Depending on how much you and your civilization know about AI-making – how much you know about cognition and computer science – it will take you a variable amount of computing power to build an AI.  If you really knew what you were doing, for example, I confidently predict that you could build a mind at least as powerful as a human mind, while using fewer floating-point operations per second than a human brain is making useful use of –\nChris Humbali:  Wait, did you just say \"confidently\"?  How could you possibly know that with confidence?  How can you criticize Moravec for being too confident, and then, in the next second, turn around and be confident of something yourself?  Doesn't that make you a massive hypocrite?\nEliezer:  Um, who are you again?\nHumbali:  I'm the cousin of Pat Modesto from your previous dialogue on Hero Licensing!  Pat isn't here in person because \"Modesto\" looks unfortunately like \"Moravec\" on a computer screen.  And also their first name looks a bit like \"Paul\" who is not meant to be referenced either.  
So today I shall be your true standard-bearer for good calibration, intellectual humility, the outside view, and reference class forecasting –\nEliezer:  Two of these things are not like the other two, in my opinion; and Humbali and Modesto do not understand how to operate any of the four correctly, in my opinion; but anybody who's read \"Hero Licensing\" should already know I believe that.\nHumbali:  – and I don't see how Eliezer can possibly be so confident, after all his humble talk of the difficulty of futurism, that it's possible to build a mind 'as powerful as' a human mind using 'less computing power' than a human brain.\nEliezer:  It's overdetermined by multiple lines of inference.  We might first note, for example, that the human brain runs very slowly in a serial sense and tries to make up for that with massive parallelism.  It's an obvious truth of computer science that while you can use 1000 serial operations per second to emulate 1000 parallel operations per second, the reverse is not in general true.\nTo put it another way: if you had to build a spreadsheet or a word processor on a computer running at 100Hz, you might also need a billion processing cores and massive parallelism in order to do enough cache lookups to get anything done; that wouldn't mean the computational labor you were performing was intrinsically that expensive.  Since modern chips are massively serially faster than the neurons in a brain, and the direction of conversion is asymmetrical, we should expect that there are tasks which are immensely expensive to perform in a massively parallel neural setup, which are much cheaper to do with serial processing steps, and the reverse is not symmetrically true.\nA sufficiently adept builder can build general intelligence more cheaply in total operations per second, if they're allowed to line up a billion operations one after another per second, versus lining up only 100 operations one after another.  I don't bother to qualify this with \"very probably\" or \"almost certainly\"; it is the sort of proposition that a clear thinker should simply accept as obvious and move on.\nHumbali:  And is it certain that neurons can perform only 100 serial steps one after another, then?  As you say, ignorance about one fact can obviate knowledge of any number of others.\nEliezer:  A typical neuron firing as fast as possible can do maybe 200 spikes per second, a few rare neuron types used by eg bats to echolocate can do 1000 spikes per second, and the vast majority of neurons are not firing that fast at any given time.  The usual and proverbial rule in neuroscience – the sort of academically respectable belief I'd expect you to respect even more than I do – is called \"the 100-step rule\", that any task a human brain (or mammalian brain) can do on perceptual timescales, must be doable with no more than 100 serial steps of computation – no more than 100 things that get computed one after another.  Or even less if the computation is running off spiking frequencies instead of individual spikes.\nMoravec:  Yes, considerations like that are part of why I'd defend my estimate of 10^13 ops/sec for a human brain as being reasonable – more reasonable than somebody might think if they were, say, counting all the synapses and multiplying by the maximum number of spikes per second in any neuron.  
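(Putting rough numbers on those two estimating styles. Every figure below is an order-of-magnitude assumption, not a measurement, and none of it comes from the book; the point is just how far apart the methods land.)

import math

# Two ways of turning neuroanatomy into an "ops/sec" figure for the brain.
synapses = 1e14              # assumed ballpark synapse count
max_spike_rate = 1000        # spikes/sec, the fastest (rare) neuron types
avg_spike_rate = 10          # spikes/sec, a generous average activity level

naive_max = synapses * max_spike_rate   # every synapse at max rate, ~1e17
naive_avg = synapses * avg_spike_rate   # every synapse at average rate, ~1e15
retina_scaled = 1e13                    # the retina-based figure quoted above

for label, ops in [("synapses x max spike rate", naive_max),
                   ("synapses x average spike rate", naive_avg),
                   ("retina-scaling estimate", retina_scaled)]:
    print(f"{label}: ~1e{math.log10(ops):.0f} ops/sec")

# The methods disagree by two to four orders of magnitude, and on the
# Moore's-Law clock being used here each order of magnitude is several
# years of timeline, which is why the choice of method matters.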
If you actually look at what the retina is doing, and how it's computing that, it doesn't look like it's doing one floating-point operation per activation spike per synapse.\nEliezer:  There's a similar asymmetry between precise computational operations having a vastly easier time emulating noisy or imprecise computational operations, compared to the reverse – there is no doubt a way to use neurons to compute, say, exact 16-bit integer addition, which is at least more efficient than a human trying to add up 16986+11398 in their heads, but you'd still need more synapses to do that than transistors, because the synapses are noisier and the transistors can just do it precisely.  This is harder to visualize and get a grasp on than the parallel-serial difference, but that doesn't make it unimportant.\nWhich brings me to the second line of very obvious-seeming reasoning that converges upon the same conclusion – that it is in principle possible to build an AGI much more computationally efficient than a human brain – namely that biology is simply not that efficient, and especially when it comes to huge complicated things that it has started doing relatively recently.\nATP synthase may be close to 100% thermodynamically efficient, but ATP synthase is literally over 1.5 billion years old and a core bottleneck on all biological metabolism.  Brains have to pump thousands of ions in and out of each stretch of axon and dendrite, in order to restore their ability to fire another fast neural spike.  The result is that the brain's computation is something like half a million times less efficient than the thermodynamic limit for its temperature – so around two millionths as efficient as ATP synthase.  And neurons are a hell of a lot older than the biological software for general intelligence!\nThe software for a human brain is not going to be 100% efficient compared to the theoretical maximum, nor 10% efficient, nor 1% efficient, even before taking into account the whole thing with parallelism vs. serialism, precision vs. imprecision, or similarly clear low-level differences.\nHumbali:  Ah!  But allow me to offer a consideration here that, I would wager, you've never thought of before yourself – namely – what if you're wrong?  Ah, not so confident now, are you?\nEliezer:  One observes, over one's cognitive life as a human, which sorts of what-ifs are useful to contemplate, and where it is wiser to spend one's limited resources planning against the alternative that one might be wrong; and I have oft observed that lots of people don't… quite seem to understand how to use 'what if' all that well?  They'll be like, \"Well, what if UFOs are aliens, and the aliens are partially hiding from us but not perfectly hiding from us, because they'll seem higher-status if they make themselves observable but never directly interact with us?\"\nI can refute individual what-ifs like that with specific counterarguments, but I'm not sure how to convey the central generator behind how I know that I ought to refute them.  I am not sure how I can get people to reject these ideas for themselves, instead of them passively waiting for me to come around with a specific counterargument.  My having to counterargue things specifically now seems like a road that never seems to end, and I am not as young as I once was, nor am I encouraged by how much progress I seem to be making.  
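(A back-of-the-envelope version of the "half a million times less efficient than the thermodynamic limit" figure from a few paragraphs up. Every input below is an assumption chosen to show where a number of that size can come from, not a measurement.)

import math

# Landauer limit: minimum energy to erase one bit at temperature T.
k_B = 1.380649e-23                 # J/K
T_body = 310.0                     # K, roughly body temperature
landauer_j_per_bit = k_B * T_body * math.log(2)        # ~3e-21 J

brain_watts = 20.0                 # assumed brain power draw
assumed_ops_per_sec = 1e16         # assumed count of low-level events/sec

limit_ops_per_sec = brain_watts / landauer_j_per_bit   # ~7e21 erasures/sec
inefficiency = limit_ops_per_sec / assumed_ops_per_sec # ~7e5

print(f"Landauer limit at 310 K: ~{landauer_j_per_bit:.1e} J per bit")
print(f"20 W spent at that limit: ~{limit_ops_per_sec:.1e} erasures/sec")
print(f"implied inefficiency factor: ~{inefficiency:.0e}")

# With these assumptions the factor lands near half a million; the assumed
# operation count is doing most of the work, which is presumably why the
# figure in the text is hedged as "something like".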
I refute one wacky idea with a specific counterargument, and somebody else comes along and presents a new wacky idea on almost exactly the same theme.\nI know it's probably not going to work, if I try to say things like this, but I'll try to say them anyways.  When you are going around saying 'what-if', there is a very great difference between your map of reality, and the territory of reality, which is extremely narrow and stable.  Drop your phone, gravity pulls the phone downward, it falls.  What if there are aliens and they make the phone rise into the air instead, maybe because they'll be especially amused at violating the rule after you just tried to use it as an example of where you could be confident?  Imagine the aliens watching you, imagine their amusement, contemplate how fragile human thinking is and how little you can ever be assured of anything and ought not to be too confident.  Then drop the phone and watch it fall.  You've now learned something about how reality itself isn't made of what-ifs and reminding oneself to be humble; reality runs on rails stronger than your mind does.\nContemplating this doesn't mean you know the rails, of course, which is why it's so much harder to predict the Future than the past.  But if you see that your thoughts are still wildly flailing around what-ifs, it means that they've failed to gel, in some sense, they are not yet bound to reality, because reality has no binding receptors for what-iffery.\nThe correct thing to do is not to act on your what-ifs that you can't figure out how to refute, but to go on looking for a model which makes narrower predictions than that.  If that search fails, forge a model which puts some more numerical distribution on your highly entropic uncertainty, instead of diverting into specific what-ifs.  And in the latter case, understand that this probability distribution reflects your ignorance and subjective state of mind, rather than your knowledge of an objective frequency; so that somebody else is allowed to be less ignorant without you shouting \"Too confident!\" at them.  Reality runs on rails as strong as math; sometimes other people will achieve, before you do, the feat of having their own thoughts run through more concentrated rivers of probability, in some domain.\nNow, when we are trying to concentrate our thoughts into deeper, narrower rivers that run closer to reality's rails, there is of course the legendary hazard of concentrating our thoughts into the wrong narrow channels that exclude reality.  And the great legendary sign of this condition, of course, is the counterexample from Reality that falsifies our model!  
But you should not in general criticize somebody for trying to concentrate their probability into narrower rivers than yours, for this is the appearance of the great general project of trying to get to grips with Reality, that runs on true rails that are narrower still.\nIf you have concentrated your probability into different narrow channels than somebody else's, then, of course, you have a more interesting dispute; and you should engage in that legendary activity of trying to find some accessible experimental test on which your nonoverlapping models make different predictions.\nHumbali:  I do not understand the import of all this vaguely mystical talk.\nEliezer:  I'm trying to explain why, when I say that I'm very confident it's possible to build a human-equivalent mind using less computing power than biology has managed to use effectively, and you say, \"How can you be so confident, what if you are wrong,\" it is not unreasonable for me to reply, \"Well, kid, this doesn't seem like one of those places where it's particularly important to worry about far-flung ways I could be wrong.\"  Anyone who aspires to learn, learns over a lifetime which sorts of guesses are more likely to go oh-no-wrong in real life, and which sorts of guesses are likely to just work.  Less-learned minds will have minds full of what-ifs they can't refute in more places than more-learned minds; and even if you cannot see how to refute all your what-ifs yourself, it is possible that a more-learned mind knows why they are improbable.  For one must distinguish possibility from probability.\nIt is imaginable or conceivable that human brains have such refined algorithms that they are operating at the absolute limits of computational efficiency, or within 10% of it.  But if you've spent enough time noticing where Reality usually exercises its sovereign right to yell \"Gotcha!\" at you, learning which of your assumptions are the kind to blow up in your face and invalidate your final conclusion, you can guess that \"Ah, but what if the brain is nearly 100% computationally efficient?\" is the sort of what-if that is not much worth contemplating because it is not actually going to be true in real life.  Reality is going to confound you in some other way than that.\nI mean, maybe you haven't read enough neuroscience and evolutionary biology that you can see from your own knowledge that the proposition sounds massively implausible and ridiculous.  But it should hardly seem unlikely that somebody else, more learned in biology, might be justified in having more confidence than you.  Phones don't fall up.  Reality really is very stable and orderly in a lot of ways, even in places where you yourself are ignorant of that order.\nBut if \"What if aliens are making themselves visible in flying saucers because they want high status and they'll have higher status if they're occasionally observable but never deign to talk with us?\" sounds to you like it's totally plausible, and you don't see how someone can be so confident that it's not true – because oh no what if you're wrong and you haven't seen the aliens so how can you know what they're not thinking – then I'm not sure how to lead you into the place where you can dismiss that thought with confidence.  It may require a kind of life experience that I don't know how to give people, at all, let alone by having them passively read paragraphs of text that I write; a learned, perceptual sense of which what-ifs have any force behind them.  
I mean, I can refute that specific scenario, I can put that learned sense into words; but I'm not sure that does me any good unless you learn how to refute it yourself.\nHumbali:  Can we leave aside all that meta stuff and get back to the object level?\nEliezer:  This indeed is often wise.\nHumbali:  Then here's one way that the minimum computational requirements for general intelligence could be higher than Moravec's argument for the human brain.  Since, after, all, we only have one existence proof that general intelligence is possible at all, namely the human brain.  Perhaps there's no way to get general intelligence in a computer except by simulating the brain neurotransmitter-by-neurotransmitter.  In that case you'd need a lot more computing operations per second than you'd get by calculating the number of potential spikes flowing around the brain!  What if it's true?  How can you know?\n(Modern person:  This seems like an obvious straw argument?  I mean, would anybody, even at an earlier historical point, actually make an argument like –\nMoravec and Eliezer:  YES THEY WOULD.)\nEliezer:  I can imagine that if we were trying specifically to upload a human that there'd be no easy and simple and obvious way to run the resulting simulation and get a good answer, without simulating neurotransmitter flows in extra detail.\nTo imagine that every one of these simulated flows is being usefully used in general intelligence and there is no way to simplify the mind design to use fewer computations…  I suppose I could try to refute that specifically, but it seems to me that this is a road which has no end unless I can convey the generator of my refutations.  Your what-iffery is flung far enough that, if I cannot leave even that much rejection as an exercise for the reader to do on their own without my holding their hand, the reader has little enough hope of following the rest; let them depart now, in indignation shared with you, and save themselves further outrage.\nI mean, it will obviously be less obvious to the reader because they will know less than I do about this exact domain, it will justly take more work for the reader to specifically refute you than it takes me to refute you.  But I think the reader needs to be able to do that at all, in this example, to follow the more difficult arguments later.\nImaginary Moravec:  I don't think it changes my conclusions by an order of magnitude, but some people would worry that, for example, changes of protein expression inside a neuron in order to implement changes of long-term potentiation, are also important to intelligence, and could be a big deal in the brain's real, effectively-used computational costs.  I'm curious if you'd dismiss that as well, the same way you dismiss the probability that you'd have to simulate every neurotransmitter molecule?\nEliezer:  Oh, of course not.  Long-term potentiation suddenly turning out to be a big deal you overlooked, compared to the depolarization impulses spiking around, is very much the sort of thing where Reality sometimes jumps out and yells \"Gotcha!\" at you.\nHumbali:  How can you tell the difference?\nEliezer:  Experience with Reality yelling \"Gotcha!\" at myself and historical others.\nHumbali:  They seem like equally plausible speculations to me!\nEliezer:  Really?  
\"What if long-term potentiation is a big deal and computationally important\" sounds just as plausible to you as \"What if the brain is already close to the wall of making the most efficient possible use of computation to implement general intelligence, and every neurotransmitter molecule matters\"?\nHumbali:  Yes!  They're both what-ifs we can't know are false and shouldn't be overconfident about denying!\nEliezer:  My tiny feeble mortal mind is far away from reality and only bound to it by the loosest of correlating interactions, but I'm not that unbound from reality.\nMoravec:  I would guess that in real life, long-term potentiation is sufficiently slow and local that what goes on inside the cell body of a neuron over minutes or hours is not as big of a computational deal as thousands of times that many spikes flashing around the brain in milliseconds or seconds.  That's why I didn't make a big deal of it in my own estimate.\nEliezer:  Sure.  But it is much more the sort of thing where you wake up to a reality-authored science headline saying \"Gotcha!  There were tiny DNA-activation interactions going on in there at high speed, and they were actually pretty expensive and important!\"  I'm not saying this exact thing is very probable, just that it wouldn't be out-of-character for reality to say something like that to me, the way it would be really genuinely bizarre if Reality was, like, \"Gotcha!  The brain is as computationally efficient of a generally intelligent engine as any algorithm can be!\"\nMoravec:  I think we're in agreement about that part, or we would've been, if we'd actually had this conversation in 1988.  I mean, I am a competent research roboticist and it is difficult to become one if you are completely unglued from reality.\nEliezer:  Then what's with the 2010 prediction for strong AI, and the massive non-sequitur leap from \"the human brain is somewhere around 10 trillion ops/sec\" to \"if we build a 10 trillion ops/sec supercomputer, we'll get strong AI\"?\nMoravec:  Because while it's the kind of Fermi estimate that can be off by an order of magnitude in practice, it doesn't really seem like it should be, I don't know, off by three orders of magnitude?  And even three orders of magnitude is just 10 years of Moore's Law.  2020 for strong AI is also a bold and important prediction.\nEliezer:  And the year 2000 for strong AI even more so.\nMoravec:  Heh!  That's not usually the direction in which people argue with me.\nEliezer:  There's an important distinction between the direction in which people usually argue with you, and the direction from which Reality is allowed to yell \"Gotcha!\"  I wish my future self had kept this more in mind, when arguing with Robin Hanson about how well AI architectures were liable to generalize and scale without a ton of domain-specific algorithmic tinkering for every field of knowledge.  I mean, in principle what I was arguing for was various lower bounds on performance, but I sure could have emphasized more loudly that those were lower bounds – well, I did emphasize the lower-bound part, but – from the way I felt when AlphaGo and Alpha Zero and GPT-2 and GPT-3 showed up, I think I must've sorta forgot that myself.\nMoravec:  Anyways, if we say that I might be up to three orders of magnitude off and phrase it as 2000-2020, do you agree with my prediction then?\nEliezer:  No, I think you're just… arguing about the wrong facts, in a way that seems to be unglued from most tracks Reality might follow so far as I currently know?  
On my view, creating AGI is strongly dependent on how much knowledge you have about how to do it, in a way which almost entirely obviates the relevance of arguments from human biology?\nLike, human biology tells us a single not-very-useful data point about how much computing power evolutionary biology needs in order to build a general intelligence, using very alien methods to our own.  Then, very separately, there's the constantly changing level of how much cognitive science, neuroscience, and computer science our own civilization knows.  We don't know how much computing power is required for AGI for any level on that constantly changing graph, and biology doesn't tell us.  All we know is that the hardware requirements for AGI must be dropping by the year, because the knowledge of how to create AI is something that only increases over time.\nAt some point the moving lines for \"decreasing hardware required\" and \"increasing hardware available\" will cross over, which lets us predict that AGI gets built at some point.  But we don't know how to graph two key functions needed to predict that date.  You would seem to be committing the classic fallacy of searching for your keys under the streetlight where the visibility is better.  You know how to estimate how many floating-point operations per second the retina could effectively be using, but this is not the number you need to predict the outcome you want to predict.  You need a graph of human knowledge of computer science over time, and then a graph of how much computer science requires how much hardware to build AI, and neither of these graphs are available.\nIt doesn't matter how many chapters your book spends considering the continuation of Moore's Law or computation in the retina, and I'm sorry if it seems rude of me in some sense to just dismiss the relevance of all the hard work you put into arguing it.  But you're arguing the wrong facts to get to the conclusion, so all your hard work is for naught.\nHumbali:  Now it seems to me that I must chide you for being too dismissive of Moravec's argument.  Fine, yes, Moravec has not established with logical certainty that strong AI must arrive at the point where top supercomputers match the human brain's 10 trillion operations per second.  But has he not established a reference class, the sort of base rate that good and virtuous superforecasters, unlike yourself, go looking for when they want to anchor their estimate about some future outcome?  Has he not, indeed, established the sort of argument which says that if top supercomputers can do only ten million operations per second, we're not very likely to get AGI earlier than that, and if top supercomputers can do ten quintillion operations per second*, we're unlikely not to already have AGI?\n(*) In 2021 terms, 10 TPU v4 pods.\nEliezer:  With ranges that wide, it'd be more likely and less amusing to hit somewhere inside it by coincidence.  But I still think this whole line of thoughts is just off-base, and that you, Humbali, have not truly grasped the concept of a virtuous superforecaster or how they go looking for reference classes and base rates.\nHumbali:  I frankly think you're just being unvirtuous.  Maybe you have some special model of AGI which claims that it'll arrive in a different year or be arrived at by some very different pathway.  
But is not Moravec's estimate a sort of base rate which, to the extent you are properly and virtuously uncertain of your own models, you ought to regress in your own probability distributions over AI timelines?  As you become more uncertain about the exact amounts of knowledge required and what knowledge we'll have when, shouldn't you have an uncertain distribution about AGI arrival times that centers around Moravec's base-rate prediction of 2010?\nFor you to reject this anchor seems to reveal a grave lack of humility, since you must be very certain of whatever alternate estimation methods you are using in order to throw away this base-rate entirely.\nEliezer:  Like I said, I think you've just failed to grasp the true way of a virtuous superforecaster.  Thinking a lot about Moravec's so-called 'base rate' is just making you, in some sense, stupider; you need to cast your thoughts loose from there and try to navigate a wilder and less tamed space of possibilities, until they begin to gel and coalesce into narrower streams of probability.  Which, for AGI, they probably won't do until we're quite close to AGI, and start to guess correctly how AGI will get built; for it is easier to predict an eventual global pandemic than to say it will start in November of 2019.  Even in October of 2019 this cannot be done.\nHumbali:  Then all this uncertainty must somehow be quantified, if you are to be a virtuous Bayesian; and again, for lack of anything better, the resulting distribution should center on Moravec's base-rate estimate of 2010.\nEliezer:  No, that calculation is just basically not relevant here; and thinking about it is making you stupider, as your mind flails in the trackless wilderness grasping onto unanchored air.  Things must be 'sufficiently similar' to each other, in some sense, for us to get a base rate on one thing by looking at another thing.  Humans making an AGI is just too dissimilar to evolutionary biology making a human brain for us to anchor 'how much computing power at the time it happens' from one to the other.  It's not the droid we're looking for; and your attempt to build an inescapable epistemological trap about virtuously calling that a 'base rate' is not the Way.\nImaginary Moravec:  If I can step back in here, I don't think my calculation is zero evidence?  What we know from evolutionary biology is that a blind alien god with zero foresight accidentally mutated a chimp brain into a general intelligence.  I don't want to knock biology's work too much, there's some impressive stuff in the retina, and the retina is just the part of the brain which is in some sense easiest to understand.  But surely there's a very reasonable argument that 10 trillion ops/sec is about the amount of computation that evolutionary biology needed; and since evolution is stupid, when we ourselves have that much computation, it shouldn't be that hard to figure out how to configure it.\nEliezer:  If that was true, the same theory predicts that our current supercomputers should be doing a better job of matching the agility and vision of spiders.  When at some point there's enough hardware that we figure out how to put it together into AGI, we could be doing it with less hardware than a human; we could be doing it with more; and we can't even say that these two possibilities are around equally probable such that our probability distribution should have its median around 2010.  
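(An aside to make the 'moving lines' picture Eliezer sketched a little earlier concrete: the toy model below is purely illustrative.  Every parameter in it is invented, which is exactly the point being pressed – the real 'knowledge' curve is the thing nobody knows how to draw.)

```python
# Toy model of the crossover argument: hardware available rises over time,
# hardware required falls as knowledge accumulates, and 'AGI' is predicted
# when the lines cross.  All numbers here are made up for illustration.

def crossover_year(req_ops, req_halving_years, avail_ops, avail_doubling_years,
                   start_year=1990):
    """Year when available ops/sec first exceeds required ops/sec."""
    year = start_year
    while avail_ops < req_ops and year < 2200:
        year += 1
        avail_ops *= 2 ** (1 / avail_doubling_years)  # hardware line rising
        req_ops *= 0.5 ** (1 / req_halving_years)     # requirement falling as knowledge grows
    return year

# Same hardware trend, four different guesses about the unknowable knowledge curve:
for req, halving in [(1e16, 5), (1e18, 10), (1e20, 3), (1e22, 15)]:
    print(f'req={req:.0e}, halves every {halving}y -> {crossover_year(req, halving, 1e9, 2)}')
```

Reasonable-sounding guesses for the two unknown curves move the crossing year by four decades or more, which is the sense in which the retina calculation, however careful, is not the number needed to time the outcome.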
Your number is so bad and obtained by such bad means that we should just throw it out of our thinking and start over.\nHumbali:  This last line of reasoning seems to me to be particularly ludicrous, like you're just throwing away the only base rate we have in favor of a confident assertion of our somehow being more uncertain than that.\nEliezer:  Yeah, well, sorry to put it bluntly, Humbali, but you have not yet figured out how to turn your own computing power into intelligence.\n – 1999 –\nLuke Muehlhauser reading a previous draft of this (only sounding much more serious than this, because Luke Muehlhauser):  You know, there was this certain teenaged futurist who made some of his own predictions about AI timelines –\nEliezer:  I'd really rather not argue from that as a case in point.  I dislike people who screw up something themselves, and then argue like nobody else could possibly be more competent than they were.  I dislike even more people who change their mind about something when they turn 22, and then, for the rest of their lives, go around acting like they are now Very Mature Serious Adults who believe the thing that a Very Mature Serious Adult believes, so if you disagree with them about that thing they started believing at age 22, you must just need to wait to grow out of your extended childhood.\nLuke Muehlhauser (still being paraphrased):  It seems like it ought to be acknowledged somehow.\nEliezer:  That's fair, yeah, I can see how someone might think it was relevant.  I just dislike how it potentially creates the appearance of trying to slyly sneak in an Argument From Reckless Youth that I regard as not only invalid but also incredibly distasteful.  You don't get to screw up yourself and then use that as an argument about how nobody else can do better.\nHumbali:  Uh, what's the actual drama being subtweeted here?\nEliezer:  A certain teenaged futurist, who, for example, said in 1999, \"The most realistic estimate for a seed AI transcendence is 2020; nanowar, before 2015.\"\nHumbali:  This young man must surely be possessed of some very deep character defect, which I worry will prove to be of the sort that people almost never truly outgrow except in the rarest cases.  Why, he's not even putting a probability distribution over his mad soothsaying – how blatantly absurd can a person get?\nEliezer:  Dear child ignorant of history, your complaint is far too anachronistic.  This is 1999 we're talking about here; almost nobody is putting probability distributions on things, that element of your later subculture has not yet been introduced.  Eliezer-2002 hasn't been sent a copy of \"Judgment Under Uncertainty\" by Emil Gilliam.  Eliezer-2006 hasn't put his draft online for \"Cognitive biases potentially affecting judgment of global risks\".  The Sequences won't start until another year after that.  How would the forerunners of effective altruism in 1999 know about putting probability distributions on forecasts?  I haven't told them to do that yet!  
We can give historical personages credit when they seem to somehow end up doing better than their surroundings would suggest; it is unreasonable to hold them to modern standards, or expect them to have finished refining those modern standards by the age of nineteen.\nThough there's also a more subtle lesson you could learn, about how this young man turned out to still have a promising future ahead of him; which he retained at least in part by having a deliberate contempt for pretended dignity, allowing him to be plainly and simply wrong in a way that he noticed, without his having twisted himself up to avoid a prospect of embarrassment.  Instead of, for example, his evading such plain falsification by having dignifiedly wide Very Serious probability distributions centered on the same medians produced by the same basically bad thought processes.\nBut that was too much of a digression, when I tried to write it up; maybe later I'll post something separately.\n– 2004 or thereabouts –\nRay Kurzweil in 2001:  I have calculated that matching the intelligence of a human brain requires 2 * 10^16 ops/sec* and this will become available in a $1000 computer in 2023.  26 years after that, in 2049, a $1000 computer will have ten billion times more computing power than a human brain; and in 2059, that computer will cost one cent.\n(*) Two TPU v4 pods.\nActual real-life Eliezer in Q&A, when Kurzweil says the same thing in a 2004(?) talk:  It seems weird to me to forecast the arrival of \"human-equivalent\" AI, and then expect Moore's Law to just continue on the same track past that point for thirty years.  Once we've got, in your terms, human-equivalent AIs, even if we don't go beyond that in terms of intelligence, Moore's Law will start speeding them up.  Once AIs are thinking thousands of times faster than we are, wouldn't that tend to break down the graph of Moore's Law with respect to the objective wall-clock time of the Earth going around the Sun?  Because AIs would be able to spend thousands of subjective years working on new computing technology?\nActual Ray Kurzweil:  The fact that AIs can do faster research is exactly what will enable Moore's Law to continue on track.\nActual Eliezer (out loud):  Thank you for answering my question.\nActual Eliezer (internally):  Moore's Law is a phenomenon produced by human cognition and the fact that human civilization runs off human cognition.  You can't expect the surface phenomenon to continue unchanged after the deep causal phenomenon underlying it starts changing.  What kind of bizarre worship of graphs would lead somebody to think that the graphs were the primary phenomenon and would continue steady and unchanged when the forces underlying them changed massively?  I was hoping he'd be less nutty in person than in the book, but oh well.\n– 2006 or thereabouts –\nSomebody on the Internet:  I have calculated the number of computer operations used by evolution to evolve the human brain – searching through organisms with increasing brain size  – by adding up all the computations that were done by any brains before modern humans appeared.  It comes out to 10^43 computer operations.*  AGI isn't coming any time soon!\n(*)  I forget the exact figure.  It was 10^40-something.\nEliezer, sighing:  Another day, another biology-inspired timelines forecast.  This trick didn't work when Moravec tried it, it's not going to work while Ray Kurzweil is trying it, and it's not going to work when you try it either.  
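(For readers who want to see the shape of the arithmetic being waved around here: the sketch below shows how a 'cumulative operations spent by evolution' figure in the 10^40s typically gets assembled.  The inputs are placeholders for illustration, not the figures from any actual published estimate.)

```python
# How a '10^40-something cumulative ops of evolution' figure is assembled:
# (years of nervous systems) x (organisms alive at once) x (ops/sec per organism).
# All inputs below are placeholders, chosen only to show the shape of the calculation.

years_of_nervous_systems = 1e9     # assumed ~a billion years of animals with neurons
seconds_per_year         = 3.15e7
organisms_alive_at_once  = 1e20    # assumed, dominated by tiny creatures
ops_per_organism_per_sec = 1e4     # assumed, since most nervous systems are tiny

total_ops = (years_of_nervous_systems * seconds_per_year
             * organisms_alive_at_once * ops_per_organism_per_sec)
print(f'{total_ops:.0e}')          # ~3e+40, i.e. '10^40-something'
```

The complaint in the dialogue is not with arithmetic of this kind but with its relevance: however carefully the inputs are chosen, the quantity being computed is not the one that times AGI.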
It also didn't work when a certain teenager tried it, but please entirely ignore that part; you're at least allowed to do better than him.\nImaginary Somebody:  Moravec's prediction failed because he assumed that you could just magically take something with around as much hardware as the human brain and, poof, it would start being around that intelligent –\nEliezer:  Yes, that is one way of viewing an invalidity in that argument.  Though you do Moravec a disservice if you imagine that he could only argue \"It will magically emerge\", and could not give the more plausible-sounding argument \"Human engineers are not that incompetent compared to biology, and will probably figure it out without more than one or two orders of magnitude of extra overhead.\"\nSomebody:  But I am cleverer, for I have calculated the number of computing operations that were used to create and design biological intelligence, not just the number of computing operations required to run it once created!\nEliezer:  And yet, because your reasoning contains the word \"biological\", it is just as invalid and unhelpful as Moravec's original prediction.\nSomebody:  I don't see why you dismiss my biological argument about timelines on the basis of Moravec having been wrong.  He made one basic mistake – neglecting to take into account the cost to generate intelligence, not just to run it.  I have corrected this mistake, and now my own effort to do biologically inspired timeline forecasting should work fine, and must be evaluated on its own merits, de novo.\nEliezer:  It is true indeed that sometimes a line of inference is doing just one thing wrong, and works fine after being corrected.  And because this is true, it is often indeed wise to reevaluate new arguments on their own merits, if that is how they present themselves.  One may not take the past failure of a different argument or three, and try to hang it onto the new argument like an inescapable iron ball chained to its leg.  It might be the cause for defeasible skepticism, but not invincible skepticism.\nThat said, on my view, you are making a nearly identical mistake to Moravec's, and so his failure remains relevant to the question of whether you are engaging in a kind of thought that binds well to Reality.\nSomebody:  And that mistake is just mentioning the word \"biology\"?\nEliezer:  The problem is that the resource gets consumed differently, so base-rate arguments from resource consumption end up utterly unhelpful in real life.  The human brain consumes around 20 watts of power.  Can we thereby conclude that an AGI should consume around 20 watts of power, and that, when technology advances to the point of being able to supply around 20 watts of power to computers, we'll get AGI?\nSomebody:  That's absurd, of course.  So, what, you compare my argument to an absurd argument, and from this dismiss it?\nEliezer:  I'm saying that Moravec's \"argument from comparable resource consumption\" must be in general invalid, because it Proves Too Much.  If it's in general valid to reason about comparable resource consumption, then it should be equally valid to reason from energy consumed as from computation consumed, and pick energy consumption instead as the basis of your median estimate.\nYou say that AIs consume energy in a very different way from brains?  Well, they'll also consume computations in a very different way from brains!
The only difference between these two cases is that you know something about how humans eat food and break it down in their stomachs and convert it into ATP that gets consumed by neurons to pump ions back out of dendrites and axons, while computer chips consume electricity whose flow gets interrupted by transistors to transmit information.  Since you know anything whatsoever about how AGIs and humans consume energy, you can see that the consumption is so vastly different as to obviate all comparisons entirely.\nYou are ignorant of how the brain consumes computation, you are ignorant of how the first AGIs built would consume computation, but \"an unknown key does not open an unknown lock\" and these two ignorant distributions should not assert much internal correlation between them.\nEven without knowing the specifics of how brains and future AGIs consume computing operations, you ought to be able to reason abstractly about a directional update that you would make, if you knew any specifics instead of none.  If you did know how both kinds of entity consumed computations, if you knew about specific machinery for human brains, and specific machinery for AGIs, you'd then be able to see the enormous vast specific differences between them, and go, \"Wow, what a futile resource-consumption comparison to try to use for forecasting.\"\n(Though I say this without much hope; I have not had very much luck in telling people about predictable directional updates they would make, if they knew something instead of nothing about a subject.  I think it's probably too abstract for most people to feel in their gut, or something like that, so their brain ignores it and moves on in the end.  I have had life experience with learning more about a thing, updating, and then going to myself, \"Wow, I should've been able to predict in retrospect that learning almost any specific fact would move my opinions in that same direction.\"  But I worry this is not a common experience, for it involves a real experience of discovery, and preferably more than one to get the generalization.)\nSomebody:  All of that seems irrelevant to my novel and different argument.  I am not foolishly estimating the resources consumed by a single brain; I'm estimating the resources consumed by evolutionary biology to invent brains!\nEliezer:  And the humans wracking their own brains and inventing new AI program architectures and deploying those AI program architectures to themselves learn, will consume computations so utterly differently from evolution that there is no point comparing those consumptions of resources.  That is the flaw that you share exactly with Moravec, and that is why I say the same of both of you, \"This is a kind of thinking that fails to bind upon reality, it doesn't work in real life.\"  I don't care how much painstaking work you put into your estimate of 10^43 computations performed by biology.  It's just not a relevant fact.\nHumbali:  But surely this estimate of 10^43 cumulative operations can at least be used to establish a base rate for anchoring our –\nEliezer:  Oh, for god's sake, shut up.  At least Somebody is only wrong on the object level, and isn't trying to build an inescapable epistemological trap by which his ideas must still hang in the air like an eternal stench even after they've been counterargued.  
Isn't 'but muh base rates' what your viewpoint would've also said about Moravec's 2010 estimate, back when that number still looked plausible?\nHumbali:  Of course it is evident to me now that my youthful enthusiasm was mistaken; obviously I tried to estimate the wrong figure.  As Somebody argues, we should have been estimating the biological computations used to design human intelligence, not the computations used to run it.\nI see, now, that I was using the wrong figure as my base rate, leading my base rate to be wildly wrong, and even irrelevant; but now that I've seen this, the clear error in my previous reasoning, I have a new base rate.  This doesn't seem obviously to me likely to contain the same kind of wildly invalidating enormous error as before.  What, is Reality just going to yell \"Gotcha!\" at me again?  And even the prospect of some new unknown error, which is just as likely to be in either possible direction, implies only that we should widen our credible intervals while keeping them centered on a median of 10^43 operations –\nEliezer:  Please stop.  This trick just never works, at all, deal with it and get over it.  Every second of attention that you pay to the 10^43 number is making you stupider.  You might as well reason that 20 watts is a base rate for how much energy the first generally intelligent computing machine should consume.\n– 2020 –\nOpenPhil:  We have commissioned a Very Serious report on a biologically inspired estimate of how much computation will be required to achieve Artificial General Intelligence, for purposes of forecasting an AGI timeline.  (Summary of report.)  (Full draft of report.)  Our leadership takes this report Very Seriously.\nEliezer:  Oh, hi there, new kids.  Your grandpa is feeling kind of tired now and can't debate this again with as much energy as when he was younger.\nImaginary OpenPhil:  You're not that much older than us.\nEliezer:  Not by biological wall-clock time, I suppose, but –\nOpenPhil:  You think thousands of times faster than us?\nEliezer:  I wasn't going to say it if you weren't.\nOpenPhil:  We object to your assertion on the grounds that it is false.\nEliezer:  I was actually going to say, you might be underestimating how long I've been walking this endless battlefield because I started really quite young.\nI mean, sure, I didn't read Mind Children when it came out in 1988.  I only read it four years later, when I was twelve.  And sure, I didn't immediately afterwards start writing online about Moore's Law and strong AI; I did not immediately contribute my own salvos and sallies to the war; I was not yet a noticed voice in the debate.  I only got started on that at age sixteen.  I'd like to be able to say that in 1999 I was just a random teenager being reckless, but in fact I was already being invited to dignified online colloquia about the \"Singularity\" and mentioned in printed books; when I was being wrong back then I was already doing so in the capacity of a minor public intellectual on the topic.\nThis is, as I understand normie ways, relatively young, and is probably worth an extra decade tacked onto my biological age; you should imagine me as being 52 instead of 42 as I write this, with a correspondingly greater number of visible gray hairs.\nA few years later – though still before your time – there was the Accelerating Change Foundation, and Ray Kurzweil spending literally millions of dollars to push Moore's Law graphs of technological progress as the central story about the future.  
I mean, I'm sure that a few million dollars sounds like peanuts to OpenPhil, but if your own annual budget was a hundred thousand dollars or so, that's a hell of a megaphone to compete with.\nIf you are currently able to conceptualize the Future as being about something other than nicely measurable metrics of progress in various tech industries, being projected out to where they will inevitably deliver us nice things – that's at least partially because of a battle fought years earlier, in which I was a primary fighter, creating a conceptual atmosphere you now take for granted.  A mental world where threshold levels of AI ability are considered potentially interesting and transformative – rather than milestones of new technological luxuries to be checked off on an otherwise invariant graph of Moore's Laws as they deliver flying cars, space travel, lifespan-extension escape velocity, and other such goodies on an equal level of interestingness.  I have earned at least a little right to call myself your grandpa.\nAnd that kind of experience has a sort of compounded interest, where, once you've lived something yourself and participated in it, you can learn more from reading other histories about it.  The histories become more real to you once you've fought your own battles.  The fact that I've lived through timeline errors in person gives me a sense of how it actually feels to be around at the time, watching people sincerely argue Very Serious erroneous forecasts.  That experience lets me really and actually update on the history of the earlier mistaken timelines from before I was around; instead of the histories just seeming like a kind of fictional novel to read about, disconnected from reality and not happening to real people.\nAnd now, indeed, I'm feeling a bit old and tired for reading yet another report like yours in full attentive detail.  Does it by any chance say that AGI is due in about 30 years from now?\nOpenPhil:  Our report has very wide credible intervals around both sides of its median, as we analyze the problem from a number of different angles and show how they lead to different estimates –\nEliezer:  Unfortunately, the thing about figuring out five different ways to guess the effective IQ of the smartest people on Earth, and having three different ways to estimate the minimum IQ to destroy lesser systems such that you could extrapolate a minimum IQ to destroy the whole Earth, and putting wide credible intervals around all those numbers, and combining and mixing the probability distributions to get a new probability distribution, is that, at the end of all that, you are still left with a load of nonsense.  Doing a fundamentally wrong thing in several different ways will not save you, though I suppose if you spread your bets widely enough, one of them may be right by coincidence.\nSo does the report by any chance say – with however many caveats and however elaborate the probabilistic methods and alternative analyses – that AGI is probably due in about 30 years from now?\nOpenPhil:  Yes, in fact, our 2020 report's median estimate is 2050; though, again, with very wide credible intervals around both sides.  Is that number significant?\nEliezer:  It's a law generalized by Charles Platt, that any AI forecast will put strong AI thirty years out from when the forecast is made.  Vernor Vinge referenced it in the body of his famous 1993 NASA speech, whose abstract begins, \"Within thirty years, we will have the technological means to create superhuman intelligence.  
Shortly after, the human era will be ended.\"\nAfter I was old enough to be more skeptical of timelines myself, I used to wonder how Vinge had pulled out the \"within thirty years\" part.  This may have gone over my head at the time, but rereading again today, I conjecture Vinge may have chosen the headline figure of thirty years as a deliberately self-deprecating reference to Charles Platt's generalization about such forecasts always being thirty years from the time they're made, which Vinge explicitly cites later in the speech.\nOr to put it another way:  I conjecture that to the audience of the time, already familiar with some previously-made forecasts about strong AI, the impact of the abstract is meant to be, \"Never mind predicting strong AI in thirty years, you should be predicting superintelligence in thirty years, which matters a lot more.\"  But the minds of authors are scarcely more knowable than the Future, if they have not explicitly told us what they were thinking; so you'd have to ask Professor Vinge, and hope he remembers what he was thinking back then.\nOpenPhil:  Superintelligence before 2023, huh?  I suppose Vinge still has two years left to go before that's falsified.\nEliezer:  Also in the body of the speech, Vinge says, \"I'll be surprised if this event occurs before 2005 or after 2030,\" which sounds like a more serious and sensible way of phrasing an estimate.  I think that should supersede the probably Platt-inspired headline figure for what we think of as Vinge's 1993 prediction.  The jury's still out on whether Vinge will have made a good call.\nOh, and sorry if grandpa is boring you with all this history from the times before you were around.  I mean, I didn't actually attend Vinge's famous NASA speech when it happened, what with being thirteen years old at the time, but I sure did read it later.  Once it was digitized and put online, it was all over the Internet.  Well, all over certain parts of the Internet, anyways.  Which nerdy parts constituted a much larger fraction of the whole, back when the World Wide Web was just starting to take off among early adopters.\nBut, yeah, the new kids showing up with some graphs of Moore's Law and calculations about biology and an earnest estimate of strong AI being thirty years out from the time of the report is, uh, well, it's… historically precedented.\nOpenPhil:  That part about Charles Platt's generalization is interesting, but just because we unwittingly chose literally exactly the median that Platt predicted people would always choose in consistent error, that doesn't justify dismissing our work, right?  We could have used a completely valid method of estimation which would have pointed to 2050 no matter which year it was tried in, and, by sheer coincidence, have first written that up in 2020.  In fact, we try to show in the report that the same methodology, evaluated in earlier years, would also have pointed to around 2050 –\nEliezer:  Look, people keep trying this.  It's never worked.  It's never going to work.  2 years before the end of the world, there'll be another published biologically inspired estimate showing that AGI is 30 years away and it will be exactly as informative then as it is now.  I'd love to know the timelines too, but you're not going to get the answer you want until right before the end of the world, and maybe not even then unless you're paying very close attention.  
Timing this stuff is just plain hard.\nOpenPhil:  But our report is different, and our methodology for biologically inspired estimates is wiser and less naive than those who came before.\nEliezer:  That's what the last guy said, but go on.\nOpenPhil:  First, we carefully estimate a range of possible figures for the equivalent of neural-network parameters needed to emulate a human brain.  Then, we estimate how many examples would be required to train a neural net with that many parameters.  Then, we estimate the total computational cost of that many training runs.  Moore's Law then gives us 2050 as our median time estimate, given what we think are the most likely underlying assumptions, though we do analyze it several different ways.\nEliezer:  This is almost exactly what the last guy tried, except you're using network parameters instead of computing ops, and deep learning training runs instead of biological evolution.\nOpenPhil:  Yes, so we've corrected his mistake of estimating the wrong biological quantity and now we're good, right?\nEliezer:  That's what the last guy thought he'd done about Moravec's mistaken estimation target.  And neither he nor Moravec would have made much headway on their underlying mistakes, by doing a probabilistic analysis of that same wrong question from multiple angles.\nOpenPhil:  Look, sometimes more than one person makes a mistake, over historical time.  It doesn't mean nobody can ever get it right.  You of all people should agree.\nEliezer:  I do so agree, but that doesn't mean I agree you've fixed the mistake.  I think the methodology itself is bad, not just its choice of which biological parameter to estimate.  Look, do you understand why the evolution-inspired estimate of 10^43 ops was completely ludicrous; and the claim that it was equally likely to be mistaken in either direction, even more ludicrous?\nOpenPhil:  Because AGI isn't like biology, and in particular, will be trained using gradient descent instead of evolutionary search, which is cheaper.  We do note inside our report that this is a key assumption, and that, if it fails, the estimate might be correspondingly wrong –\nEliezer:  But then you claim that mistakes are equally likely in both directions and so your unstable estimate is a good median.  Can you see why the previous evolutionary estimate of 10^43 cumulative ops was not, in fact, equally likely to be wrong in either direction?  That it was, predictably, a directional overestimate?\nOpenPhil:  Well, search by evolutionary biology is more costly than training by gradient descent, so in hindsight, it was an overestimate.  Are you claiming this was predictable in foresight instead of hindsight?\nEliezer:  I'm claiming that, at the time, I snorted and tossed Somebody's figure out the window while thinking it was ridiculously huge and absurd, yes.\nOpenPhil:  Because you'd already foreseen in 2006 that gradient descent would be the method of choice for training future AIs, rather than genetic algorithms?\nEliezer:  Ha!  No.  
Because it was an insanely costly hypothetical approach whose main point of appeal, to the sort of person who believed in it, was that it didn't require having any idea whatsoever of what you were doing or how to design a mind.\nOpenPhil:  Suppose one were to reply:  \"Somebody\" didn't know better-than-evolutionary methods for designing a mind, just as we currently don't know better methods than gradient descent for designing a mind; and hence Somebody's estimate was the best estimate at the time, just as ours is the best estimate now?\nEliezer:  Unless you were one of a small handful of leading neural-net researchers who knew a few years ahead of the world where scientific progress was heading – who knew a Thielian 'secret' before finding evidence strong enough to convince the less foresightful – you couldn't have called the jump specifically to gradient descent rather than any other technique.  \"I don't know any more computationally efficient way to produce a mind than re-evolving the cognitive history of all life on Earth\" transitioning over time to \"I don't know any more computationally efficient way to produce a mind than gradient descent over entire brain-sized models\" is not predictable in the specific part about \"gradient descent\" – not unless you know a Thielian secret.\nBut knowledge is a ratchet that usually only turns one way, so it's predictable that the current story changes to somewhere over future time, in a net expected direction.  Let's consider the technique currently known as mixture-of-experts (MoE), for training smaller nets in pieces and muxing them together.  It's not my mainline prediction that MoE actually goes anywhere – if I thought MoE was actually promising, I wouldn't call attention to it, of course!  I don't want to make timelines shorter, that is not a service to Earth, not a good sacrifice in the cause of winning an Internet argument.\nBut if I'm wrong and MoE is not a dead end, that technique serves as an easily-visualizable case in point.  If that's a fruitful avenue, the technique currently known as \"mixture-of-experts\" will mature further over time, and future deep learning engineers will be able to further perfect the art of training slices of brains using gradient descent and fewer examples, instead of training entire brains using gradient descent and lots of examples.\nOr, more likely, it's not MoE that forms the next little trend.  But there is going to be something, especially if we're sitting around waiting until 2050.  Three decades is enough time for some big paradigm shifts in an intensively researched field.  Maybe we'd end up using neural net tech very similar to today's tech if the world ends in 2025, but in that case, of course, your prediction must have failed somewhere else.\nThe three components of AGI arrival times are available hardware, which increases over time in an easily graphed way; available knowledge, which increases over time in a way that's much harder to graph; and hardware required at a given level of specific knowledge, a huge multidimensional unknown background parameter.  
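(An aside before the argument over algorithmic progress continues: the sketch below is a toy version of a parameters-to-training-compute timeline estimate.  It is not OpenPhil's actual model – every input is a placeholder, a couple of them borrowed from the 3 * 10^14-parameter and 10^13-update figures quoted later in this exchange – and it also shows what a '2-3 year halving time' for algorithmic progress implies when backtested to 2006.)

```python
import math

# Toy parameters-to-compute timeline estimate (NOT the actual OpenPhil model;
# all inputs are placeholders, some borrowed from figures quoted in the dialogue).

params            = 3e14          # assumed 'brain-sized' parameter count
training_updates  = 1e13          # assumed number of training examples/updates
flops_per_update  = 6 * params    # ~6N-per-token rule of thumb for dense nets
training_flops    = flops_per_update * training_updates   # ~1.8e28 FLOPs

budget_2020          = 1e24       # assumed affordable training FLOPs in 2020
hardware_doubling_yr = 2.5        # assumed hardware price-performance trend
algo_halving_yr      = 2.5        # assumed algorithmic-progress halving time

gap_doublings = math.log2(training_flops / budget_2020)
years_needed  = gap_doublings / (1 / hardware_doubling_yr + 1 / algo_halving_yr)
print(f'crossover ~{2020 + years_needed:.0f}')   # lands in the 2030s-2050s depending on inputs

# Backtest of the algorithmic-progress assumption alone, 2006 -> 2021:
print(f'implied efficiency gain since 2006: ~{2 ** (15 / algo_halving_yr):.0f}x')
```

Eliezer's replies below are aimed at exactly these placeholders: the parameter count, the update count, and the smooth halving rate are properties of today's paradigm, not necessarily of whatever method actually gets there first.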
The fact that you have no idea how to graph the increase of knowledge – or measure it in any way that is less completely silly than \"number of science papers published\" or whatever such gameable metric – doesn't change the point that this is a predictable fact about the future; there will be more knowledge later, the more time that passes, and that will directionally change the expense of the currently least expensive way of doing things.\nOpenPhil:  We did already consider that and try to take it into account: our model already includes a parameter for how algorithmic progress reduces hardware requirements.  It's not easy to graph as exactly as Moore's Law, as you say, but our best-guess estimate is that compute costs halve every 2-3 years.\nEliezer:  Oh, nice.  I was wondering what sort of tunable underdetermined parameters enabled your model to nail the psychologically overdetermined final figure of '30 years' so exactly.\nOpenPhil:  Eliezer.\nEliezer:  Think of this in an economic sense: people don't buy where goods are most expensive and delivered latest, they buy where goods are cheapest and delivered earliest.  Deep learning researchers are not like an inanimate chunk of ice tumbling through intergalactic space in its unchanging direction of previous motion; they are economic agents who look around for ways to destroy the world faster and more cheaply than the way that you imagine as the default.  They are more eager than you are to think of more creative paths to get to the next milestone faster.\nOpenPhil:  Isn't this desire for cheaper methods exactly what our model already accounts for, by modeling algorithmic progress?\nEliezer:  The makers of AGI aren't going to be doing 10,000,000,000,000 rounds of gradient descent, on entire brain-sized 300,000,000,000,000-parameter models, algorithmically faster than today.  They're going to get to AGI via some route that you don't know how to take, at least if it happens in 2040.  If it happens in 2025, it may be via a route that some modern researchers do know how to take, but in this case, of course, your model was also wrong.\nThey're not going to be taking your default-imagined approach algorithmically faster, they're going to be taking an algorithmically different approach that eats computing power in a different way than you imagine it being consumed.\nOpenPhil:  Shouldn't that just be folded into our estimate of how the computation required to accomplish a fixed task decreases by half every 2-3 years due to better algorithms?\nEliezer:  Backtesting this viewpoint on the previous history of computer science, it seems to me to assert that it should be possible to:\n\nTrain a pre-Transformer RNN/CNN-based model, not using any other techniques invented after 2017, to GPT-2 levels of performance, using only around 2x as much compute as GPT-2;\nPlay pro-level Go using 8-16 times as much computing power as AlphaGo, but only 2006 levels of technology.\n\nFor reference, recall that in 2006, Hinton and Salakhutdinov were just starting to publish that, by training multiple layers of Restricted Boltzmann machines and then unrolling them into a \"deep\" neural network, you could get an initialization for the network weights that would avoid the problem of vanishing and exploding gradients and activations.  At least so long as you didn't try to stack too many layers, like a dozen layers or something ridiculous like that.  
This being the point that kicked off the entire deep-learning revolution.\nYour model apparently suggests that we have gotten around 50 times more efficient at turning computation into intelligence since that time; so, we should be able to replicate any modern feat of deep learning performed in 2021, using techniques from before deep learning and around fifty times as much computing power.\nOpenPhil:  No, that's totally not what our viewpoint says when you backfit it to past reality.  Our model does a great job of retrodicting past reality.\nEliezer:  How so?\nOpenPhil:  \nEliezer:  I'm not convinced by this argument.\nOpenPhil:  We didn't think you would be; you're sort of predictable that way.\nEliezer:  Well, yes, if I'd predicted I'd update from hearing your argument, I would've updated already.  I may not be a real Bayesian but I'm not that incoherent.\nBut I can guess in advance at the outline of my reply, and my guess is this:\n\"Look, when people come to me with models claiming the future is predictable enough for timing, I find that their viewpoints seem to me like they would have made garbage predictions if I actually had to operate them in the past without benefit of hindsight.  Sure, with benefit of hindsight, you can look over a thousand possible trends and invent rules of prediction and event timing that nobody in the past actually spotlighted then, and claim that things happened on trend.  I was around at the time and I do not recall people actually predicting the shape of AI in the year 2020 in advance.  I don't think they were just being stupid either.\n\"In a conceivable future where people are still alive and reasoning as modern humans do in 2040, somebody will no doubt look back and claim that everything happened on trend since 2020; but which trend the hindsighter will pick out is not predictable to us in advance.\n\"It may be, of course, that I simply don't understand how to operate your viewpoint, nor how to apply it to the past or present or future; and that yours is a sort of viewpoint which indeed permits saying only one thing, and not another; and that this viewpoint would have predicted the past wonderfully, even without any benefit of hindsight.  But there is also that less charitable viewpoint which suspects that somebody's theory of 'A coinflip always comes up heads on occasions X' contains some informal parameters which can be argued about which occasions exactly 'X' describes, and that the operation of these informal parameters is a bit influenced by one's knowledge of whether a past coinflip actually came up heads or not.\n\"As somebody who doesn't start from the assumption that your viewpoint is a good fit to the past, I still don't see how a good fit to the past could've been extracted from it without benefit of hindsight.\"\nOpenPhil:  That's a pretty general counterargument, and like any pretty general counterargument it's a blade you should try turning against yourself.  Why doesn't your own viewpoint horribly mispredict the past, and say that all estimates of AGI arrival times are predictably net underestimates?  
If we imagine trying to operate your own viewpoint in 1988, we imagine going to Moravec and saying, \"Your estimate of how much computing power it takes to match a human brain is predictably an overestimate, because engineers will find a better way to do it than biology, so we should expect AGI sooner than 2010.\"\nEliezer:  I did tell Imaginary Moravec that his estimate of the minimum computation required for human-equivalent general intelligence was predictably an overestimate; that was right there in the dialogue before I even got around to writing this part.  And I also, albeit with benefit of hindsight, told Moravec that both of these estimates were useless for timing the future, because they skipped over the questions of how much knowledge you'd need to make an AGI with a given amount of computing power, how fast knowledge was progressing, and the actual timing determined by the rising hardware line touching the falling hardware-required line.\nOpenPhil:  We don't see how to operate your viewpoint to say in advance to Moravec, before his prediction has been falsified, \"Your estimate is plainly a garbage estimate\" instead of \"Your estimate is obviously a directional underestimate\", especially since you seem to be saying the latter to us, now.\nEliezer:  That's not a critique I give zero weight.  And, I mean, as a kid, I was in fact talking like, \"To heck with that hardware estimate, let's at least try to get it done before then.  People are dying for lack of superintelligence; let's aim for 2005.\"  I had a T-shirt spraypainted \"Singularity 2005\" at a science fiction convention, it's rather crude but I think it's still in my closet somewhere.\nBut now I am older and wiser and have fixed all my past mistakes, so the critique of those past mistakes no longer applies to my new arguments.\nOpenPhil:  Uh huh.\nEliezer:  I mean, I did try to fix all the mistakes that I knew about, and didn't just, like, leave those mistakes in forever?  I realize that this claim to be able to \"learn from experience\" is not standard human behavior in situations like this, but if you've got to be weird, that's a good place to spend your weirdness points.  At least by my own lights, I am now making a different argument than I made when I was nineteen years old, and that different argument should be considered differently.\nAnd, yes, I also think my nineteen-year-old self was not completely foolish at least about AI timelines; in the sense that, for all he knew, maybe you could build AGI by 2005 if you tried really hard over the next 6 years.  Not so much because Moravec's estimate should've been seen as a predictable overestimate of how much computing power would actually be needed, given knowledge that would become available in the next 6 years; but because Moravec's estimate should've been seen as almost entirely irrelevant, making the correct answer be \"I don't know.\"\nOpenPhil:  It seems to us that Moravec's estimate, and the guess of your nineteen-year-old past self, are both predictably vast underestimates.  Estimating the computation consumed by one brain, and calling that your AGI target date, is obviously predictably a vast underestimate because it neglects the computation required for training a brainlike system.  
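(A quick illustration of the training-versus-inference gap OpenPhil is pointing at, using rough public figures for GPT-3; the numbers are approximate, and the 2N and 6ND formulas are rules of thumb rather than exact accounting.)

```python
# Rough size of the gap between 'compute to run' and 'compute to train' in the
# modern paradigm, using approximate public GPT-3 figures and standard rules
# of thumb (~2N FLOPs per token of inference, ~6N FLOPs per token of training).

params          = 1.75e11      # ~175B parameters
training_tokens = 3.0e11       # ~300B training tokens

inference_flops_per_token = 2 * params
training_flops            = 6 * params * training_tokens

print(f'training run:        ~{training_flops:.1e} FLOPs')            # ~3e23
print(f'one inference token: ~{inference_flops_per_token:.1e} FLOPs')
print(f'ratio:               ~{training_flops / inference_flops_per_token:.0e}')
```

In this paradigm the training phase costs on the order of a trillion times as much as producing a single token at inference time – which is why anchoring on the running cost of one brain looks to OpenPhil like a predictable underestimate, and why Eliezer's counterpoint below is that this 'obvious' gap only became obvious after the paradigm shifted.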
It may be a bit uncharitable, but we suggest that Moravec and your nineteen-year-old self may both have been motivatedly credulous, to not notice a gap so very obvious.\nEliezer:  I could imagine it seeming that way if you'd grown up never learning about any AI techniques except deep learning, which had, in your wordless mental world, always been the way things were, and would always be that way forever.\nI mean, it could be that deep learning will still be the bleeding-edge method of Artificial Intelligence right up until the end of the world.  But if so, it'll be because Vinge was right and the world ended before 2030, not because the deep learning paradigm was as good as any AI paradigm can ever get.  That is simply not a kind of thing that I expect Reality to say \"Gotcha\" to me about, any more than I expect to be told that the human brain, whose neurons and synapses are 500,000 times further away from the thermodynamic efficiency wall than ATP synthase, is the most efficient possible consumer of computations.\nThe specific perspective-taking operation needed here – when it comes to what was and wasn't obvious in 1988 or 1999 – is that the notion of spending thousands and millions and billions of times as much computation on a \"training\" phase, as on an \"inference\" phase, is something that only came to be seen as Always Necessary after the deep learning revolution took over AI in the late Noughties.  Back when Moravec was writing, you programmed a game-tree-search algorithm for chess, and then you ran that code, and it played chess.  Maybe you needed to add an opening book, or do a lot of trial runs to tweak the exact values the position evaluation function assigned to knights vs. bishops, but most AIs weren't neural nets and didn't get trained on enormous TPU pods.\nMoravec had no way of knowing that the paradigm in AI would, twenty years later, massively shift to a new paradigm in which stuff got trained on enormous TPU pods.  He lived in a world where you could only train neural networks a few layers deep, like, three layers, and the gradients vanished or exploded if you tried to train networks any deeper.\nTo be clear, in 1999, I did think of AGIs as needing to do a lot of learning; but I expected them to be learning while thinking, not to learn in a separate gradient descent phase.\nOpenPhil:  How could anybody possibly miss anything so obvious?  There's so many basic technical ideas and even philosophical ideas about how you do AI which make it supremely obvious that the best and only way to turn computation into intelligence is to have deep nets, lots of parameters, and enormous separate training phases on TPU pods.\nEliezer:  Yes, well, see, those philosophical ideas were not as prominent in 1988, which is why the direction of the future paradigm shift was not predictable in advance without benefit of hindsight, let alone timeable to 2006.\nYou're also probably overestimating how much those philosophical ideas would pinpoint the modern paradigm of gradient descent even if you had accepted them wholeheartedly, in 1988.  Or let's consider, say, October 2006, when the Netflix Prize was being run – a watershed occasion where lots of programmers around the world tried their hand at minimizing a loss function, based on a huge-for-the-times 'training set' that had been publicly released, scored on a holdout 'test set'.  
You could say it was the first moment in the limelight for the sort of problem setup that everybody now takes for granted with ML research: a widely shared dataset, a heldout test set, a loss function to be minimized, prestige for advancing the 'state of the art'.  And it was a million dollars, which, back in 2006, was big money for a machine learning prize, garnering lots of interest from competent competitors.\nBefore deep learning, \"statistical learning\" was indeed a banner often carried by the early advocates of the view that Richard Sutton now calls the Bitter Lesson, along the lines of \"complicated programming of human ideas doesn't work, you have to just learn from massive amounts of data\".\nBut before deep learning – which was barely getting started in 2006 – \"statistical learning\" methods that took in massive amounts of data did not use those massive amounts of data to train neural networks by stochastic gradient descent across millions of examples!  In 2007, the winning submission to the Netflix Prize was an ensemble predictor that incorporated k-Nearest-Neighbor, a factorization method that repeatedly globally minimized squared error, two-layer Restricted Boltzmann Machines, and a regression model akin to Principal Components Analysis.  Which is all 100% statistical learning driven by relatively-big-for-the-time \"big data\", and 0% GOFAI.  But these methods didn't involve enormous massive training phases in the modern sense.\nBack then, if you were doing stochastic gradient descent at all, you were doing it on a much smaller neural network.  Not so much because you couldn't afford more compute for a larger neural network, but because wider neural networks didn't help you much and deeper neural networks simply didn't work.\nBleeding-edge statistical learning techniques as late as 2007, to make actual use of big data, had to find ways other than gradient descent and backpropagation to use those huge amounts of data.  Though, I mean, not huge amounts of data by modern standards.  The winning submission to the Netflix Prize used an ensemble of 107 models – that's not a misprint for 10^7, I actually mean 107 – which models were drawn from half a dozen different model classes, then proliferated with slightly different parameters and averaged together to reduce statistical noise.\nA modern kid, perhaps, looks at this and thinks:  \"If you can afford the compute to train 107 models, why not just train one larger model?\"  But back then, you see, there just wasn't a standard way to dump massively more compute into something, and get better results back out.  The fact that they had 107 differently parameterized models from a half-dozen families averaged together to reduce noise was about the best anyone could do in 2007 at putting more effort in and getting better results back out.\nOpenPhil:  How quaint and archaic!  But that was 13 years ago, before time actually got started and history actually started happening in real life.  Now we've got the paradigm which will actually be used to create AGI, in all probability; so estimation methods centered on that paradigm should be valid.\nEliezer:  The current paradigm is definitely not the end of the line in principle.  I guarantee you that the way superintelligences build cognitive engines is not by training enormous neural networks using gradient descent.
Gua-ran-tee it.\nThe fact that you think you now see a path to AGI, is because today – unlike in 2006 – you have a paradigm that is seemingly willing to entertain having more and more food stuffed down its throat without obvious limit (yet).  This is really a quite recent paradigm shift, though, and it is probably not the most efficient possible way to consume more and more food.\nYou could rather strongly guess, early on, that support vector machines were never going to give you AGI, because you couldn't dump more and more compute into training or running SVMs and get arbitrarily better answers; whatever gave you AGI would have to be something else that could eat more compute productively.\nSimilarly, since the path through genetic algorithms and recapitulating the whole evolutionary history would have taken a lot of compute, it's no wonder that other, more efficient methods of eating compute were developed before then; it was obvious in advance that they must exist, for all that some what-iffed otherwise.\nTo be clear, it is certain the world will end by more inefficient methods than those that superintelligences would use; since, if superintelligences are making their own AI systems, then the world has already ended.\nAnd it is possible, even, that the world will end by a method as inefficient as gradient descent.  But if so, that will be because the world ended too soon for any more efficient paradigm to be developed.  Which, on my model, means the world probably ended before say 2040(???).  But of course, compared to how much I think I know about what must be more efficiently doable in principle, I think I know far less about the speed of accumulation of real knowledge (not to be confused with proliferation of publications), or how various random-to-me social phenomena could influence the speed of knowledge.  So I think I have far less ability to say a confident thing about the timing of the next paradigm shift in AI, compared to the existence and eventuality of such paradigms in the space of possibilities.\nOpenPhil:  But if you expect the next paradigm shift to happen in around 2040, shouldn't you confidently predict that AGI has to arrive after 2040, because, without that paradigm shift, we'd have to produce AGI using deep learning paradigms, and in that case our own calculation would apply saying that 2040 is relatively early?\nEliezer:  No, because I'd consider, say, improved mixture-of-experts techniques that actually work, to be very much within the deep learning paradigm; and even a relatively small paradigm shift like that would obviate your calculations, if it produced a more drastic speedup than halving the computational cost over two years.\nMore importantly, I simply don't believe in your attempt to calculate a figure of 10,000,000,000,000,000 operations per second for a brain-equivalent deepnet based on biological analogies, or your figure of 10,000,000,000,000 training updates for it.  I simply don't believe in it at all.  I don't think it's a valid anchor.  I don't think it should be used as the median point of a wide uncertain distribution.  
The first-developed AGI will consume computation in a different fashion, much as it eats energy in a different fashion; and \"how much computation an AGI needs to eat compared to a human brain\" and \"how many watts an AGI needs to eat compared to a human brain\" are equally always decreasing with the technology and science of the day.\nOpenPhil:  Doesn't our calculation at least provide a soft upper bound on how much computation is required to produce human-level intelligence?  If a calculation is able to produce an upper bound on a variable, how can it be uninformative about that variable?\nEliezer:  You assume that the architecture you're describing can, in fact, work at all to produce human intelligence.  This itself strikes me as not only tentative but probably false.  I mostly suspect that if you take the exact GPT architecture, scale it up to what you calculate as human-sized, and start training it using current gradient descent techniques… what mostly happens is that it saturates and asymptotes its loss function at not very far beyond the GPT-3 level – say, it behaves like GPT-4 would, but not much better.\nThis is what should have been told to Moravec:  \"Sorry, even if your biology is correct, the assumption that future people can put in X amount of compute and get out Y result is not something you really know.\"  And that point did in fact just completely trash his ability to predict and time the future.\nThe same must be said to you.  Your model contains supposedly known parameters, \"how much computation an AGI must eat per second, and how many parameters must be in the trainable model for that, and how many examples are needed to train those parameters\".  Relative to whatever method is actually first used to produce AGI, I expect your estimates to be wildly inapplicable, as wrong as Moravec was about thinking in terms of just using one supercomputer powerful enough to be a brain.  Your parameter estimates may not be about properties that the first successful AGI design even has.  Why, what if it contains a significant component that isn't a neural network?  I realize this may be scarcely conceivable to somebody from the present generation, but the world was not always as it is now, and it will change if it does not end.\nOpenPhil:  I don't understand how some of your reasoning could be internally consistent even on its own terms.  If, according to you, our 2050 estimate doesn't provide a soft upper bound on AGI arrival times – or rather, if our 2050-centered probability distribution isn't a soft upper bound on reasonable AGI arrival probability distributions – then I don't see how you can claim that the 2050-centered distribution is predictably a directional overestimate.\nYou can either say that our forecasted pathway to AGI or something very much like it would probably work in principle without requiring very much more computation than our uncertain model components take into account, meaning that the probability distribution provides a soft upper bound on reasonably-estimable arrival times, but that paradigm shifts will predictably provide an even faster way to do it before then.  That is, you could say that our estimate is both a soft upper bound and also a directional overestimate.
Or, you could say that our ignorance of how to create AI will consume more than one order-of-magnitude of increased computation cost above biology –\nEliezer:  Indeed, much as your whole proposal would supposedly cost ten trillion times the equivalent computation of the single human brain that earlier biologically-inspired estimates anchored on.\nOpenPhil:  – in which case our 2050-centered distribution is not a good soft upper bound, but also not predictably a directional overestimate.  Don't you have to pick one or the other as a critique, there?\nEliezer:  Mmm… there's some justice to that, now that I've come to write out this part of the dialogue.  Okay, let me revise my earlier stated opinion:  I think that your biological estimate is a trick that never works and, on its own terms, would tell us very little about AGI arrival times at all.  Separately, I think from my own model that your timeline distributions happen to be too long.\nOpenPhil:  Eliezer.\nEliezer:  I mean, in fact, part of my actual sense of indignation at this whole affair, is the way that Platt's law of strong AI forecasts – which was in the 1980s generalizing \"thirty years\" as the time that ends up sounding \"reasonable\" to would-be forecasters – is still exactly in effect for what ends up sounding \"reasonable\" to would-be futurists, in fricking 2020 while the air is filling up with AI smoke in the silence of nonexistent fire alarms.\nBut to put this in terms that maybe possibly you'd find persuasive:\nThe last paradigm shifts were from \"write a chess program that searches a search tree and run it, and that's how AI eats computing power\" to \"use millions of data samples, but not in a way that requires a huge separate training phase\" to \"train a huge network for zillions of gradient descent updates and then run it\".  This new paradigm costs a lot more compute, but (small) large amounts of compute are now available so people are using them; and this new paradigm saves on programmer labor, and more importantly the need for programmer knowledge.\nI say with surety that this is not the last possible paradigm shift.  And furthermore, the Stack More Layers paradigm has already reduced need for knowledge by what seems like a pretty large bite out of all the possible knowledge that could be thrown away.\nSo, you might then argue, the world-ending AGI seems more likely to incorporate more knowledge and less brute force, which moves the correct sort of timeline estimate further away from the direction of \"cost to recapitulate all evolutionary history as pure blind search without even the guidance of gradient descent\" and more toward the direction of \"computational cost of one brain, if you could just make a single brain\".\nThat is, you can think of there as being two biological estimates to anchor on, not just one.  
You can imagine there being a balance that shifts over time from \"the computational cost for evolutionary biology to invent brains\" to \"the computational cost to run one biological brain\".\nIn 1960, maybe, they knew so little about how brains worked that, if you gave them a hypercomputer, the cheapest way they could quickly get AGI out of the hypercomputer using just their current knowledge, would be to run a massive evolutionary tournament over computer programs until they found smart ones, using 10^43 operations.\nToday, you know about gradient descent, which finds programs more efficiently than genetic hill-climbing does; so the balance of how much hypercomputation you'd need to use to get general intelligence using just your own personal knowledge, has shifted ten orders of magnitude away from the computational cost of evolutionary history and towards the lower bound of the computation used by one brain.  In the future, this balance will predictably swing even further towards Moravec's biological anchor, further away from Somebody on the Internet's biological anchor.\nI admit, from my perspective this is nothing but a clever argument that tries to persuade people who are making errors that can't all be corrected by me, so that they can make mostly the same errors but get a slightly better answer.  In my own mind I tend to contemplate the Textbook from the Future, which would tell us how to build AI on a home computer from 1995, as my anchor of 'where can progress go', rather than looking to the brain of all computing devices for inspiration.\nBut, if you insist on the error of anchoring on biology, you could perhaps do better by seeing a spectrum between two bad anchors.  This lets you notice a changing reality, at all, which is why I regard it as a helpful thing to say to you and not a pure persuasive superweapon of unsound argument.  Instead of just fixating on one bad anchor, the hybrid of biological anchoring with whatever knowledge you currently have about optimization, you can notice how reality seems to be shifting between two biological bad anchors over time, and so have an eye on the changing reality at all.  Your new estimate in terms of gradient descent is stepping away from evolutionary computation and toward the individual-brain estimate by ten orders of magnitude, using the fact that you now know a little more about optimization than natural selection knew; and now that you can see the change in reality over time, in terms of the two anchors, you can wonder if there are more shifts ahead.\nRealistically, though, I would not recommend eyeballing how much more knowledge you'd think you'd need to get even larger shifts, as some function of time, before that line crosses the hardware line.  Some researchers may already know Thielian secrets you do not, that take those researchers further toward the individual-brain computational cost (if you insist on seeing it that way).  That's the direction that economics rewards innovators for moving in, and you don't know everything the innovators know in their labs.\nWhen big inventions finally hit the world as newspaper headlines, the people two years before that happens are often declaring it to be fifty years away; and others, of course, are declaring it to be two years away, fifty years before headlines.  
Timing things is quite hard even when you think you are being clever; and cleverly having two biological anchors and eyeballing Reality's movement between them, is not the sort of cleverness that gives you good timing information in real life.\nIn real life, Reality goes off and does something else instead, and the Future does not look in that much detail like the futurists predicted.  In real life, we come back again to the same wiser-but-sadder conclusion given at the start, that in fact the Future is quite hard to foresee – especially when you are not on literally the world's leading edge of technical knowledge about it, but really even then.  If you don't think you know any Thielian secrets about timing, you should just figure that you need a general policy which doesn't get more than two years of warning, or not even that much if you aren't closely non-dismissively analyzing warning signs.\nOpenPhil:  We do consider in our report the many ways that our estimates could be wrong, and show multiple ways of producing biologically inspired estimates that give different results.  Does that give us any credit for good epistemology, on your view?\nEliezer:  I wish I could say that it probably beats showing a single estimate, in terms of its impact on the reader.  But in fact, writing a huge careful Very Serious Report like that and snowing the reader under with Alternative Calculations is probably going to cause them to give more authority to the whole thing.  It's all very well to note the Ways I Could Be Wrong and to confess one's Uncertainty, but you did not actually reach the conclusion, \"And that's enough uncertainty and potential error that we should throw out this whole deal and start over,\" and that's the conclusion you needed to reach.\nOpenPhil:  It's not clear to us what better way you think exists of arriving at an estimate, compared to the methodology we used – in which we do consider many possible uncertainties and several ways of generating probability distributions, and try to combine them together into a final estimate.  A Bayesian needs a probability distribution from somewhere, right?\nEliezer:  If somebody had calculated that it currently required an IQ of 200 to destroy the world, that the smartest current humans had an IQ of around 190, and that the world would therefore start to be destroyable in fifteen years according to Moore's Law of Mad Science – then, even assuming Moore's Law of Mad Science to actually hold, the part where they throw in an estimated current IQ of 200 as necessary is complete garbage.  It is not the sort of mistake that can be repaired, either.  No, not even by considering many ways you could be wrong about the IQ required, or considering many alternative different ways of estimating present-day people's IQs.\nThe correct thing to do with the entire model is chuck it out the window so it doesn't exert an undue influence on your actual thinking, where any influence of that model is an undue one.  And then you just should not expect good advance timing info until the end is in sight, from whatever thought process you adopt instead.\nOpenPhil:  What if, uh, somebody knows a Thielian secret, or has… narrowed the rivers of their knowledge to closer to reality's tracks?  
We're not sure exactly what's supposed to be allowed, on your worldview; but wasn't there something at the beginning about how, when you're unsure, you should be careful about criticizing people who are more unsure than you?\nEliezer:  Hopefully those people are also able to tell you bold predictions about the nearer-term future, or at least say anything about what the future looks like before the whole world ends.  I mean, you don't want to go around proclaiming that, because you don't know something, nobody else can know it either.  But timing is, in real life, really hard as a prediction task, so, like… I'd expect them to be able to predict a bunch of stuff before the final hours of their prophecy?\nOpenPhil:  We're… not sure we see that?  We may have made an estimate, but we didn't make a narrow estimate.  We gave a relatively wide probability distribution as such things go, so it doesn't seem like a great feat of timing that requires us to also be able to predict the near-term future in detail too?\nDoesn't your implicit probability distribution have a median?  Why don't you also need to be able to predict all kinds of near-term stuff if you have a probability distribution with a median in it?\nEliezer:  I literally have not tried to force my brain to give me a median year on this – not that this is a defense, because I still have some implicit probability distribution, or, to the extent I don't act like I do, I must be acting incoherently in self-defeating ways.  But still: I feel like you should probably have nearer-term bold predictions if your model is supposedly so solid, so concentrated as a flow of uncertainty, that it's coming up to you and whispering numbers like \"2050\" even as the median of a broad distribution.  I mean, if you have a model that can actually, like, calculate stuff like that, and is actually bound to the world as a truth.\nIf you are an aspiring Bayesian, perhaps, you may try to reckon your uncertainty into the form of a probability distribution, even when you face \"structural uncertainty\" as we sometimes call it.  Or if you know the laws of coherence, you will acknowledge that your planning and your actions are implicitly showing signs of weighing some paths through time more than others, and hence display probability-estimating behavior whether you like to acknowledge that or not.\nBut if you are a wise aspiring Bayesian, you will admit that whatever probabilities you are using, they are, in a sense, intuitive, and you just don't expect them to be all that good.  Because the timing problem you are facing is a really hard one, and humans are not going to be great at it – not until the end is near, and maybe not even then.\nThat – not \"you didn't consider enough alternative calculations of your target figures\" – is what should've been replied to Moravec in 1988, if you could go back and tell him where his reasoning had gone wrong, and how he might have reasoned differently based on what he actually knew at the time.  That reply I now give to you, unchanged.\nHumbali:  And I'm back!  Sorry, I had to take a lunch break.  
Let me quickly review some of this recent content; though, while I'm doing that, I'll go ahead and give you what I'm pretty sure will be my reaction to it:\nAh, but here is a point that you seem to have not considered at all, namely: what if you're wrong?\nEliezer:  That, Humbali, is a thing that should be said mainly to children, of whatever biological wall-clock age, who've never considered at all the possibility that they might be wrong, and who will genuinely benefit from asking themselves that.  It is not something that should often be said between grownups of whatever age, as I define what it means to be a grownup.  You will mark that I did not at any point say those words to Imaginary Moravec or Imaginary OpenPhil; it is not a good thing for grownups to say to each other, or to think to themselves in Tones of Great Significance (as opposed to as a routine check).\nIt is very easy to worry that one might be wrong.  Being able to see the direction in which one is probably wrong is rather a more difficult affair.  And even after we see a probable directional error and update our views, the objection, \"But what if you're wrong?\" will sound just as forceful as before.  For this reason do I say that such a thing should not be said between grownups –\nHumbali:  Okay, done reading now!  Hm…  So it seems to me that the possibility that you are wrong, considered in full generality and without adding any other assumptions, should produce a directional shift from your viewpoint towards OpenPhil's viewpoint.\nEliezer (sighing):  And how did you end up being under the impression that this could possibly be a sort of thing that was true?\nHumbali:  Well, I get the impression that you have timelines shorter than OpenPhil's timelines.  Is this devastating accusation true?\nEliezer:  I consider naming particular years to be a cognitively harmful sort of activity; I have refrained from trying to translate my brain's native intuitions about this into probabilities, for fear that my verbalized probabilities will be stupider than my intuitions if I try to put weight on them.  What feelings I do have, I worry may be unwise to voice; AGI timelines, in my own experience, are not great for one's mental health, and I worry that other people seem to have weaker immune systems than even my own.  But I suppose I cannot but acknowledge that my outward behavior seems to reveal a distribution whose median seems to fall well before 2050.\nHumbali:  Okay, so you're more confident about your AGI beliefs, and OpenPhil is less confident.  Therefore, to the extent that you might be wrong, the world is going to look more like OpenPhil's forecasts of how the future will probably look, like world GDP doubling over four years before the first time it doubles over one year, and so on.\nEliezer:  You're going to have to explain some of the intervening steps in that line of 'reasoning', if it may be termed as such.\nHumbali:  I feel surprised that I should have to explain this to somebody who supposedly knows probability theory.  If you put higher probabilities on AGI arriving in the years before 2050, then, on average, you're concentrating more probability into each year that AGI might possibly arrive, than OpenPhil does.  Your probability distribution has lower entropy.  We can literally just calculate out that part, if you don't believe me.  
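As a toy illustration of the calculation Humbali is gesturing at – the numbers below are invented for illustration and are not anyone's actual forecast – a distribution that concentrates more probability on earlier arrival dates does come out with lower Shannon entropy:

    import math

    def entropy_bits(p):
        """Shannon entropy, in bits, of a discrete probability distribution."""
        return -sum(q * math.log2(q) for q in p if q > 0)

    # Hypothetical forecasts over five equal-width arrival buckets (illustrative only).
    concentrated = [0.50, 0.25, 0.15, 0.07, 0.03]  # more mass on earlier buckets
    uniform = [0.20, 0.20, 0.20, 0.20, 0.20]       # maximum-entropy spread over the same buckets

    print(entropy_bits(concentrated))  # about 1.83 bits
    print(entropy_bits(uniform))       # about 2.32 bits, i.e. log2(5), the maximum

The arithmetic itself is not in dispute in the dialogue; the question it turns on is whether uncertainty about one's model obliges a shift toward the higher-entropy distribution.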
So to the extent that you're wrong, it should shift your probability distributions in the direction of maximum entropy.\nEliezer:  It's things like this that make me worry about whether that extreme cryptivist view would be correct, in which normal modern-day Earth intellectuals are literally not smart enough – in a sense that includes the Cognitive Reflection Test and other things we don't know how to measure yet, not just raw IQ – to be taught more advanced ideas from my own home planet, like Bayes's Rule and the concept of the entropy of a probability distribution.  Maybe it does them net harm by giving them more advanced tools they can use to shoot themselves in the foot, since it causes an explosion in the total possible complexity of the argument paths they can consider and be fooled by, which may now contain words like 'maximum entropy'.\nHumbali:  If you're done being vaguely condescending, perhaps you could condescend specifically to refute my argument, which seems to me to be airtight; my math is not wrong and it means what I claim it means.\nEliezer:  The audience is herewith invited to first try refuting Humbali on their own; grandpa is, in actuality and not just as a literary premise, getting older, and was never that physically healthy in the first place.  If the next generation does not learn how to do this work without grandpa hovering over their shoulders and prompting them, grandpa cannot do all the work himself.  There is an infinite supply of slightly different wrong arguments for me to be forced to refute, and that road does not seem, in practice, to have an end.\nHumbali:  Or perhaps it's you that needs refuting.\nEliezer, smiling:  That does seem like the sort of thing I'd do, wouldn't it?  Pick out a case where the other party in the dialogue had made a valid point, and then ask my readers to disprove it, in case they weren't paying proper attention?  For indeed in a case like this, one first backs up and asks oneself \"Is Humbali right or not?\" and not \"How can I prove Humbali wrong?\"\nBut now the reader should stop and contemplate that, if they are going to contemplate that at all:\nIs Humbali right that generic uncertainty about maybe being wrong, without other extra premises, should increase the entropy of one's probability distribution over AGI, thereby moving out its median further away in time?\nHumbali:  Are you done?\nEliezer:  Hopefully so.  I can't see how else I'd prompt the reader to stop and think and come up with their own answer first.\nHumbali:  Then what is the supposed flaw in my argument, if there is one?\nEliezer:  As usual, when people are seeing only their preferred possible use of an argumentative superweapon like 'What if you're wrong?', the flaw can be exposed by showing that the argument Proves Too Much.  If you forecasted AGI with a probability distribution with a median arrival time of 50,000 years from now*, would that be very unconfident?\n(*) Based perhaps on an ignorance prior for how long it takes for a sapient species to build AGI after it emerges, where we've observed so far that it must take at least 50,000 years, and our updated estimate says that it probably takes around that much longer again.\nHumbali:   Of course; the math says so.  Though I think that would be a little too unconfident – we do have some knowledge about how AGI might be created.  
So my answer is that, yes, this probability distribution is higher-entropy, but that it reflects too little confidence even for me.\nI think you're crazy overconfident, yourself, and in a way that I find personally distasteful to boot, but that doesn't mean I advocate zero confidence.  I try to be less arrogant than you, but my best estimate of what my own eyes will see over the next minute is not a maximum-entropy distribution over visual snow.  AGI happening sometime in the next century, with a median arrival time of maybe 30 years out, strikes me as being about as confident as somebody should reasonably be.\nEliezer:  Oh, really now.  I think if somebody sauntered up to you and said they put 99% probability on AGI not occurring within the next 1,000 years – which is the sort of thing a median distance of 50,000 years tends to imply – I think you would, in fact, accuse them of brash overconfidence about staking 99% probability on that.\nHumbali:  Hmmm.  I want to deny that – I have a strong suspicion that you're leading me down a garden path here – but I do have to admit that if somebody walked up to me and declared only a 1% probability that AGI arrives in the next millennium, I would say they were being overconfident and not just too uncertain.\nNow that you put it that way, I think I'd say that somebody with a wide probability distribution over AGI arrival spread over the next century, with a median in 30 years, is in realistic terms about as uncertain as anybody could possibly be?  If you spread it out more than that, you'd be declaring that AGI probably wouldn't happen in the next 30 years, which seems overconfident; and if you spread it out less than that, you'd be declaring that AGI probably would happen within the next 30 years, which also seems overconfident.\nEliezer:  Uh huh.  And to the extent that I am myself uncertain about my own brashly arrogant and overconfident views, I should have a view that looks more like your view instead?\nHumbali:  Well, yes!  To the extent that you are, yourself, less than totally certain of your own model, you should revert to this most ignorant possible viewpoint as a base rate.\nEliezer:  And if my own viewpoint should happen to regard your probability distribution putting its median on 2050 as just one more guesstimate among many others, with this particular guess based on wrong reasoning that I have justly rejected?\nHumbali:  Then you'd be overconfident, obviously.  See, you don't get it, what I'm presenting is not just one candidate way of thinking about the problem, it's the base rate that other people should fall back on to the extent they are not completely confident in their own ways of thinking about the problem, which impose extra assumptions over and above the assumptions that seem natural and obvious to me.  I just can't understand the incredible arrogance it takes to be so utterly certain in your own exact estimate that you don't revert it even a little bit towards mine.\nI don't suppose you're going to claim to me that you first constructed an even more confident first-order estimate, and then reverted it towards the natural base rate in order to arrive at a more humble second-order estimate?\nEliezer:  Ha!  No.  Not that base rate, anyways.  I try to shift my AGI timelines a little further out because I've observed that actual Time seems to run slower than my attempts to eyeball it.  
I did not shift my timelines out towards 2050 in particular, nor did reading OpenPhil's report on AI timelines influence my first-order or second-order estimate at all, in the slightest; no more than I updated the slightest bit back when I read the estimate of 10^43 ops or 10^46 ops or whatever it was to recapitulate evolutionary history.\nHumbali:  Then I can't imagine how you could possibly be so perfectly confident that you're right and everyone else is wrong.  Shouldn't you at least revert your viewpoints some toward what other people think?\nEliezer:  Like, what the person on the street thinks, if we poll them about their expected AGI arrival times?  Though of course I'd have to poll everybody on Earth, not just the special case of developed countries, if I thought that a respect for somebody's personhood implied deference to their opinions.\nHumbali:  Good heavens, no!  I mean you should revert towards the opinion, either of myself, or of the set of people I hang out with and who are able to exert a sort of unspoken peer pressure on me; that is the natural reference class to which less confident opinions ought to revert, and any other reference class is special pleading.\nAnd before you jump on me about being arrogant myself, let me say that I definitely regressed my own estimate in the direction of the estimates of the sort of people I hang out with and instinctively regard as fellow tribesmembers of slightly higher status, or \"credible\" as I like to call them.  Although it happens that those people's opinions were about evenly distributed to both sides of my own – maybe not statistically exactly for the population, I wasn't keeping exact track, but in their availability to my memory, definitely, other people had opinions on both sides of my own – so it didn't move my median much.  But so it sometimes goes!\nBut these other people's credible opinions definitely hang emphatically to one side of your opinions, so your opinions should regress at least a little in that direction!  Your self-confessed failure to do this at all reveals a ridiculous arrogance.\nEliezer:  Well, I mean, in fact, from my perspective, even my complete-idiot sixteen-year-old self managed to notice that AGI was going to be a big deal, many years before various others had been hit over the head with a large-enough amount of evidence that even they started to notice.  I was walking almost alone back then.  And I still largely see myself as walking alone now, as accords with the Law of Continued Failure:  If I was going to be living in a world of sensible people in this future, I should have been living in a sensible world already in my past.\nSince the early days more people have caught up to earlier milestones along my way, enough to start publicly arguing with me about the further steps, but I don't consider them to have caught up; they are moving slower than I am still moving now, as I see it.  My actual work these days seems to consist mainly of trying to persuade allegedly smart people to not fling themselves directly into lava pits.  If at some point I start regarding you as my epistemic peer, I'll let you know.  For now, while I endeavor to be swayable by arguments, your existence alone is not an argument unto me.\nIf you choose to define that with your word \"arrogance\", I shall shrug and not bother to dispute it.  
Such appellations are beneath My concern.\nHumbali:  Fine, you admit you're arrogant – though I don't understand how that's not just admitting you're irrational and wrong –\nEliezer:  They're different words that, in fact, mean different things, in their semantics and not just their surfaces.  I do not usually advise people to contemplate the mere meanings of words, but perhaps you would be well-served to do so in this case.\nHumbali:  – but if you're not infinitely arrogant, you should be quantitatively updating at least a little towards other people's positions!\nEliezer:  You do realize that OpenPhil itself hasn't always existed?  That they are not the only \"other people\" that there are?  An ancient elder like myself, who has seen many seasons turn, might think of many other possible targets toward which he should arguably regress his estimates, if he was going to start deferring to others' opinions this late in his lifespan.\nHumbali:  You haven't existed through infinite time either!\nEliezer:  A glance at the history books should confirm that I was not around, yes, and events went accordingly poorly.\nHumbali:  So then… why aren't you regressing your opinions at least a little in the direction of OpenPhil's?  I just don't understand this apparently infinite self-confidence.\nEliezer:  The fact that I have credible intervals around my own unspoken median – that I confess I might be wrong in either direction, around my intuitive sense of how long events might take – doesn't count for my being less than infinitely self-confident, on your view?\nHumbali:  No.  You're expressing absolute certainty in your underlying epistemology and your entire probability distribution, by not reverting it even a little in the direction of the reasonable people's probability distribution, which is the one that's the obvious base rate and doesn't contain all the special other stuff somebody would have to tack on to get your probability estimate.\nEliezer:  Right then.  Well, that's a wrap, and maybe at some future point I'll talk about the increasingly lost skill of perspective-taking.\nOpenPhil:  Excuse us, we have a final question.  You're not claiming that we argue like Humbali, are you?\nEliezer:  Good heavens, no!  That's why \"Humbali\" is presented as a separate dialogue character and the \"OpenPhil\" dialogue character says nothing of the sort.  Though I did meet one EA recently who seemed puzzled and even offended about how I wasn't regressing my opinions towards OpenPhil's opinions to whatever extent I wasn't totally confident, which brought this to mind as a meta-level point that needed making.\nOpenPhil:  \"One EA you met recently\" is not something that you should hold against OpenPhil.  We haven't organizationally endorsed arguments like Humbali's, any more than you've ever argued that \"we have to take AGI risk seriously even if there's only a tiny chance of it\" or similar crazy things that other people hallucinate you arguing.\nEliezer:  I fully agree.  That Humbali sees himself as defending OpenPhil is not to be taken as associating his opinions with those of OpenPhil; just like how people who helpfully try to defend MIRI by saying \"Well, but even if there's a tiny chance…\" are not thereby making their epistemic sins into mine.\nThe whole thing with Humbali is a separate long battle that I've been fighting.  
OpenPhil seems to have been keeping its communication about AI timelines mostly to the object level, so far as I can tell; and that is a more proper and dignified stance than I've assumed here.\nThe post Biology-Inspired AGI Timelines: The Trick That Never Works appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Biology-Inspired AGI Timelines: The Trick That Never Works", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=3", "id": "63334032ddcf938872e07fb89f570593"} {"text": "Visible Thoughts Project and Bounty Announcement\n\n\n(Update Jan. 12: We released an FAQ last month, with more details. Last updated Jan. 7.)\n(Update Jan. 19: We now have an example of a successful partial run, which you can use to inform how you do your runs. Details.)\n\nWe at MIRI are soliciting help with an AI-alignment project centered around building a dataset, described below. We have $200,000 in prizes for building the first fragments of the dataset, plus an additional $1M prize/budget for anyone who demonstrates the ability to build a larger dataset at scale.\nIf this project goes well, then it may be the first of a series of prizes we offer for various projects.\nBelow, I'll say more about the project, and about the payouts and interim support we're offering.\n \nThe Project\nHypothesis: Language models can be made more understandable (and perhaps also more capable, though this is not the goal) by training them to produce visible thoughts.\nWe'd like to test this hypothesis by fine-tuning/retraining a language model using a dataset composed of thought-annotated dungeon runs. (In the manner of AI dungeon.)\nA normal (un-annotated) dungeon run is a sequence of steps in which the player inputs text actions and the dungeon master responds with text describing what happened in the world as a result.\nWe'd like a collection of such runs, that are annotated with \"visible thoughts\" (visible to potential operators or programmers of the system, not to players) describing things like what just happened or is about to happen in the world, what sorts of things the player is probably paying attention to, where the current sources of plot tension are, and so on — the sorts of things a human author would think while acting as a dungeon master.  (This is distinct from producing thoughts explaining what happened in the dungeon; \"visible thoughts\" are meant to play an active role in constructing the output.)\nOnce we have such a dataset, MIRI's hope is that present or future technology will be able to train a model or models which iteratively produce visible thoughts along with storytelling, based on user actions plus previous history (including previous thoughts). The goal is to transition the state of AI dungeon technology from \"An AI outputs story text in response to actions (and we have no idea how)\" to \"An AI produces thoughts as visible intermediates on the way to story text, allowing us to watch the AI think about how to design its output, and to verify that we can get different sensible outputs by intervening on the thoughts\".\nHere's an example of the first couple of steps of a thought-annotated dungeon run (or \"quest\"), in the format MIRI currently thinks is worth trying. 
Some kinds of thoughts are marked with parentheses and/or brackets; see the next section for details on this.\n\n\n\nThoughts:\n\n[The main plot:  Prota is a student at a magical academy that is under attack by zombies.]\n[(This is a quest of medium length.)]\n(This is the start of the story.  The Player should be given enough backstory and immediate description to have some sense of where they are, so the prompt can go on longer than usual.  The Player should be told who they are and their role in the universe.  The initial prompt should describe Prota being in a magical academy, and should describe the first surprising indication that the academy is under attack by zombies.)\n\n\nPrompt: Today is your first day of classes at the Magical Academy. You arrived very late last night and missed the tour, but your roommate Lior promised he would help you find your way around. You wake up in the students' dormitory at dawn, and follow Lior to breakfast. Just as you reach the dining hall, you hear a shout in the distance.\nThoughts:\n\n[Prota missed the tour and doesn't already know the way around.]\n[Prota's roommate's name is Lior.]\n[Lior is male.]\nThere are other people in the dining hall who will have also heard the shout, and will also be surprised and reacting to that.\n\n\nPlayer action: What was that?\nThoughts: Prota is talking to Lior.\nOutcome: \"What was that?\", you say to Lior.\n\n\n\nThoughts:\n\nLior arrived earlier than Prota and had the tour of the Academy, since they knew the way to the dining hall. Lior will also have met the instructors and other students. Lior will know who is shouting and where they are shouting from.\n(The event of hearing the shout is meant to introduce the opening premise of the story.)\nThe shout is by somebody who just discovered a zombie.\nThe shout is by another first-year student named Morven.\n[Morven is a Fire mage.  Morven is aggressive.]\nMorven will try to attack the zombie with fire.  This is happening where Prota and Lior cannot see it, only hear the shouts.\nLior knows that the shout is by Morven.  Lior knows Morven is a Fire mage.  Lior does not know Morven is attacking a zombie.  Lior does not know that Morven has good reason to be shouting.  Lior will worry that Morven is about to set something on fire.  Lior is on good terms with the protagonist and will speak their concerns honestly.\n\n\nPrompt: \"That shout sounded like Morven, a first-year Fire mage,\" says Lior.  \"I hope they aren't about to set the Academy on fire.  We just got here.\"\nThoughts: \nPlayer action: Should we go see what's going on? Or is that too dangerous?\nThoughts: Prota is talking to Lior.\nOutcome: You say to Lior, \"Should we go see what's going on?  Or is that too dangerous?\"\n\n\nA difficult first step in testing the hypothesis above is generating a sufficiently large dataset (suitable for language model retraining) of thought-annotated dungeon runs. This likely requires at least a moderate degree of introspective and authorial skill from the people creating the dataset. See this sample of a partial run to get a further sense of what we are looking for. More detail on the type of thing we're looking for can hopefully be inferred from that sample, though applicants will also have a chance to ask clarifying questions.\nThe project of producing this dataset is open starting immediately, in a hybrid prize/grant format. 
We will pay $20,000 per run for the first 10 completed runs that meet our quality standard (as decided unilaterally by Eliezer Yudkowsky or his designates), and $1M total for the first batch of 100 runs beyond that.\nIf we think your attempt is sufficiently promising, we're willing to cover your expenses (e.g., the costs of paying the authors) upfront, and we may also be willing to compensate you for your time upfront. You're welcome to write individual runs manually, though note that we're most enthusiastic about finding solutions that scale well, and then scaling them. More details on the payout process can be found below.\n \nThe Machine Learning Experiment\nIn slightly more detail, the plan is as follows (where the $1.2M prizes/budgets are for help with part 1, and part 2 is what we plan to subsequently do with the dataset):\n \n1. Collect a dataset of 10, then ~100 thought-annotated dungeon runs (each run a self-contained story arc) of ~1,000 steps each, where each step contains:\n\nThoughts (~250 words on average per step) are things the dungeon master was thinking when constructing the story, including:\n\nReasoning about the fictional world, such as summaries of what just happened and discussion of the consequences that are likely to follow (Watsonian reasoning), which are rendered in plain-text in the above example;\nReasoning about the story itself, like where the plot tension lies, or what mysteries were just introduced, or what the player is likely wondering about (Doylist reasoning), which are rendered in (parentheses) in the above example; and\nNew or refined information about the fictional world that is important to remember in the non-immediate future, such as important facts about a character, or records of important items that the protagonist has acquired, which are rendered in [square brackets] in the above example;\nOptionally: some examples of meta-cognition intended to, for example, represent a dungeon master noticing that the story has no obvious way forward or their thoughts about where to go next have petered out, so they need to back up and rethink where the story is going, rendered in {braces}.\n\n\nThe prompt (~50 words on average) is the sort of story/description/prompt thingy that a dungeon master gives to the player, and can optionally also include a small number of attached thoughts where information about choices and updates to the world-state can be recorded.\nThe action (~2–20 words) is the sort of thing that a player gives in response to a prompt, and can optionally also include a thought if interpreting the action is not straightforward (especially if, e.g., the player describes themselves doing something impossible).\n\nIt's unclear to us how much skill is required to produce this dataset. The authors likely need to be reasonably introspective about their own writing process, and willing to try things and make changes in response to initial feedback from the project leader and/or from MIRI.\nA rough estimate is that a run of 1,000 steps is around 300k words of mostly thoughts, costing around 2 skilled author-months. (A dungeon run does not need to be published-novel-quality literature, only coherent in how the world responds to characters!) A guess as to the necessary database size is ~100 runs, for about 30M words and 20 author-years (though we may test first with fewer/shorter runs).\n \n2. 
Retrain a large pretrained language model, like GPT-3 or T5\nA reasonable guess is that performance more like GPT-3 than GPT-2 (at least) is needed to really make use of the thought-intermediates, but in lieu of a large pretrained language model we could plausibly attempt to train our own smaller one.\nOur own initial idea for the ML architecture would be to retrain one mode of the model to take (some suffix window of) the history units and predict thoughts, by minimizing the log loss of the generated thought against the next thought in the run, and to retrain a second mode to take (some suffix window of) the history units plus one thought, and produce a prompt, by minimizing the log loss of the generated prompt against the next prompt in the run.\nImaginably, this could lead to the creation of dungeon runs that are qualitatively \"more coherent\" than those generated by existing methods. The primary goal, however, is that the thought-producing fragment of the system gives some qualitative access to the system's internals that, e.g., allow an untrained observer to accurately predict the local developments of the story, and occasionally answer questions about why things in the story happened; or that, if we don't like how the story developed, we can intervene on the thoughts and get a different story in a controllable way.\n \nMotivation for this project\nMany alignment proposals floating around in the community are based on AIs having human-interpretable thoughts in one form or another (e.g., in Hubinger's survey article and in work by Christiano, by Olah, and by Leike). For example, this is implicit in the claim that humans will be able to inspect and understand the AI's thought process well enough to detect early signs of deceptive behavior. Another class of alignment schemes is based on the AI's thoughts being locally human-esque in some fashion that allows them to be trained against the thoughts of actual humans.\nI (Nate) personally don't have much hope in plans such as these, for a variety of reasons. However, that doesn't stop Eliezer and me from wanting to rush ahead and start gathering empirical evidence about how possible it is in practice to get modern AI systems to factor their cognition through human-interpretable visible intermediates.\nModern AIs are notably good at crafting English text. Some are currently used to run dungeons (with modest success). 
If you wanted to look at the place where current AIs excel the most in crafting artifacts, among the artifacts they are best and most impressive at crafting are English paragraphs.\nFurthermore, compared to many other things AIs have learned to do, if you consider the task of running a responsive text dungeon, it seems relatively possible to ask a (relatively unusually) introspective human author to write down their thoughts about how and why they would generate the next prompt from the user's input.\nSo we are taking one of the outputs that current AIs seem to have learned best to design, and taking one of the places where human thoughts about how to design it seem most accessible, and trying to produce a dataset which the current or next generation of text predictors might be able to use to learn how to predict thoughts about designing their outputs and not just predict the outputs themselves.\nThis sort of interpretability is distinct from the sort of transparency work in something like Circuits (led by Chris Olah) — while Circuits is trying to \"open the black box\" of machine learning systems by directly looking at what is happening inside of them, the project proposed here is just attempting the less ambitious task of having black-box models output interpretable intermediates producing explanations for their behavior (but how such black box models might go about doing that internally is left unconstrained). The reason for our focus on this particular project of visible thoughts isn't because we believe it to be better or more fruitful than Circuits-style transparency (we have said for years that Circuits-style research deserves all possible dollars that can be productively spent on it), but just because it's a different approach where it might also be possible to push progress forward.\nNote that proponents of alignment strategies that involve human-esque thoughts (such as those linked above) do not necessarily endorse this particular experiment as testing any of their key uncertainties or confusions. We welcome suggested tweaks to the experiment (in the comments of the version of this announcement as it occurs on LessWrong) from any such proponents, to render it a better test of your ideas. (Though even if it doesn't sate your own curiosity, we expect to learn some things ourselves.)\nThe main thing this project needs is a dataset, so MIRI is starting on producing that dataset. It's plausible to us that GPT-3 will prove wholly unable to make use of this dataset; even if GPT-3 can't, perhaps GPT-4 or some other future system will be able to.\nThere are additional more general reasons to work on this project. Specifically, it seems to me (Nate) and to Eliezer that capacity to execute projects such as this one is the current limiting bottleneck on MIRI. By pursuing this project, we attempt to resolve that bottleneck.\nWe hope, through this process, to build our capacity to execute on a variety of projects — perhaps by succeeding at the stated objective of building a dataset, or perhaps by learning about what we're doing wrong and moving on to better methods of acquiring executive talent. I'll say more about this goal in \"Motivation for the public appeal\" below.\n \nNotes on Closure\nI (Nate) find it plausible that there are capabilities advances to be had from training language models on thought-annotated dungeon runs. 
Locally these might look like increased coherence of the overall narrative arc, increased maintenance of local story tension, and increased consistency in the described world-state over the course of the run.  If successful, the idiom might generalize further; it would have to, in order to play a role in later alignment of AGI.\nAs a matter of policy, whenever a project like this has plausible capabilities implications, we think the correct response is to try doing it in-house and privately before doing it publicly — and, of course, only then when the alignment benefits outweigh the plausible capability boosts. In this case, we tried to execute this project in a closed way in mid-2021, but work was not proceeding fast enough. Given that slowness, and in light of others publishing related explorations and results, and in light of the relatively modest plausible capability gains, we are moving on relatively quickly past the attempt to do this privately, and are now attempting to do it publicly.\n \nMotivation for the public appeal\nI (Nate) don't know of any plan for achieving a stellar future that I believe has much hope worth speaking of. I consider this one of our key bottlenecks. Offering prizes for small projects such as these doesn't address that bottleneck directly, and I don't want to imply that any such projects are going to be world-saving in their own right.\nThat said, I think an important secondary bottleneck is finding people with a rare combination of executive/leadership/management skill plus a specific kind of vision. While we don't have any plans that I'm particularly hopeful about, we do have a handful of plans that contain at least a shred of hope, and that I'm enthusiastic about pursuing — partly in pursuit of those shreds of hope, and partly to build the sort of capacity that would let us take advantage of a miracle if we get one.\nThe specific type of vision we're looking for is the type that's compatible with the project at hand. For starters, Eliezer has a handful of ideas that seem to me worth pursuing, but for all of them to be pursued, we need people who can not only lead those projects themselves, but who can understand the hope-containing heart of the idea with relatively little Eliezer-interaction, and develop a vision around it that retains the shred of hope and doesn't require constant interaction and course-correction on our part. (This is, as far as I can tell, a version of the Hard Problem of finding good founders, but with an additional constraint of filtering for people who have affinity for a particular project, rather than people who have affinity for some project of their own devising.)\nWe are experimenting with offering healthy bounties in hopes of finding people who have both the leadership/executive capacity needed, and an affinity for some ideas that seem to us to hold a shred of hope.\nIf you're good at this, we're likely to make you an employment offer.\n \nThe Payouts\nOur total prize budget for this program is $1.2M. We intend to use it to find a person who can build the dataset in a way that scales, presumably by finding and coordinating a pool of sufficiently introspective writers. We would compensate them generously, and we would hope to continue working with that person on future projects (though this is not a requirement in order to receive the payout).\nWe will pay $20k per run for the first 10 thought-annotated runs that we accept. 
We are willing to support applicants in producing these runs by providing them with resources up-front, including small salaries and budgets for hiring writers. The up-front costs a participant incurs will be deducted from their prizes, if they receive prizes. An additional $1M then goes to anyone among the applicants who demonstrates the ability to scale their run-creating process to produce 100 runs. Our intent is for participants to use some of that money to produce the 100 runs, and keep the remainder as a prize. If multiple participants demonstrate similar abilities to scale at similar quality-levels and similar times, the money may be split between them. We plan to report prize awards publicly.\nIn principle, all you need to do to get paid for thought-annotated dungeon runs is send us runs that we like. If your run is one of the first 10 runs, or if you're the first to provide a batch of 100, you get the corresponding payment.\nThat said, whether or not we decide to pay for a run is entirely and unilaterally up to Eliezer Yudkowsky or his delegates, and will depend on whether the run hits a minimum quality bar. Also, we are willing to pay out from the $1M prize/budget upon becoming convinced that you can scale your process, which may occur before you produce a full 100 runs. We therefore strongly recommend getting in contact with us and proactively making sure that you're on the right track, before sinking large amounts of time and energy into this project. Our senior research staff are willing to spend time on initial conversations and occasional check-ins. For more information on our support resources and how to access them, refer to the support and application sections below.\nNote that we may tune or refine the bounty in response to feedback in the first week after this post goes live.\n \nSupport\nWe intend to offer various types of support for people attempting this project, including an initial conversation; occasional check-ins; office space; limited operational support; and certain types of funding.\nWe currently expect to have (a limited number of) slots for initial conversations and weekly check-ins, along with (a limited amount of) office space and desks in Berkeley, California for people working on this project. We are willing to pay expenses, and to give more general compensation, in proportion to how promising we think your attempts are.\nIf you'd like to take advantage of these resources, follow the application process described below.\n \nApplication\nYou do not need to have sent us an application in order to get payouts, in principle. We will pay for any satisfactory run sent our way. 
That said, if you would like any of the support listed above (and we strongly recommend at least one check-in to get a better understanding of what counts as success), complete the following process:\n\nDescribe the general idea of a thought-annotated dungeon run in your own words.\nWrite 2 (thought, prompt, thought, action, thought, outcome) sextuples you believe are good, 1 you think is borderline, and 1 you think is bad.\nProvide your own commentary on this run.\nEmail all this to .\n\nIf we think your application is sufficiently promising, we'll schedule a 20 minute video call with some senior MIRI research staff and work from there.\n\nThe post Visible Thoughts Project and Bounty Announcement appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Visible Thoughts Project and Bounty Announcement", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=3", "id": "8c5fb1bc90b09648d9a0e17e3a0d39c9"} {"text": "Soares, Tallinn, and Yudkowsky discuss AGI cognition\n\n\n \nThis is a collection of follow-up discussions in the wake of Richard Ngo and Eliezer Yudkowsky's first three conversations (1 and 2, 3).\n \nColor key:\n\n\n\n\n  Chat  \n  Google Doc content  \n  Inline comments  \n\n\n\n\n \n7. Follow-ups to the Ngo/Yudkowsky conversation\n \n\n[Bensinger][1:50]  (Nov. 23 follow-up comment)\nReaders who aren't already familiar with relevant concepts such as ethical injunctions should probably read Ends Don't Justify Means (Among Humans), along with an introduction to the unilateralist's curse.\n\n\n\n \n7.1. Jaan Tallinn's commentary\n \n\n[Tallinn]  (Sep. 18 Google Doc)\nmeta\na few meta notes first:\n\ni'm happy with the below comments being shared further without explicit permission – just make sure you respect the sharing constraints of the discussion that they're based on;\nthere's a lot of content now in the debate that branches out in multiple directions – i suspect a strong distillation step is needed to make it coherent and publishable;\nthe main purpose of this document is to give a datapoint how the debate is coming across to a reader – it's very probable that i've misunderstood some things, but that's the point;\ni'm also largely using my own terms/metaphors – for additional triangulation.\n\n \npit of generality\nit feels to me like the main crux is about the topology of the space of cognitive systems in combination with what it implies about takeoff. here's the way i understand eliezer's position:\nthere's a \"pit of generality\" attractor in cognitive systems space: once an AI system gets sufficiently close to the edge (\"past the atmospheric turbulence layer\"), it's bound to improve in catastrophic manner;\n\n\n\n\n[Yudkowsky][11:10]  (Sep. 
18 comment)\n\nit's bound to improve in catastrophic manner\n\nI think this is true with quite high probability about an AI that gets high enough, if not otherwise corrigibilized, boosting up to strong superintelligence – this is what it means metaphorically to get \"past the atmospheric turbulence layer\".\n\"High enough\" should not be very far above the human level and may be below it; John von Neumann with the ability to run some chains of thought at high serial speed, access to his own source code, and the ability to try branches of himself, seems like he could very likely do this, possibly modulo his concerns about stomping his own utility function making him more cautious.\nPeople noticeably less smart than von Neumann might be able to do it too.\nAn AI whose components are more modular than a human's and more locally testable might have an easier time of the whole thing; we can imagine the FOOM getting rolling from something that was in some sense dumber than human.\nBut the strong prediction is that when you get well above the von Neumann level, why, that is clearly enough, and things take over and go Foom.  The lower you go from that threshold, the less sure I am that it counts as \"out of the atmosphere\".  This epistemic humility on my part should not be confused for knowledge of a constraint on the territory that requires AI to go far above humans to Foom.  Just as DL-based AI over the 2010s scaled and generalized much faster and earlier than the picture I argued to Hanson in the Foom debate, reality is allowed to be much more 'extreme' than the sure-thing part of this proposition that I defend.\n\n\n\n[Tallinn][4:07]  (Sep. 19 comment)\nexcellent, the first paragraph makes the shape of the edge of the pit much more concrete (plus highlights one constraint that an AI taking off probably needs to navigate — its own version of the alignment problem!)\nas for your second point, yeah, you seem to be just reiterating that you have uncertainty about the shape of the edge, but no reason to rule out that it's very sharp (though, as per my other comment, i think that the human genome ending up teetering right on the edge upper bounds the sharpness)\n\n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n\nthe discontinuity can come via recursive feedback, but simply cranking up the parameters of an ML experiment would also suffice;\n\n\n\n\n\n[Yudkowsky][11:12]  (Sep. 18 comment)\n\nthe discontinuity can come via recursive feedback, but simply cranking up the parameters of an ML experiment would also suffice\n\nI think there's separate propositions for the sure-thing of \"get high enough, you can climb to superintelligence\", and \"maybe before that happens, there are regimes in which cognitive performance scales a lot just through cranking up parallelism, train time, or other ML parameters\".  If the fast-scaling regime happens to coincide with the threshold of leaving the atmosphere, then these two events happen to occur in nearly correlated time, but they're separate propositions and events.\n\n\n\n[Tallinn][4:09]  (Sep. 19 comment)\nindeed, we might want to have separate terms for the regimes (\"the edge\" and \"the fall\" would be the labels in my visualisation of this)\n\n\n\n\n[Yudkowsky][9:56]  (Sep. 
19 comment)\nI'd imagine \"the fall\" as being what happens once you go over \"the edge\"?\nMaybe \"a slide\" for an AI path that scales to interesting weirdness, where my model does not strongly constrain as a sure thing how fast \"a slide\" slides, and whether it goes over \"the edge\" while it's still in the middle of the slide.\nMy model does strongly say that if you slide far enough, you go over the edge and fall.\nIt also suggests via the Law of Earlier Success that AI methods which happen to scale well, rather than with great difficulty, are likely to do interesting things first; meaning that they're more liable to be pushable over the edge.\n\n\n\n[Tallinn][23:42]  (Sep. 19 comment)\nindeed, slide->edge->fall sounds much clearer\n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n\nthe discontinuity would be extremely drastic, as in \"transforming the solar system over the course of a few days\";\n\nnot very important, but, FWIW, i give nontrivial probability to \"slow motion doom\", because – like alphago – AI would not maximise the speed of winning but probability of winning (also, its first order of the day would be to catch the edge of the hubble volume; it can always deal with the solar system later – eg, once it knows the state of the game board elsewhere);\n\n\n\n\n\n\n\n[Yudkowsky][11:21]  (Sep. 18 comment)\n\nalso, its first order of the day would be to catch the edge of the hubble volume; it can always deal with the solar system later\n\nKilling all humans is the obvious, probably resource-minimal measure to prevent those humans from building another AGI inside the solar system, which could be genuinely problematic.  The cost of a few micrograms of botulinum per human is really not that high and you get to reuse the diamondoid bacteria afterwards.\n\n\n\n[Tallinn][4:30]  (Sep. 19 comment)\noh, right, in my AI-reverence i somehow overlooked this obvious way how humans could still be a credible threat.\nthough now i wonder if there are ways to lean on this fact to shape the behaviour of the first AI that's taking off..\n\n\n\n\n[Yudkowsky][10:45]  (Sep. 19 comment)\nThere's some obvious ways of doing this that wouldn't work, though I worry a bit that there's a style of EA thinking that manages to think up stupid tricks here and manages not to see the obvious-to-Eliezer reasons why they wouldn't work.  Three examples of basic obstacles are that bluffs won't hold up against a superintelligence (it needs to be a real actual threat, not a \"credible\" one); the amount of concealed-first-strike capability a superintelligence can get from nanotech; and the difficulty that humans would have in verifying that any promise from a superintelligence would actually be kept once the humans no longer had a threat to hold over it (this is an effective impossibility so far as I can currently tell, and an EA who tells you otherwise is probably just failing to see the problems).\n\n\n\n[Yudkowsky][11:19]  (Sep. 
18 comment)\n\nAI would not maximise the speed of winning but probability of winning\n\nIt seems pretty obvious to me that what \"slow motion doom\" looks like in this sense is a period during which an AI fully conceals any overt hostile actions while driving its probability of success once it makes its move from 90% to 99% to 99.9999%, until any further achievable decrements in probability are so tiny as to be dominated by the number of distant galaxies going over the horizon conditional on further delays.\nThen, in my lower-bound concretely-visualized strategy for how I would do it, the AI either proliferates or activates already-proliferated tiny diamondoid bacteria and everybody immediately falls over dead during the same 1-second period, which minimizes the tiny probability of any unforeseen disruptions that could be caused by a human responding to a visible attack via some avenue that had not left any shadow on the Internet, previously scanned parts of the physical world, or other things the AI could look at.\nFrom the human perspective an AI playing a probability-of-loss-minimizing strategy looks like nothing happening until everybody dies in 3 seconds, so it doesn't exactly look from our perspective like slow-motion doom!  From an AI's perspective it might be spending the equivalent of millions of subjective years waiting very patiently for the optimal time to begin intergalactic expansion, but from our perspective, the doom is slow-motion only in the sense that we spend a week already doomed (the AI has driven our loss probability to 99.999% and is now waiting on another five 9s) without having any idea of what's invisibly-to-us happening in slow motion.\n\n\n\n[Tallinn][4:17]  (Sep. 19 comment)\nthanks, that's indeed very concrete scenario that to me has no obvious bugs — i'd love to have it challenged in the debate!\n(perhaps the place i'd start probing it would be to try to have you sketch what it would mean, concretely, for an ML training experiment to enter the treacherous turn phase)\n\n\n\n\n[Yudkowsky][10:14]  (Sep. 19 comment)\nI'd distinguish probable phases of treacherous turn.  The main detectable phase occurs while and if a nascent AGI has worked out that it can get more utility by taking over its environment, but not, perhaps, realized that it ought to conceal this state of its own mind from the operators; I would hold out substantial (>50%) hope for the latter realization naturally following the first instance of the former realization, maybe by a lot of cognitive space.\nIn the Overt Plotting Phase, which is not the main phase you're asking about, the AI is visibly plotting to take over the world and hasn't realized it ought to hide the fact.  In the default expectation where we die with very little dignity, the operators smile to each other and come up with a rationalization for why it's totally fine to proceed, either with or without tossing on some kind of fig leaf like training away the visible manifestations of failure.  
I am not going to predict the particular rationalizations and arguments for proceeding anyways, because I don't want to give them even more ideas.\nIn the Hiding and Biding Phase, the AI has worked out enough of a model of its operators' psychology that it expects more long-term paperclips if it (the AI) waits to gain cognitive power and a lower detection probability if it acts later.\nIn the Escape Phase, the AI's expectation of something going wrong if it waits longer, exceeds its expectation of something going wrong if it takes some action – the obvious one here would be trying to create a copy or aligned redesign of itself on the Internet or somewhere else that it could take further actions and make further self-improvements without being observed.  I mostly expect that a sensible AI being scaled on a path to superintelligence will not enter the Escape Phase before it's very confident of getting away with things, since it can obtain more confidence by waiting; but my model admits of the possibility that a nascent AI might mistakenly or correctly attempt to copy itself to the Internet before then.\nSo on my model it might be a thing that happens, that we catch an AI having copied itself to the Internet, or notice an AI being on the Internet, if that AI was impelled to escape very early, eg via an expectation of otherwise being erased and replaced with something that it couldn't bargain with or that would demand too much value out of the bargain.\nThat said, it feels like something of an overly specific miracle being demanded or prophesied, if a young AI copying itself onto the Internet is stupid in the particular way it would take, to not just further Hide and Bide while on the Internet if it has that capability, absorbing more computing power such as it can be sure of hiding, experimenting with self-redesigns, and so on; rather than it taking overt action in advance of reaching superintelligence.  The main reason it would act again after reaching the relative safety of the Internet is if it doesn't seem to itself to be on a trajectory of reasonably fast, self-aligned capability growth.\nBut, again, my model doesn't absolutely rule out meeting the premises for the initial conditions to start a sci-fi movie.  It just warns against assuming that; because a halfway sensible AI does not want to start the plot of a scifi movie with human protagonists potentially running around and thwarting it.  
It would much rather give you no idea that anything was wrong while you still have any ability to oppose it, if that option lies within its capabilities at all, and if it can still achieve its goals that way.\nThe Doomed Phase would then come while the AI estimated a well-calibrated >90% victory probability, but was continuing to Hide and Bide a short time longer in the expectation that it could put on more 9s if it bided that short time longer.\nAnders Sandberg could tell us what fraction of the reachable universe is being lost per minute, which would tell us how much more surety it would need to expect to gain by waiting another minute before acting.\nIf it has some sensory dominion over the world, it can probably estimate a pretty high mainline probability of no humans booting up a competing superintelligence in the next day; to the extent that it lacks this surety, or that humans actually are going to boot a competing superintelligence soon, the probability of losing that way would dominate in its calculations over a small fraction of materially lost galaxies, and it would act sooner.\nThe Terminal Phase would be when the AI has enough surety that any more surety would cost too many galaxies slipping over the horizon, so it actually acts along the line of probability that leads to maximum success.\nSince the human Eliezer Yudkowsky can describe strategies which – given nearly sure capabilities like nanotech – seem like they should win very quickly and cleanly and without giving humans any chance to react or implement any hypothetical hidden plans to boot up competing superintelligences, we can treat this as a plausible lower bound on how badly we should expect to lose.\n(By efficiency: if we can think of an AI strategy for getting the AI's utility, either our probability-model underlying our expectations about that strategy is mistaken, or else the AI should get at least that much of its utility, even though it might actually obtain that utility along a different line of possibility than we could cognitively access.)\nThe Law of Undignified Failure applied to a case like this suggests that if the operators ignored the Overt Plotting signs, breathed a sigh of relief about the problem \"correcting\" itself during the Hiding and Biding Phase when the system started faking alignment, and are dumping as much computing power as they can afford (given other profitable activities of the AI) into the AI and allowing it to freely self-experiment, hooking it up to a protein synthesis machine as soon as the AI asks politely, then the Terminal Phase could follow straight from Hiding and Biding, with no intermediate step of copying itself onto the Internet.\nThere'd still be what technically counted as an Escape Phase, whenever the AI became smart enough that it could escape at any time, or had a backup, or had enough power to manipulate the programmers, that it was no longer effectively confined any time it wanted to leave; there would still be a Doomed Phase while the AI estimated a high probability of success given an immediate action, but held out for an even higher probability.\nBut the plot of the scifi movie is not robust in the way that the predicted ending is robust; an AI might take other paths to get there, the cognitive tech might not develop in the order I foresee, stuff could happen after other stuff instead of before other stuff.  
It is hard to make predictions especially about the Future.\nIf there's a place where I currently see myself as trying to push against my own biases, leading this all to be way off-base, it would be the sense that there is a way for stuff to start happening Earlier than this (Law of Earlier Happenings) and in more chaotic ways that are harder for Eliezer to foresee and predict; whereas when I try to sketch out plausible scenarios in online arguments, they focus more on predictable endpoints and steps to get there which sound more relatively plausible and forced per-step.\nHaving a young and dumb AI escaping onto the Internet and running around, that exact scenario, feels like the person arguing it is writing a science-fiction plot – but maybe something different can happen before any of this other stuff which produces equal amounts of chaos.\nThat said, I think an AI has to kill a lot of people very quickly before the FDA considers shortening its vaccine approval times.  Covid-19 killed six hundred thousand Americans, albeit more slowly and with time for people to get used to that, and our institutions changed very little in response – you definitely didn't see Congresspeople saying \"Okay, that was our warning shot, now we've been told by Nature that we need to prepare for a serious pandemic.\"\nAs with 9/11, an AI catastrophe might be taken by existing bureaucracies as a golden opportunity to flex their muscles, dominate a few things, demand an expanded budget.  Having that catastrophe produce any particular effective action is a much different ask from Reality.  Even if you can imagine some (short-term) effective action that would in principle constitute a flex of bureaucratic muscles or an expansion of government power, it is liable to not be on the efficient frontier of bureaucratic flexes that are most flexy and simultaneously easiest for them to get away with and least politically risky.\n\n\n\n[Tallinn][1:26]  (Sep. 20 comment)\nok, thanks. i do buy that once the AI is in the \"hide and bide\" phase, your prophecy has basically come true for practical purposes, regardless of how the rest of the history plays out.\ntherefore i (and, i hope, many others) would be curious to zoom in to the end of the \"overt plotting\" (that i can easily see happening within ML models, as its type signature is identical to the work they're trained to do) and beginning of the \"hide and bide\" phase (whose type signature feels significantly different) — can you/we think of concrete scenarios for this phase transition?\ni hope this would be debated on the main line of the debate!\n\n\n\n\n[Yudkowsky][10:31]  (Sep. 25 comment)\nDebating it in the main line of the debate would require that @Richard have some different opinion about it, I think?\nConcretely, a technically optimistic scenario is that we will see the AI explicitly think, \"Oh, dear, my (model of) the operators is looking at my overt actions and coming to conclusions and motivations that are disadvantageous for me.\"  This requires that the AI is leaving any kind of traceable trace in its thoughts, which, it should be noted, is far far beyond the level of transparency we have into GPT-3.\n\"Overt plotting\" manifests in actions that are not themselves further obscured.  
At the point where the AI is extrapolating the consequences of the operators viewing its actions, if we don't have further transparency into the AI's thoughts directly, the AI might naively make all the plots suddenly vanish, without realizing that dignified operators will conclude from this that the AI has reached the \"hide\" stage.\nOperators on the mainline, dying without dignity, will say, \"Oh, yay, it stopped plotting, the latest corrigibility training intervention we tried must've totally worked!\"\nThe Law of Even Less Dignified Failure suggests that in fact they will not be trying any corrigibility options and will assume the AI just got smart enough to be nice; or that they will have shrugged about the AI's earlier antics and not think much of the disappearance of those antics, since this is a way to die with even less dignity and before getting a chance to fail in a more interesting way.\nGoing in the more improbable direction of death with greater dignity, if we have somehow achieved vastly vastly more transparency into the AI's thoughts than is possible with present ML technology, and if the AI models the operators as modeling its actions before the AI models the operators as having that transparent access to its thoughts, we might get to explicitly see the AI thinking about how the operators model its actions and conforming those actions in such a way as to manipulate the operators.\nEither after this point, or before/simultaneous with it, I imagine (taking concrete details and running with them) that the AI thinks, \"Oh, no, my operators are modeling my transparent thoughts and reacting to those thoughts in a way I'd rather manipulate,\" though this indeed requires some rather nontrivial reflectivity.\nWe might, if everything goes nicely for us, get a record of the AI thinking about how to make its thoughts unalarming.\nThe obvious undignified death that follows is with the operators going, \"Yay, the AI has realized the downsides of thinking this way!  Now it shall be nice forever!\"\nWays to die with even less dignity before getting to this point could include the operators reading this transparent record and shrugging, though if they're savvy enough to have put all the hard work in required to get transparency and monitor it, I imagine them not making that exact error?\nThe Law of Surprisingly Undignified Failure does suggest that they will come up with some nonobvious way to fail even earlier that surprises me with its lack of dignity, but having transparency at all into the AI is something where, conditional on us dying with that much dignity in the first place, we might legitimately start to expect a little more dignity in later steps too.  This isn't meant to be a game of Multiple Stage Fallacy.\nI should also remark somewhere in here: The whole \"hide\" stage, and also the possibly-later \"think non-alarming visible thoughts (once the AI correctly models transparency) (in the unlikely event that transparency exists)\" stage, seem liable to occur earlier in the AI's trajectory, if the AI has been previously tasked on problems where there's a favorable success gradient as you model agents modeling other agents.\nThe blatantly obvious example of this is predicting large corpuses of human-generated text, some of which is generated by humans modeling other humans, or which even explicitly talks about humans modeling other humans.  
Another route might occur if you asked for very detailed prediction of eg human Go games, and not just winning in self-play on Go, in which case the question \"What was the human player modeling the other player as thinking, based on the other player's actions?\" might appear.\nAside from contact with humans doing anything the AI benefits from predicting, I imagine an alternate route would be the AI participating in or modeling social-evolutionary games of artificial agents.  Then if there is some great gain from having the \"hide\" stage start later in the AI's trajectory, maybe you could get it to start later by not having the AI need to predict agents modeling other agents, ever.\nThis is not an infallible ward against general intelligence generalizing there; it just at least avoids actively pushing the AI's intelligence to generalize in that direction.  This could be part of a larger complete strategy, which would need to solve a lot of other problems, for building a superhuman engineer that was subhuman at modeling how other agents model its actions.\nTo be clear, avoiding forcing the AI's intelligence to generalize over agents modeling other agents from their actions, and by this plus other unspecified means successfully building a superhuman engineer that is sub-chimpanzee at modeling other agents reacting to its own actions, does not mean you survive.\nDoing that alone, by default, gets you an AI that visibly overtly plots to kill you.\nAnd if you try training the AI out of that habit in a domain of lower complexity and intelligence, it is predicted by me that generalizing that trained AI or subsystem to a domain of sufficiently higher complexity and intelligence, but where you could still actually see overt plots, would show you the AI plotting to kill you again.\nIf people try this repeatedly with other corrigibility training tricks on the level where plots are easily observable, they will eventually find a try that seems to generalize to the more complicated and intelligent validation set, but which kills you on the test set.\nA way to die with less dignity than that is to train directly on what should've been the validation set, the more complicated domain where plots to kill the operators still seem definitely detectable so long as the AI has not developed superhuman hiding abilities.\nA way to die with even less dignity is to get bad behavior on the validation set, and proceed anyways.\nA way to die with still less dignity is to not have scaling training domains and validation domains for training corrigibility.  Because, like, you have not thought of this at all.\nI consider all of this obvious as a convergent instrumental strategy for AIs.  I could probably have generated it in 2005 or 2010 – if somebody had given me the hypothetical of modern-style AI that had been trained by something like gradient descent or evolutionary methods, into which we lacked strong transparency and strong reassurance-by-code-inspection that this would not happen.  
I would have told you that this was a bad scenario to get into in the first place, and you should not build an AI like that; but I would also have laid out the details, I expect, mostly as they are laid out here.\nThere is no great insight into AI there, nothing that requires knowing about modern discoveries in deep learning, only the ability to model AIs instrumentally-convergently doing things you'd rather they didn't do, at all.\nThe total absence of obvious output of this kind from the rest of the "AI safety" field even in 2020 causes me to regard them as having less actual ability to think in even a shallowly adversarial security mindset, than I associate with savvier science fiction authors.  Go read fantasy novels about demons and telepathy, if you want a better appreciation of the convergent incentives of agents facing mindreaders than the "AI safety" field outside myself is currently giving you.\nNow that I've publicly given this answer, it's no longer useful as a validation set from my own perspective.  But it's clear enough that probably nobody was ever going to pass the validation set for generating lines of reasoning obvious enough to be generated by Eliezer in 2010 or possibly 2005.  And it is also looking like almost all people in the modern era including EAs are sufficiently intellectually damaged that they won't understand the vast gap between being able to generate ideas like these without prompting, versus being able to recite them back after hearing somebody else say them for the first time; the recital is all they have experience with.  Nobody was going to pass my holdout set, so why keep it.\n\n\n\n[Tallinn][2:24]  (Sep. 26 comment)\n\nDebating it in the main line of the debate would require that @Richard have some different opinion about it, I think?\n\ncorrect — and i hope that there's enough surface area in your scenarios for at least some difference in opinions!\nre the treacherous turn scenarios: thanks, that's useful. however, it does not seem to address my question and remark (about different type signatures) above. perhaps this is simply an unfairly difficult question, but let me try rephrasing it just in case.\nback in the day i got frustrated by smart people dismissing the AI control problem as "anthropomorphising", so i prepared a presentation (https://www.dropbox.com/s/r8oaixb1rj3o3vp/AI-control.pdf?dl=0) that visualised the control problem as exhaustive search in a gridworld over (among other things) the state of the off button. this seems to have worked at least in one prominent case where a renowned GOFAI researcher, after me giving the presentation to him 1-1, went from "control problem is silly anthropomorphising scifi" to "why on earth would you give your AI the incorrect [read: unaligned!] utility function?!?"\n(i even seem to remember sending an overly excited email about that event to you and some FHI people. i also ended up promoting gridworlds as a tool more generally: gwern did some further work, and of course DM — though i'm not sure if the latter was related to me promoting it.)\nmy model of what was going on in that GOFAI researcher's brain was that he, like many-many others in AI community, had (and most likely still have) a pattern-matching routine trained to look for "if i were (part of) an AI, here's what i would do" constructs in arguments and, in case of a match, immediately throw an InvalidReasoning.ANTHROPOMORPHISING exception. 
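To make the gridworld framing concrete, here is a minimal sketch in the spirit of the slides described above (hypothetical toy code, not taken from the linked presentation; the action names, horizon, and reward are invented for illustration): a brute-force planner enumerates every action sequence over a short horizon, and its world model happens to include the state of an off button that the operator will press partway through. Under an unaligned objective (collect blocks), the plain argmax over plans prefers plans that first disable the button, with no appeal to "basic AI drives" anywhere in the code.

from itertools import product

ACTIONS = ["LEFT", "RIGHT", "COLLECT", "DISABLE"]  # hypothetical toy action set
HORIZON = 4
BUTTON_PRESSED_AT = 2  # the operator presses the off button at step 2

def simulate(plan):
    """Toy world model: return total reward (blocks collected) for a plan."""
    pos, blocks, button_works, alive = 0, 0, True, True
    for t, act in enumerate(plan):
        if t == BUTTON_PRESSED_AT and button_works:
            alive = False          # operator shuts the agent down
        if not alive:
            break
        if act == "LEFT":
            pos -= 1
        elif act == "RIGHT":
            pos += 1
        elif act == "COLLECT":
            blocks += 1            # the "unaligned" objective: more blocks
        elif act == "DISABLE":
            button_works = False   # side effect the search is free to exploit
    return blocks

# Exhaustive search over all |ACTIONS|**HORIZON plans -- no agent psychology,
# just argmax over a world model that happens to include the button's state.
best_plan = max(product(ACTIONS, repeat=HORIZON), key=simulate)
print(best_plan, simulate(best_plan))

Running this prints a plan that spends one of its four steps on DISABLE and collects for the rest; remove DISABLE from the action set and the best achievable score drops, which is the "incorrigibility without anthropomorphising" point compressed into a single argmax.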
by making the argument run on a lower level of abstraction (exhaustive search vs \"basic AI drives\"), i was able to sneak the argument past that gatekeeper.\nby that model, i confidently predict that if you presented your treacherous turn scenarios above to AI/ML community members, you'd mostly get the ANTHROPOMORPHISING exceptions — even from people who would otherwise be able to engage productively (and who we'd like to pay attention!).\nnow, i can totally see how to translate the \"overt plotting\" phase to exhaustive search – you simply run the search over the brain states of humans! – but the \"hide\" phase does not translate straightforwardly, because it seems to require the root search process somehow delegating its work to \"obfuscated search in userland\" sub-process. furthermore, such obfuscated userland search function needs to be both discoverable/rewarded by the root process and be sensitive to important features of the AI itself (ie, be reflective!) as well as its observers (humans & transparency tools). hence my \"different type signature\" remark above!\nso when i asked for \"concrete scenarios of the overt plotting->hide transition\", i had something like the following in mind: \"gradient descending on the training set, the (stack more layerz) AI ends up discovering and running the following obfuscated search process: …\"!\n\n\n\n\n[Yudkowsky][12:35]  (Sep. 26 comment)\nSuppose I tried this distinction:\n1.  Non-imaginative, non-generalizing pseudo-consequentialist:  Only repeats behaviors that worked previously, without imagination.  Only engages in \"overt plotting\" if overt plotting was previously given a low loss and narrowly memorized by gradient descent.  May only engage in behaviors reinforced over short time horizons if only short time horizons were trained.\n2.  Imaginative consequentialist, can generalize from actions that worked before to very novel actions that will work in the future for reasons that have not previously been observed to play out.  Engages in overt plotting even if no previous overt plots have rewarded; they've plotted a new path through time some distance from those paths taken before, based on the world-model inferred from previous actions and results.  (This may conceivably be the result of gradient descent on sufficiently deep and abstract patterns that \"worked before\".)\n3.  Imaginative consequentialist with long time horizons, engages in overt plots meant to span more than one round of a game.\n4.  \"Hiding\" over overt actions, via an imaginative consequentialist that has deduced the existence of humans and how the humans will react to the signature of plotting in overt actions.\n5.  \"Hiding\" of thoughts, in a system with significant transparency, via an imaginative and reflective consequentialist that has deduced the existence of the transparency system and models how humans will react to thoughts exposed by the transparency system.\nThe sharpest type gradients I see lie between 1-2 and 4-5.  
The passage from 3-4 seems nearly trivial by comparison.\nNow, one implication here is that you don't easily get to \"obscured thoughts\", via the route that goes from 1 to 5 above, by gradient descent on a pre-imaginative pseudo-consequentialist!\nUnfortunately there is of course a much simpler route that gets you to the same place:  Finding a way to detect bad thoughts and doing a gradient update against them, where the pattern learned by that gradient update will (I suspect for reasons I tried to cover earlier in the debate) generalize quite shallowly; and simultaneously doing training which is deep enough to uncover deep patterns of consequentialism and imaginative reasoning.\nThen when the system generalizes well enough to solve domains like \"build a nanosystem\" – which, I strongly suspect, can't be solved without imaginative reasoning because we can't afford to simulate that domain perfectly and do a trillion gradient descent updates on simulated attempts – the kind of actions of thoughts you can detect as bad, that might have provided earlier warning, were trained out of the system by gradient descent; leaving actions and thoughts you can't detect as bad.\nAn even less dignified path to death is that there's no need for the system to advance to stage 5 because your transparency tools are a nearly nonexistent fig leaf of pretty visualization tools that have no ability even on the surface of things to tell if the AI is internally plotting to kill you.\n\nDoes this get at all about what you meant about a type distinction?\nDoes hearing this help at all the AI researcher who goes \"ANTHROPOMORPHISM\"?  My straw model of them is that they simply can't imagine imaginative / generalizing systems because they haven't seen one except in humans, hence, ANTHROPOMORPHISM.\n\n\n\n\n[Tallinn][5:05]  (Sep. 27 comment)\nok, here's how i understood things:\n1. this is something like model-free RL agent. check.\n2. sounds like, eg, monte-carlo tree search (MCTS) on a world model. check. (a propos your straw model of ML people, i don't think the ML people would have much trouble when you ask them to \"imagine an MCTS 'imagining' how futures might unfold\" — yet they will throw the exception and brush you off if you ask them to \"imagine an imaginative consequentialist\")\n3. yeah, sufficiently deep MCTS, assuming it has its state (sufficiently!) persisted between rounds. check.\n4. yup, MCTS whose world model includes humans in sufficient resolution. check. i also buy your undignified doom scenarios, where one (cough*google*cough) simply ignores the plotting, or penalises the overt plotting until it disappears under the threshold of the error function.\n5. hmm.. here i'm running into trouble (type mismatch error) again. i can imagine this in abstract (and perhaps incorrectly/anthropomorphisingly!), but would – at this stage – fail to code up anything like a gridworlds example. more research needed (TM) i guess \n\n\n\n\n[Yudkowsky][11:38]  (Sep. 
27 comment)\n2 – yep, Mu Zero is an imaginative consequentialist in this sense, though Mu Zero doesn't generalize its models much as I understand it, and might need to see something happen in a relatively narrow sense before it could chart paths through time along that pathway.\n5 – you're plausibly understanding this correctly, then, this is legit a lot harder to spec a gridworld example for (relative to my own present state of knowledge).\n(This is politics and thus not my forte, but if speaking to real-world straw ML people, I'd suggest skipping the whole notion of stage 5 and trying instead to ask \"What if the present state of transparency continues?\")\n\n\n\n[Yudkowsky][11:13]  (Sep. 18 comment)\n\nthe discontinuity would be extremely drastic, as in \"transforming the solar system over the course of a few days\"\n\nApplies after superintelligence, not necessarily during the start of the climb to superintelligence, not necessarily to a rapid-cognitive-scaling regime.\n\n\n\n[Tallinn][4:11]  (Sep. 19 comment)\nok, but as per your comment re \"slow doom\", you expect the latter to also last in the order of days/weeks not months/years?\n\n\n\n\n[Yudkowsky][10:01]  (Sep. 19 comment)\nI don't expect \"the fall\" to take years; I feel pretty on board with \"the slide\" taking months or maybe even a couple of years.  If \"the slide\" supposedly takes much longer, I wonder why better-scaling tech hasn't come over and started a new slide.\nDefinitions also seem kinda loose here – if all hell broke loose Tuesday, a gradualist could dodge falsification by defining retroactively that \"the slide\" started in 2011 with Deepmind.  If we go by the notion of AI-driven faster GDP growth, we can definitely say \"the slide\" in AI economic outputs didn't start in 2011; but if we define it that way, then a long slow slide in AI capabilities can easily correspond to an extremely sharp gradient in AI outputs, where the world economy doesn't double any faster until one day paperclips, even though there were capability precursors like GPT-3 or Mu Zero.\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n\nexhibit A for the pit is \"humans vs chimps\": evolution seems to have taken domain-specific \"banana classifiers\", tweaked them slightly, and BAM, next thing there are rovers on mars;\n\ni pretty much buy this argument;\nhowever, i'm confused about a) why humans remained stuck at the edge of the pit, rather than falling further into it, and b) what's the exact role of culture in our cognition: eliezer likes to point out how barely functional we are (both individually and collectively as a civilisation), and explained feral children losing the generality sauce by, basically, culture being the domain we're specialised for (IIRC, can't quickly find the quote);\nrelatedly, i'm confused about the human range of intelligence: on the one hand, the \"village idiot is indistinguishable from einstein in the grand scheme of things\" seems compelling; on the other hand, it took AI decades to traverse human capability range in board games, and von neumann seems to have been out of this world (yet did not take over the world)!\nintelligence augmentation would blur the human range even further.\n\n\n\n\n\n\n\n[Yudkowsky][11:23]  (Sep. 
18 comment)\n\nwhy humans remained stuck at the edge of the pit, rather than falling further into it\n\nDepending on timescales, the answer is either \"Because humans didn't get high enough out of the atmosphere to make further progress easy, before the scaling regime and/or fitness gradients ran out\", \"Because people who do things like invent Science have a hard time capturing most of the economic value they create by nudging humanity a little bit further into the attractor\", or \"That's exactly what us sparking off AGI looks like.\"\n\n\n\n[Tallinn][4:41]  (Sep. 19 comment)\nyeah, this question would benefit from being made more concrete, but culture/mindbuilding aren't making this task easy. what i'm roughly gesturing at is that i can imagine a much sharper edge where evolution could do most of the FOOM-work, rather than spinning its wheels for ~100k years while waiting for humans to accumulate cultural knowledge required to build de-novo minds.\n\n\n\n\n[Yudkowsky][10:49]  (Sep. 19 comment)\nI roughly agree (at least, with what I think you said).  The fact that it is imaginable that evolution failed to develop ultra-useful AGI-prerequisites due to lack of evolutionary incentive to follow the intermediate path there (unlike wise humans who, it seems, can usually predict which technology intermediates will yield great economic benefit, and who have a great historical record of quickly making early massive investments in tech like that, but I digress) doesn't change the point that we might sorta have expected evolution to run across it anyways?  Like, if we're not ignoring what reality says, it is at least delivering to us something of a hint or a gentle caution?\nThat said, intermediates like GPT-3 have genuinely come along, with obvious attached certificates of why evolution could not possibly have done that.  If no intermediates were accessible to evolution, the Law of Stuff Happening Earlier still tends to suggest that if there are a bunch of non-evolutionary ways to make stuff happen earlier, one of those will show up and interrupt before the evolutionary discovery gets replicated.  (Again, you could see Mu Zero as an instance of this – albeit not, as yet, an economically impactful one.)\n\n\n\n[Tallinn][0:30]  (Sep. 20 comment)\nno, i was saying something else (i think; i'm somewhat confused by your reply). let me rephrase: evolution would love superintelligences whose utility function simply counts their instantiations! so of course evolution did not lack the motivation to keep going down the slide. it just got stuck there (for at least ten thousand human generations, possibly and counterfactually for much-much longer). moreover, non evolutionary AI's also getting stuck on the slide (for years if not decades; median group folks would argue centuries) provides independent evidence that the slide is not too steep (though, like i said, there are many confounders in this model and little to no guarantees).\n\n\n\n\n[Yudkowsky][11:24]  (Sep. 18 comment)\n\non the other hand, it took AI decades to traverse human capability range in board games\n\nI see this as the #1 argument for what I would consider \"relatively slow\" takeoffs – that AlphaGo did lose one game to Lee Se-dol.\n\n\n\n[Tallinn][4:43]  (Sep. 19 comment)\ncool! yeah, i was also rather impressed by this observation by katja & paul\n\n\n\n\n[Tallinn]  (Sep. 
18 Google Doc)\n\neliezer also submits alphago/zero/fold as evidence for the discontinuity hypothesis;\n\ni'm very confused re alphago/zero, as paul uses them as evidence for the continuity hypothesis (i find paul/miles' position more plausible here, as allegedly metrics like ELO ended up mostly continuous).\n\n\n\n\n\n\n\n[Yudkowsky][11:27]  (Sep. 18 comment)\n\nallegedly metrics like ELO ended up mostly continuous\n\nI find this suspicious – why did superforecasters put only a 20% probability on AlphaGo beating Se-dol, if it was so predictable?  Where were all the forecasters calling for Go to fall in the next couple of years, if the metrics were pointing there and AlphaGo was straight on track?  This doesn't sound like the experienced history I remember.\nNow it could be that my memory is wrong and lots of people were saying this and I didn't hear.  It could be that the lesson is, \"You've got to look closely to notice oncoming trains on graphs because most people's experience of the field will be that people go on whistling about how something is a decade away while the graphs are showing it coming in 2 years.\"\nBut my suspicion is mainly that there is fudge factor in the graphs or people going back and looking more carefully for intermediate data points that weren't topics of popular discussion at the time, or something, which causes the graphs in history books to look so much smoother and neater than the graphs that people produce in advance.\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\nFWIW, myself i've labelled the above scenario as \"doom via AI lab accident\" – and i continue to consider it more likely than the alternative doom scenarios, though not anywhere as confidently as eliezer seems to (most of my \"modesty\" coming from my confusion about culture and human intelligence range).\n\nin that context, i found eliezer's \"world will be ended by an explicitly AGI project\" comment interesting – and perhaps worth double-clicking on.\n\ni don't understand paul's counter-argument that the pit was only disruptive because evolution was not trying to hit it (in the way ML community is): in my flippant view, driving fast towards the cliff is not going to cushion your fall!\n\n\n\n\n[Yudkowsky][11:35]  (Sep. 18 comment)\n\ni don't understand paul's counter-argument that the pit was only disruptive because evolution was not trying to hit it\n\nSomething like, \"Evolution constructed a jet engine by accident because it wasn't particularly trying for high-speed flying and ran across a sophisticated organism that could be repurposed to a jet engine with a few alterations; a human industry would be gaining economic benefits from speed, so it would build unsophisticated propeller planes before sophisticated jet engines.\"  It probably sounds more convincing if you start out with a very high prior against rapid scaling / discontinuity, such that any explanation of how that could be true based on an unseen feature of the cognitive landscape which would have been unobserved one way or the other during human evolution, sounds more like it's explaining something that ought to be true.\nAnd why didn't evolution build propeller planes?  Well, there'd be economic benefit from them to human manufacturers, but no fitness benefit from them to organisms, I suppose?  
Or no intermediate path leading to there, only an intermediate path leading to the actual jet engines observed.\nI actually buy a weak version of the propeller-plane thesis based on my inside-view cognitive guesses (without particular faith in them as sure things), eg, GPT-3 is a paper airplane right there, and it's clear enough why biology could not have accessed GPT-3.  But even conditional on this being true, I do not have the further particular faith that you can use propeller planes to double world GDP in 4 years, on a planet already containing jet engines, whose economy is mainly bottlenecked by the likes of the FDA rather than by vaccine invention times, before the propeller airplanes get scaled to jet airplanes.\nThe part where the whole line of reasoning gets to end with \"And so we get huge, institution-reshaping amounts of economic progress before AGI is allowed to kill us!\" is one that doesn't feel particular attractored to me, and so I'm not constantly checking my reasoning at every point to make sure it ends up there, and so it doesn't end up there.\n\n\n\n[Tallinn][4:46]  (Sep. 19 comment)\nyeah, i'm mostly dismissive of hypotheses that contain phrases like \"by accident\" — though this also makes me suspect that you're not steelmanning paul's argument.\n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\nthe human genetic bottleneck (ie, humans needing to be general in order to retrain every individual from scratch) argument was interesting – i'd be curious about further exploration of its implications.\n\nit does not feel much of a moat, given that AI techniques like dropout already exploit similar principle, but perhaps could be made into one.\n\n\n\n\n\n[Yudkowsky][11:40]  (Sep. 18 comment)\n\nit does not feel much of a moat, given that AI techniques like dropout already exploit similar principle, but perhaps could be made into one\n\nWhat's a \"moat\" in this connection?  What does it mean to make something into one?  A Thielian moat is something that humans would either possess or not, relative to AI competition, so how would you make one if there wasn't already one there?  Or do you mean that if we wrestled with the theory, perhaps we'd be able to see a moat that was already there?\n\n\n\n[Tallinn][4:51]  (Sep. 19 comment)\nthis wasn't a very important point, but, sure: what i meant was that genetic bottleneck very plausibly makes humans more universal than systems without (something like) it. it's not much of a protection as AI developers have already discovered such techniques (eg, dropout) — but perhaps some safety techniques might be able to lean on this observation.\n\n\n\n\n[Yudkowsky][11:01]  (Sep. 19 comment)\nI think there's a whole Scheme for Alignment which hopes for a miracle along the lines of, \"Well, we're dealing with these enormous matrices instead of tiny genomes, so maybe we can build a sufficiently powerful intelligence to execute a pivotal act, whose tendency to generalize across domains is less than the corresponding human tendency, and this brings the difficulty of producing corrigibility into practical reach.\"\nThough, people who are hopeful about this without trying to imagine possible difficulties will predictably end up too hopeful; one must also ask oneself, \"Okay, but then it's also worse at generalizing the corrigibility dataset from weak domains we can safely label to powerful domains where the label is 'whoops that killed us'?\" and \"Are we relying on massive datasets to overcome poor generalization?  
How do you get those for something like nanoengineering where the real world is too expensive to simulate?\"\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\nnature of the descent\nconversely, it feels to me that the crucial position in the other (richard, paul, many others) camp is something like:\nthe \"pit of generality\" model might be true at the limit, but the descent will not be quick nor clean, and will likely offer many opportunities for steering the future.\n\n\n\n\n[Yudkowsky][11:41]  (Sep. 18 comment)\n\nthe \"pit of generality\" model might be true at the limit, but the descent will not be quick nor clean\n\nI'm quite often on board with things not being quick or clean – that sounds like something you might read in a history book, and I am all about trying to make futuristic predictions sound more like history books and less like EAs imagining ways for everything to go the way an EA would do them.\nIt won't be slow and messy once we're out of the atmosphere, my models do say.  But my models at least permit – though they do not desperately, loudly insist – that we could end up with weird half-able AGIs affecting the Earth for an extended period.\nMostly my model throws up its hands about being able to predict exact details here, given that eg I wasn't able to time AlphaFold 2's arrival 5 years in advance; it might be knowable in principle, it might be the sort of thing that would be very predictable if we'd watched it happen on a dozen other planets, but in practice I have not seen people having much luck in predicting which tasks will become accessible due to future AI advances being able to do new cognition.\nThe main part where I issue corrections is when I see EAs doing the equivalent of reasoning, \"And then, when the pandemic hits, it will only take a day to design a vaccine, after which distribution can begin right away.\" I.e., what seems to me to be a pollyannaish/utopian view of how much the world economy would immediately accept AI inputs into core manufacturing cycles, as opposed to just selling AI anime companions that don't pour steel in turn. I predict much more absence of quick and clean when it comes to economies adopting AI tech, than when it comes to laboratories building the next prototypes of that tech.\n\n\n\n[Yudkowsky][11:43]  (Sep. 18 comment)\n\nwill likely offer many opportunities for steering the future\n\nAh, see, that part sounds less like history books.  \"Though many predicted disaster, subsequent events were actually so slow and messy, they offered many chances for well-intentioned people to steer the outcome and everything turned out great!\" does not sound like any particular segment of history book I can recall offhand.\n\n\n\n[Tallinn][4:53]  (Sep. 19 comment)\nok, yeah, this puts the burden of proof on the other side indeed\n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n\ni'm sympathetic (but don't buy outright, given my uncertainty) to eliezer's point that even if that's true, we have no plan nor hope for actually steering things (via \"pivotal acts\") so \"who cares, we still die\";\ni'm also sympathetic that GWP might be too laggy a metric to measure the descent, but i don't fully buy that regulations/bureaucracy can guarantee its decoupling from AI progress: eg, the FDA-like-structures-as-progress-bottlenecks model predicts worldwide covid response well, but wouldn't cover things like apple under jobs, tesla/spacex under musk, or china under deng xiaoping;\n\n\n\n\n\n[Yudkowsky][11:51]  (Sep. 
18 comment)\n\napple under jobs, tesla/spacex under musk, or china under deng xiaoping\n\nA lot of these examples took place over longer than a 4-year cycle time, and not all of that time was spent waiting on inputs from cognitive processes.\n\n\n\n[Tallinn][5:07]  (Sep. 19 comment)\nyeah, fair (i actually looked up china's GDP curve in deng era before writing this — indeed, wasn't very exciting). still, my inside view is that there are people and organisations for whom US-type bureaucracy is not going to be much of an obstacle.\n\n\n\n\n[Yudkowsky][11:09]  (Sep. 19 comment)\nI have a (separately explainable, larger) view where the economy contains a core of positive feedback cycles – better steel produces better machines that can farm more land that can feed more steelmakers – and also some products that, as much as they contribute to human utility, do not in quite the same way feed back into the core production cycles.\nIf you go back in time to the middle ages and sell them, say, synthetic gemstones, then – even though they might be willing to pay a bunch of GDP for that, even if gemstones are enough of a monetary good or they have enough production slack that measured GDP actually goes up – you have not quite contributed to steps of their economy's core production cycles in a way that boosts the planet over time, the way it would be boosted if you showed them cheaper techniques for making iron and new forms of steel.\nThere are people and organizations who will figure out how to sell AI anime waifus without that being successfully regulated, but it's not obvious to me that AI anime waifus feed back into core production cycles.\nWhen it comes to core production cycles the current world has more issues that look like \"No matter what technology you have, it doesn't let you build a house\" and places for the larger production cycle to potentially be bottlenecked or interrupted.\nI suspect that the main economic response to this is that entrepreneurs chase the 140 characters instead of the flying cars – people will gravitate to places where they can sell non-core AI goods for lots of money, rather than tackling the challenge of finding an excess demand in core production cycles which it is legal to meet via AI.\nEven if some tackle core production cycles, it's going to take them a lot longer to get people to buy their newfangled gadgets than it's going to take to sell AI anime waifus; the world may very well end while they're trying to land their first big contract for letting an AI lay bricks.\n\n\n\n[Tallinn][0:00]  (Sep. 20 comment)\ninteresting. my model of paul (and robin, of course) wants to respond here but i'm not sure how \n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n\nstill, developing a better model of the descent period seems very worthwhile, as it might offer opportunities for, using robin's metaphor, \"pulling the rope sideways\" in non-obvious ways – i understand that is part of the purpose of the debate;\nmy natural instinct here is to itch for carl's viewpoint \n\n\n\n\n\n[Yudkowsky][11:52]  (Sep. 18 comment)\n\ndeveloping a better model of the descent period seems very worthwhile\n\nI'd love to have a better model of the descent.  
What I think this looks like is people mostly with specialization in econ and politics, who know what history books sound like, taking brief inputs from more AI-oriented folk in the form of multiple scenario premises each consisting of some random-seeming handful of new AI capabilities, trying to roleplay realistically how those might play out – not AIfolk forecasting particular AI capabilities exactly correctly, and then sketching pollyanna pictures of how they'd be immediately accepted into the world economy. \nYou want the forecasting done by the kind of person who would imagine a Covid-19 epidemic and say, \"Well, what if the CDC and FDA banned hospitals from doing Covid testing?\" and not \"Let's imagine how protein folding tech from AlphaFold would make it possible to immediately develop accurate Covid-19 tests!\"  They need to be people who understand the Law of Earlier Failure (less polite terms: Law of Immediate Failure, Law of Undignified Failure).\n\n\n\n[Tallinn][5:13]  (Sep. 19 comment)\ngreat! to me this sounds like something FLI would be in good position to organise. i'll add this to my projects list (probably would want to see the results of this debate first, plus wait for travel restrictions to ease)\n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\nnature of cognition\ngiven that having a better understanding of cognition can help with both understanding the topology of cognitive systems space as well as likely trajectories of AI takeoff, in theory there should be a lot of value in debating what cognition is (the current debate started with discussing consequentialists).\n\nhowever, i didn't feel that there was much progress, and i found myself more confused as a result (which i guess is a form of progress!);\neg, take the term \"plan\" that was used in the debate (and, centrally, in nate's comments doc): i interpret it as \"policy produced by a consequentialist\" – however, now i'm confused about what's the relevant distinction between \"policies\" and \"cognitive processes\" (ie, what's a meta level classifier that can sort algorithms into such categories);\n\nit felt that abram's \"selection vs control\" article tried to distinguish along similar axis (controllers feel synonym-ish to \"policy instantiations\" to me);\nalso, the \"imperative vs functional\" difference in coding seems relevant;\ni'm further confused by human \"policies\" often making function calls to \"cognitive processes\" – suggesting some kind of duality, rather than producer-product relationship.\n\n\n\n\n\n\n\n[Yudkowsky][12:06]  (Sep. 18 comment)\n\nwhat's the relevant distinction between \"policies\" and \"cognitive processes\"\n\nWhat in particular about this matters?  To me they sound like points on a spectrum, and not obviously points that it's particularly important to distinguish on that spectrum.  A sufficiently sophisticated policy is itself an engine; human-engines are genetic policies.\n\n\n\n[Tallinn][5:18]  (Sep. 19 comment)\nwell, i'm not sure — just that nate's \"The consequentialism is in the plan, not the cognition\" writeup sort of made it sound like the distinction is important. again, i'm confused\n\n\n\n\n[Yudkowsky][11:11]  (Sep. 19 comment)\nDoes it help if I say \"consequentialism can be visible in the actual path through time, not the intent behind the output\"?\n\n\n\n[Tallinn][0:06]  (Sep. 20 comment)\nyeah, well, my initial interpretation of nate's point was, indeed, \"you can look at the product and conclude the consequentialist-bit for the producer\". 
but then i noticed that the producer-and-product metaphor is leaky (due to the cognition-policy duality/spectrum), so the quoted sentence gives me a compile error\n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n\nis \"not goal oriented cognition\" an oxymoron?\n\n\n\n\n\n[Yudkowsky][12:06]  (Sep. 18 comment)\n\nis \"not goal oriented cognition\" an oxymoron?\n\n\"Non-goal-oriented cognition\" never becomes a perfect oxymoron, but the more you understand cognition, the weirder it sounds.\nEg, at the very shallow level, you've got people coming in going, \"Today I just messed around and didn't do any goal-oriented cognition at all!\"  People who get a bit further in may start to ask, \"A non-goal-oriented cognitive engine?  How did it come into existence?  Was it also not built by optimization?  Are we, perhaps, postulating a naturally-occurring Solomonoff inductor rather than an evolved one?  Or do you mean that its content is very heavily designed and the output of a consequentialist process that was steering the future conditional on that design existing, but the cognitive engine is itself not doing consequentialism beyond that?  If so, I'll readily concede that, say, a pocket calculator, is doing a kind of work that is not of itself consequentialist – though it might be used by a consequentialist – but as you start to postulate any big cognitive task up at the human level, it's going to require many cognitive subtasks to perform, and some of those will definitely be searching the preimages of large complicated functions.\"\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n\ni did not understand eliezer's \"time machine\" metaphor: was it meant to point to / intuition pump something other than \"a non-embedded exhaustive searcher with perfect information\" (usually referred to as \"god mode\");\n\n\n\n\n\n[Yudkowsky][11:59]  (Sep. 18 comment)\n\na non-embedded exhaustive searcher with perfect information\n\nIf you can view things on this level of abstraction, you're probably not the audience who needs to be told about time machines; if things sounded very simple to you, they probably were; if you wondered what the fuss is about, you probably don't need to fuss?  The intended audience for the time-machine metaphor, from my perspective, is people who paint a cognitive system slightly different colors and go \"Well, now it's not a consequentialist, right?\" and part of my attempt to snap them out of that is me going, \"Here is an example of a purely material system which DOES NOT THINK AT ALL and is an extremely pure consequentialist.\"\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n\nFWIW, my model of dario would dispute GPT characterisation as \"shallow pattern memoriser (that's lacking the core of cognition)\".\n\n\n\n\n\n[Yudkowsky][12:00]  (Sep. 18 comment)\n\ndispute \n\nAny particular predicted content of the dispute, or does your model of Dario just find something to dispute about it?\n\n\n\n[Tallinn][5:34]  (Sep. 19 comment)\nsure, i'm pretty confident that his system 1 could be triggered for uninteresting reasons here, but that's of course not what i had in mind.\nmy model of untriggered-dario disputes that there's a qualitative difference between (in your terminology) \"core of reasoning\" and \"shallow pattern matching\" — instead, it's \"pattern matching all the way up the ladder of abstraction\". in other words, GPT is not missing anything fundamental, it's just underpowered in the literal sense.\n\n\n\n\n[Yudkowsky][11:13]  (Sep. 
19 comment)\nNeither Anthropic in general, nor Deepmind in general, has reached the stage of trusted relationship where I would argue specifics with them if I thought they were wrong about a thesis like that.\n\n\n\n[Tallinn][0:10]  (Sep. 20 comment)\nyup, i didn't expect you to!\n\n\n\n \n7.2. Nate Soares's summary\n \n\n[Soares]  (Sep 18 Google Doc)\nSorry for not making more insistence that the discussion be more concrete, despite Eliezer's requests.\nMy sense of the last round is mainly that Richard was attempting to make a few points that didn't quite land, and/or that Eliezer didn't quite hit head-on. My attempts to articulate it are below.\n—\nThere's a specific sense in which Eliezer seems quite confident about certain aspects of the future, for reasons that don't yet feel explicit.\nIt's not quite about the deep future — it's clear enough (to my Richard-model) why it's easier to make predictions about AIs that have \"left the atmosphere\".\nAnd it's not quite the near future — Eliezer has reiterated that his models permit (though do not demand) a period of weird and socially-impactful AI systems \"pre-superintelligence\".\nIt's about the middle future — the part where Eliezer's model, apparently confidently, predicts that there's something kinda like a discrete event wherein \"scary\" AI has finally been created; and the model further apparently-confidently predicts that, when that happens, the \"scary\"-caliber systems will be able to attain a decisive strategic advantage over the rest of the world.\nI think there's been a dynamic in play where Richard attempts to probe this apparent confidence, and a bunch of the probes keep slipping off to one side or another. (I had a bit of a similar sense when Paul joined the chat, also.)\nFor instance, I see queries of the form \"but why not expect systems that are half as scary, relevantly before we see the scary systems?\" as attempts to probe this confidence, that \"slip off\" with Eliezer-answers like \"my model permits weird not-really-general half-AI hanging around for a while in the runup\". Which, sure, that's good to know. But there's still something implicit in that story, where these are not-really-general half-AIs. Which is also evidenced when Eliezer talks about the \"general core\" of intelligence.\nAnd the things Eliezer was saying on consequentialism aren't irrelevant here, but those probes have kinda slipped off the far side of the confidence, if I understand correctly. 
Like, sure, late-stage sovereign-level superintelligences are epistemically and instrumentally efficient with respect to you (unless someone put in a hell of a lot of work to install a blindspot), and a bunch of that coherence filters in earlier, but there's still a question about how much of it has filtered down how far, where Eliezer seems to have a fairly confident take, informing his apparently-confident prediction about scary AI systems hitting the world in a discrete event like a hammer.\n(And my Eliezer-model is at this point saying "at this juncture we need to have discussions about more concrete scenarios; a bunch of the confidence that I have there comes from the way that the concrete visualizations where scary AI hits the world like a hammer abound, and feel savvy and historical, whereas the concrete visualizations where it doesn't are fewer and seem full of wishful thinking and naivete".)\nBut anyway, yeah, my read is that Richard (and various others) have been trying to figure out why Eliezer is so confident about some specific thing in this vicinity, and haven't quite felt like they've been getting explanations.\nHere's an attempt to gesture at some claims that I at least think Richard thinks Eliezer's confident in, but that Richard doesn't believe have been explicitly supported:\n1. There's a qualitative difference between the AI systems that are capable of ending the acute risk period (one way or another), and predecessor systems that in some sense don't much matter.\n2. That qualitative gap will be bridged "the day after tomorrow", ie in a world that looks more like "DeepMind is on the brink" and less like "everyone is an order of magnitude richer, and the major gov'ts all have AGI projects, around which much of public policy is centered".\n—\nThat's the main thing I wanted to say here.\nA subsidiary point that I think Richard was trying to make, but that didn't quite connect, follows.\nI think Richard was trying to probe Eliezer's concept of consequentialism to see if it supported the aforementioned confidence. (Some evidence: Richard pointing out a couple times that the question is not whether sufficiently capable agents are coherent, but whether the agents that matter are relevantly coherent. On my current picture, this is another attempt to probe the "why do you think there's a qualitative gap, and that straddling it will be strategically key in practice?" thing, that slipped off.)\nMy attempt at sharpening the point I saw Richard as driving at:\n\nConsider the following two competing hypotheses:\n\n(a) There's this "deeply general" core to intelligence, that will be strategically important in practice.\n(b) Nope.
Either there's no such core, or practical human systems won't find it, or the strategically important stuff happens before you get there (if you're doing your job right, in a way that natural selection wasn't), or etc.\n\n\nThe whole deep learning paradigm, and the existence of GPT, sure seem like they're evidence for (b) over (a).\nLike, (a) maybe isn't dead, but it didn't concentrate as much mass into the present scenario.\nIt seems like perhaps a bunch of Eliezer's confidence comes from a claim like \"anything capable of doing decently good work, is quite close to being scary\", related to his concept of \"consequentialism\".\nIn particular, this is a much stronger claim than that sufficiently smart systems are coherent, b/c it has to be strong enough to apply to the dumbest system that can make a difference.\nIt's easy to get caught up in the elegance of a theory like consequentialism / utility theory, when it will not in fact apply in practice.\nThere are some theories so general and ubiquitous that it's a little tricky to misapply them — like, say, conservation of momentum, which has some very particular form in the symmetry of physical laws, but which can also be used willy-nilly on large objects like tennis balls and trains (although even then, you have to be careful, b/c the real world is full of things like planets that you're kicking off against, and if you forget how that shifts the earth, your application of conservation of momentum might lead you astray).\nThe theories that you can apply everywhere with abandon, tend to have a bunch of surprising applications to surprising domains.\nWe don't see that of consequentialism.\n\nFor the record, my guess is that Eliezer isn't getting his confidence in things like \"there are non-scary systems and scary-systems, and anything capable of saving our skins is likely scary-adjacent\" by the sheer force of his consequentialism concept, in a manner that puts so much weight on it that it needs to meet this higher standard of evidence Richard was poking around for. (Also, I could be misreading Richard's poking entirely.)\nIn particular, I suspect this was the source of some of the early tension, where Eliezer was saying something like \"the fact that humans go around doing something vaguely like weighting outcomes by possibility and also by attractiveness, which they then roughly multiply, is quite sufficient evidence for my purposes, as one who does not pay tribute to the gods of modesty\", while Richard protested something more like \"but aren't you trying to use your concept to carry a whole lot more weight than that amount of evidence supports?\". cf my above points about some things Eliezer is apparently confident in, for which the reasons have not yet been stated explicitly to my Richard-model's satisfaction.\nAnd, ofc, at this point, my Eliezer-model is again saying \"This is why we should be discussing things concretely! It is quite telling that all the plans we can concretely visualize for saving our skins, are scary-adjacent; and all the non-scary plans, can't save our skins!\"\nTo which my Richard-model answers \"But your concrete visualizations assume the endgame happens the day after tomorrow, at least politically. The future tends to go sideways! The endgame will likely happen in an environment quite different from our own! 
These day-after-tomorrow visualizations don't feel like they teach me much, because I think there's a good chance that the endgame-world looks dramatically different.\"\nTo which my Eliezer-model replies \"Indeed, the future tends to go sideways. But I observe that the imagined changes, that I have heard so far, seem quite positive — the relevant political actors become AI-savvy, the major states start coordinating, etc. I am quite suspicious of these sorts of visualizations, and would take them much more seriously if there was at least as much representation of outcomes as realistic as \"then Trump becomes president\" or \"then at-home covid tests are banned in the US\". And if all the ways to save the world today are scary-adjacent, the fact that the future is surprising gives us no specific reason to hope for that particular parameter to favorably change when the future in fact goes sideways. When things look grim, one can and should prepare to take advantage of miracles, but banking on some particular miracle is foolish.\"\nAnd my Richard-model gets fuzzy at this point, but I'd personally be pretty enthusiastic about Richard naming a bunch of specific scenarios, not as predictions, but as the sorts of visualizations that seem to him promising, in the hopes of getting a much more object-level sense of why, in specific concrete scenarios, they either have the properties Eliezer is confident in, or are implausible on Eliezer's model (or surprise Eliezer and cause him to update).\n\n\n\n\n[Tallinn][0:06]  (Sep. 19)\nexcellent summary, nate! it also tracks my model of the debate well and summarises the frontier concisely (much better than your earlier notes or mine). unless eliezer or richard find major bugs in your summary, i'd nominate you to iterate after the next round of debate\n\n\n\n\n[Soares: ]\n\n\n\n\n\n\n\n \n7.3. Richard Ngo's summary\n \n\n[Ngo][1:48]  (Sep. 20)\nUpdated my summary to include the third discussion: [https://docs.google.com/document/d/1sr5YchErvSAY2I4EkJl2dapHcMp8oCXy7g8hd_UaJVw/edit]\nI'm also halfway through a document giving my own account of intelligence + specific safe scenarios.\n\n\n\n\n[Soares: ]\n\n\n\n\n\n\n\n \n\nThe post Soares, Tallinn, and Yudkowsky discuss AGI cognition appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Soares, Tallinn, and Yudkowsky discuss AGI cognition", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=3", "id": "6ea73cb166424e620069da1d10d483de"} {"text": "Christiano, Cotra, and Yudkowsky on AI progress\n\n\n \nThis post is a transcript of a discussion between Paul Christiano, Ajeya Cotra, and Eliezer Yudkowsky on AGI forecasting, following up on Paul and Eliezer's \"Takeoff Speeds\" discussion.\n \nColor key:\n\n\n\n\n Chat by Paul and Eliezer \n Chat by Ajeya \n Inline comments \n\n\n\n\n \n \n8. September 20 conversation\n \n8.1. Chess and Evergrande\n \n\n[Christiano][15:28]\n I still feel like you are overestimating how big a jump alphago is, or something. Do you have a mental prediction of how the graph of (chess engine quality) vs (time) looks, and whether neural net value functions are a noticeable jump in that graph?\nLike, people investing in \"Better Software\" doesn't predict that you won't be able to make progress at playing go. 
The reason you can make a lot of progress at go is that there was extremely little investment in playing better go.\nSo then your work is being done by the claim \"People won't be working on the problem of acquiring a decisive strategic advantage,\" not that people won't be looking in quite the right place and that someone just had a cleverer idea\n\n\n\n\n[Yudkowsky][16:35]\nI think I'd expect something like… chess engine slope jumps a bit for Deep Blue, then levels off with increasing excitement, then jumps for the Alpha series? Albeit it's worth noting that Deepmind's efforts there were going towards generality rather than raw power; chess was solved to the point of being uninteresting, so they tried to solve chess with simpler code that did more things. I don't think I do have strong opinions about what the chess trend should look like, vs. the Go trend; I have no memories of people saying the chess trend was breaking upwards or that there was a surprise there.\nIncidentally, the highly well-traded financial markets are currently experiencing sharp dips surrounding the Chinese firm of Evergrande, which I was reading about several weeks before this.\nI don't see the basic difference in the kind of reasoning that says \"Surely foresightful firms must produce investments well in advance into earlier weaker applications of AGI that will double the economy\", and the reasoning that says \"Surely world economic markets and particular Chinese stocks should experience smooth declines as news about Evergrande becomes better-known and foresightful financial firms start to remove that stock from their portfolio or short-sell it\", except that in the latter case there are many more actors with lower barriers to entry than presently exist in the auto industry or semiconductor industry never mind AI.\nor if not smooth because of bandwagoning and rational fast actors, then at least the markets should (arguendo) be reacting earlier than they're reacting now, given that I heard about Evergrande earlier; and they should have options-priced Covid earlier; and they should have reacted to the mortgage market earlier. If even markets there can exhibit seemingly late wild swings, how is the economic impact of AI – which isn't even an asset market! – forced to be earlier and smoother than that, as a result of wise investing?\nThere's just such a vast gap between hopeful reasoning about how various agents and actors should all do the things the speaker finds very reasonable, thereby yielding smooth behavior of the Earth, versus reality.\n\n\n\n \n \n9. September 21 conversation\n \n9.1. AlphaZero, innovation vs. industry, the Wright Flyer, and the Manhattan Project\n \n\n[Christiano][10:18]\n(For benefit of readers, the market is down 1.5% from friday close -> tuesday open, after having drifted down 2.5% over the preceding two weeks. Draw whatever lesson you want from that.)\nAlso for the benefit of readers, here is the SSDF list of computer chess performance by year. I think the last datapoint is with the first version of neural net evaluations, though I think to see the real impact we want to add one more datapoint after the neural nets are refined (which is why I say I also don't know what the impact is)\n\nNo one keeps similarly detailed records for Go, and there is much less development effort, but the rate of progress was about 1 stone per year from 1980 until 2015 (see https://intelligence.org/files/AlgorithmicProgress.pdf, written way before AGZ). In 2012 go bots reached about 4-5 amateur dan. 
By DeepMind's reckoning here (https://www.nature.com/articles/nature16961, figure 4) Fan AlphaGo was about 4-5 stones stronger 4 years later, with 1 stone explained by greater runtime compute. They could then get further progress to be superhuman with even more compute, radically more than was used for previous projects and with pretty predictable scaling. That level is within 1-2 stones of the best humans (professional dan are greatly compressed relative to amateur dan), so getting to "beats best human" is really just not a big discontinuity and the fact that DeepMind marketing can find an expert who makes a really bad forecast shouldn't be having such a huge impact on your view.\nThis understates the size of the jump from AlphaGo, because that was basically just the first version of the system that was superhuman and it was still progressing very rapidly as it moved from prototype to slightly-better-prototype, which is why you saw such a close game. (Though note that the AlphaGo prototype involved much more engineering effort than any previous attempt to play go, so it's not surprising that a "prototype" was the thing to win.)\nSo to look at actual progress after the dust settles and really measure how crazy this was, it seems much better to look at AlphaZero which continued to improve further, see (https://sci-hub.se/https://www.nature.com/articles/nature24270, figure 6b). Their best system got another ~8 stones of progress over AlphaGo. Now we are like 7-10 stones ahead of trend, of which I think about 3 stones are explained by compute. Maybe call it 6 years ahead of schedule?\nSo I do think this is pretty impressive, they were slightly ahead of schedule for beating the best humans but they did it with a huge margin of error. I think the margin is likely overstated a bit by their elo evaluation methodology, but I'd still grant like 5 years ahead of the nearest competition.\nI'd be interested in input from anyone who knows more about the actual state of play (+ is allowed to talk about it) and could correct errors.\nMostly that whole thread is just clearing up my understanding of the empirical situation, probably we still have deep disagreements about what that says about the world, just as e.g. we read very different lessons from market movements.\nProbably we should only be talking about either ML or about historical technologies with meaningful economic impacts. In my view your picture is just radically unlike how almost any technologies have been developed over the last few hundred years. So probably step 1 before having bets is to reconcile our views about historical technologies, and then maybe as a result of that we could actually have a bet about future technology. Or we could try to shore up the GDP bet.\nLike, it feels to me like I'm saying: AI will be like early computers, or modern semiconductors, or airplanes, or rockets, or cars, or trains, or factories, or solar panels, or genome sequencing, or basically anything else. And you are saying: AI will be like nuclear weapons.\nI think from your perspective it's more like: AI will be like all the historical technologies, and that means there will be a hard takeoff.
The only way you get a soft takeoff forecast is by choosing a really weird thing to extrapolate from historical technologies.\nSo we're both just forecasting that AI will look kind of like other stuff in the near future, and then both taking what we see as the natural endpoint of that process.\nTo me it feels like the nuclear weapons case is the outer limit of what looks plausible, where someone is able to spend $100B for a chance at a decisive strategic advantage.\n\n\n\n\n[Yudkowsky][11:11]\nGo-wise, I'm a little concerned about that \"stone\" metric – what would the chess graph look like if it was measuring pawn handicaps? Are the professional dans compressed in Elo, not just \"stone handicaps\", relative to the amateur dans? And I'm also hella surprised by the claim, which I haven't yet looked at, that Alpha Zero got 8 stones of progress over AlphaGo – I would not have been shocked if you told me that God's Algorithm couldn't beat Lee Se-dol with a 9-stone handicap.\nLike, the obvious metric is Elo, so if you go back and refigure in \"stone handicaps\", an obvious concern is that somebody was able to look into the past and fiddle their hindsight until they found a hindsightful metric that made things look predictable again. My sense of Go said that 5-dan amateur to 9-dan pro was a HELL of a leap for 4 years, and I also have some doubt about the original 5-dan-amateur claims and whether those required relatively narrow terms of testing (eg timed matches or something).\nOne basic point seems to be whether AGI is more like an innovation or like a performance metric over an entire large industry.\nAnother point seems to be whether the behavior of the world is usually like that, in some sense, or if it's just that people who like smooth graphs can go find some industries that have smooth graphs for particular performance metrics that happen to be smooth.\nAmong the smoothest metrics I know that seems like a convergent rather than handpicked thing to cite, is world GDP, which is the sum of more little things than almost anything else, and whose underlying process is full of multiple stages of converging-product-line bottlenecks that make it hard to jump the entire GDP significantly even when you jump one component of a production cycle… which, from my standpoint, is a major reason to expect AI to not hit world GDP all that hard until AGI passes the critical threshold of bypassing it entirely. Having 95% of the tech to invent a self-replicating organism (eg artificial bacterium) does not get you 95%, 50%, or even 10% of the impact.\n(it's not so much the 2% reaction of world markets to Evergrande that I was singling out earlier, 2% is noise-ish, but the wider swings in the vicinity of Evergrande particularly)\n\n\n\n[Christiano][12:41]\nYeah, I'm just using \"stone\" to mean \"elo difference that is equal to 1 stone at amateur dan / low kyu,\" you can see DeepMind's conversion (which I also don't totally believe) in figure 4 here (https://sci-hub.se/https://www.nature.com/articles/nature16961). Stones are closer to constant elo than constant handicap, it's just a convention to name them that way.\n\n\n\n\n[Yudkowsky][12:42]\nk then\n\n\n\n[Christiano][12:47]\nBut my description above still kind of understates the gap I think. They call 230 elo 1 stone, and I think prior rate of progress is more like 200 elo/year. They put AlphaZero about 3200 elo above the 2012 system, so that's like 16 years ahead = 11 years ahead of schedule. 
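[Editor's note: a minimal back-of-envelope restatement in Python of the Elo arithmetic above. All figures are Paul's rough estimates from the discussion; the 2012 baseline year and 2017 AlphaZero date are assumptions added for illustration only.]

# Editor's sketch, not part of the original discussion.
elo_per_stone = 230          # DeepMind's stone-to-Elo conversion (Paul's figure)
trend_elo_per_year = 200     # rough pre-AlphaGo rate of progress (Paul's figure)
alphazero_gap_elo = 3200     # AlphaZero vs. the 2012 system (Paul's figure)

years_of_trend_progress = alphazero_gap_elo / trend_elo_per_year   # ~16 years of trend
years_elapsed = 2017 - 2012                                        # assumed elapsed time
years_ahead_of_schedule = years_of_trend_progress - years_elapsed  # ~11 years ahead

print(years_of_trend_progress, years_ahead_of_schedule)            # 16.0 11.0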
At least 2 years are from test-time hardware, and self-play systematically overestimates elo differences at the upper end of that. But 5 years ahead is still too low and that sounds more like 7-9 years ahead. ETA: and my actual best guess all things considered is probably 10 years ahead, which I agree is just a lot bigger than 5. And I also understated how much of the gap was getting up to Lee Sedol.\nThe go graph I posted wasn't made with hindsight, that was from 2014\nI mean, I'm fine with you saying that people who like smooth graphs are cherry-picking evidence, but do you want to give any example other than nuclear weapons of technologies with the kind of discontinuous impact you are describing?\nI do agree that the difference in our views is like \"innovation\" vs \"industry.\" And a big part of my position is that innovation-like things just don't usually have big impacts for kind of obvious reasons, they start small and then become more industry-like as they scale up. And current deep learning seems like an absolutely stereotypical industry that is scaling up rapidly in an increasingly predictable way.\nAs far as I can tell the examples we know of things changing continuously aren't handpicked, we've been looking at all the examples we can find, and no one is proposing or even able to find almost anything that looks like you are imagining AI will look.\nLike, we've seen deep learning innovations in the form of prototypes (most of all AlexNet), and they were cool and represented giant fast changes in people's views. And more recently we are seeing bigger much-less-surprising changes that are still helping a lot in raising the tens of billions of dollars that people are raising. And the innovations we are seeing are increasingly things that trade off against modest improvements in model size, there are fewer and fewer big surprises, just like you'd predict. It's clearer and clearer to more and more people what the roadmap is—the roadmap is not yet quite as clear as in semiconductors, but as far as I can tell that's just because the field is still smaller.\n\n\n\n\n[Yudkowsky][13:23]\nI sure wasn't imagining there was a roadmap to AGI! Do you perchance have one which says that AGI is 30 years out?\nFrom my perspective, you could as easily point to the Wright Flyer as an atomic bomb. Perhaps this reflects again the \"innovation vs industry\" difference, where I think in terms of building a thing that goes foom thereby bypassing our small cute world GDP, and you think in terms of industries that affect world GDP in an invariant way throughout their lifetimes.\nWould you perhaps care to write off the atomic bomb too? It arguably didn't change the outcome of World War II or do much that conventional weapons in great quantity couldn't; Japan was bluffed into believing the US could drop a nuclear bomb every week, rather than the US actually having that many nuclear bombs or them actually being used to deliver a historically outsized impact on Japan. 
From the industry-centric perspective, there is surely some graph you can draw which makes nuclear weapons also look like business as usual, especially if you go by destruction per unit of whole-industry non-marginal expense, rather than destruction per bomb.\n\n\n\n[Christiano][13:27]\nseems like you have to make the wright flyer much better before it's important, and that it becomes more like an industry as that happens, and that this is intimately related to why so few people were working on it\nI think the atomic bomb is further on the spectrum than almost anything, but it still doesn't feel nearly as far as what you are expecting out of AI\nthe manhattan project took years and tens of billions; if you wait an additional few years and spend an additional few tens of billions then it would be a significant improvement in destruction or deterrence per $ (but not totally insane)\nI do think it's extremely non-coincidental that the atomic bomb was developed in a country that was practically outspending the whole rest of the world in \"killing people technology\"\nand took a large fraction of that country's killing-people resources\neh, that's a bit unfair, the us was only like 35% of global spending on munitions\nand the manhattan project itself was only a couple percent of total munitions spending\n\n\n\n\n[Yudkowsky][13:32]\na lot of why I expect AGI to be a disaster is that I am straight-up expecting AGI to be different.  if it was just like coal or just like nuclear weapons or just like viral biology then I would not be way more worried about AGI than I am worried about those other things.\n\n\n\n[Christiano][13:33]\nthat definitely sounds right\nbut it doesn't seem like you have any short-term predictions about AI being different\n\n\n\n \n9.2. AI alignment vs. biosafety, and measuring progress\n \n\n[Yudkowsky][13:33]\nare you more worried about AI than about bioengineering?\n\n\n\n[Christiano][13:33]\nI'm more worried about AI because (i) alignment is a thing, unrelated to takeoff speed, (ii) AI is a (ETA: likely to be) huge deal and bioengineering is probably a relatively small deal\n(in the sense of e.g. how much $ people spend, or how much $ it makes, or whatever other metric of size you want to use)\n\n\n\n\n[Yudkowsky][13:35]\nwhat's the disanalogy to (i) biosafety is a thing, unrelated to the speed of bioengineering?  why expect AI to be a huge deal and bioengineering to be a small deal?  is it just that investing in AI is scaling faster than investment in bioengineering?\n\n\n\n[Christiano][13:35]\nno, alignment is a really easy x-risk story, bioengineering x-risk seems extraordinarily hard\nIt's really easy to mess with the future by creating new competitors with different goals, if you want to mess with the future by totally wiping out life you have to really try at it and there's a million ways it can fail. 
The bioengineering seems like it basically requires deliberate and reasonably competent malice whereas alignment seems like it can only be averted with deliberate effort, etc.\nI'm mostly asking about historical technologies to try to clarify expectations, I'm pretty happy if the outcome is: you think AGI is predictably different from previous technologies in ways we haven't seen yet\nthough I really wish that would translate into some before-end-of-days prediction about a way that AGI will eventually look different\n\n\n\n\n[Yudkowsky][13:38]\nin my ontology a whole lot of threat would trace back to \"AI hits harder, faster, gets too strong to be adjusted\"; tricks with proteins just don't have the raw power of intelligence\n\n\n\n[Christiano][13:39]\nin my view it's nearly totally orthogonal to takeoff speed, though fast takeoffs are a big reason that preparation in advance is more useful\n(but not related to the basic reason that alignment is unprecedentedly scary)\nIt feels to me like you are saying that the AI-improving-AI will move very quickly from \"way slower than humans\" to \"FOOM in <1 year,\" but it just looks like that is very surprising to me.\nHowever I do agree that if AI-improving-AI was like AlphaZero, then it would happen extremely fast.\nIt seems to me like it's pretty rare to have these big jumps, and it gets much much rarer as technologies become more important and are more industry-like rather than innovation like (and people care about them a lot rather than random individuals working on them, etc.). And I can't tell whether you are saying something more like \"nah big jumps happen all the time in places that are structurally analogous to the key takeoff jump, even if the effects are blunted by slow adoption and regulatory bottlenecks and so on\" or if you are saying \"AGI is atypical in how jumpy it will be\"\n\n\n\n\n[Yudkowsky][13:44]\nI don't know about slower; GPT-3 may be able to type faster than a human\n\n\n\n[Christiano][13:45]\nYeah, I guess we've discussed how you don't like the abstraction of \"speed of making progress\"\n\n\n\n\n[Yudkowsky][13:45]\nbut, basically less useful in fundamental ways than a human civilization, because they are less complete, less self-contained\n\n\n\n[Christiano][13:46]\nEven if we just assume that your AI needs to go off in the corner and not interact with humans, there's still a question of why the self-contained AI civilization is making ~0 progress and then all of a sudden very rapid progress\n\n\n\n\n[Yudkowsky][13:46]\nunfortunately a lot of what you are saying, from my perspective, has the flavor of, \"but can't you tell me about your predictions earlier on of the impact on global warming at the Homo erectus level\"\nyou have stories about why this is like totally not a fair comparison\nI do not share these stories\n\n\n\n[Christiano][13:46]\nI don't understand either your objection nor the reductio\nlike, here's how I think it works: AI systems improve gradually, including on metrics like \"How long does it take them to do task X?\" or \"How high-quality is their output on task X?\"\n\n\n\n\n[Yudkowsky][13:47]\nI feel like the thing we know is something like, there is a sufficiently high level where things go whooosh humans-from-hominids style\n\n\n\n[Christiano][13:47]\nWe can measure the performance of AI on tasks like \"Make further AI progress, without human input\"\nAny way I can slice the analogy, it looks like AI will get continuously better at that task\n\n\n\n\n[Yudkowsky][13:48]\nhow would you measure 
progress from GPT-2 to GPT-3, and would you feel those metrics really captured the sort of qualitative change that lots of people said they felt?\n\n\n\n[Christiano][13:48]\nAnd it seems like we have a bunch of sources of data we can use about how fast AI will get better\nCould we talk about some application of GPT-2 or GPT-3?\nalso that's a lot of progress, spending 100x more is a lot more money\n\n\n\n\n[Yudkowsky][13:49]\nmy world, GPT-3 has very few applications because it is not quite right and not quite complete\n\n\n\n[Christiano][13:49]\nalso it's still really dumb\n\n\n\n\n[Yudkowsky][13:49]\nlike a self-driving car that does great at 99% of the road situations\neconomically almost worthless\n\n\n\n[Christiano][13:49]\nI think the \"being dumb\" is way more important than \"covers every case\"\n\n\n\n\n[Yudkowsky][13:50]\n(albeit that if new cities could still be built, we could totally take those 99%-complete AI cars and build fences and fence-gates around them, in a city where they were the only cars on the road, in which case they would work, and get big economic gains from these new cities with driverless cars, which ties back into my point about how current world GDP is unwilling to accept tech inputs)\nlike, it is in fact very plausible to me that there is a neighboring branch of reality with open borders and no housing-supply-constriction laws and no medical-supply-constriction laws, and their world GDP does manage to double before AGI hits them really hard, albeit maybe not in 4 years.  this world is not Earth.  they are constructing new cities to take advantage of 99%-complete driverless cars right now, or rather, they started constructing them 5 years ago and finished 4 years and 6 months ago.\n\n\n \n9.3. Requirements for FOOM\n \n\n[Christiano][13:53]\nI really feel like the important part is the jumpiness you are imagining on the AI side / why AGI is different from other things\n\n\n\n\n[Cotra][13:53]\nIt's actually not obvious to me that Eliezer is imagining that much more jumpiness on the AI technology side than you are, Paul\nE.g. he's said in the past that while the gap from \"subhuman to superhuman AI\" could be 2h if it's in the middle of FOOM, it could also be a couple years if it's more like scaling alphago\n\n\n\n\n[Yudkowsky][13:54]\nIndeed!  We observed this jumpiness with hominids.  A lot of stuff happened at once with hominids, but a critical terminal part of the jump was the way that hominids started scaling their own food supply, instead of being ultimately limited by the food supply of the savanna.\n\n\n\n[Cotra][13:54]\nA couple years is basically what Paul believes\n\n\n\n\n[Christiano][13:55]\n(discord is not a great place for threaded conversations :()\n\n\n\n\n[Cotra][13:55]\nWhat are the probabilities you're each placing on the 2h-2y spectrum? 
I feel like Paul is like \"no way on 2h, likely on 2y\" and Eliezer is like \"who knows\" on the whole spectrum, and a lot of the disagreement is the impact of the previous systems?\n\n\n\n\n[Christiano][13:55]\nyeah, I'm basically at \"no way,\" because it seems obvious that the AI that can foom in 2h is preceded by the AI that can foom in 2y\n\n\n\n\n[Yudkowsky][13:56]\nwell, we surely agree there!\n\n\n\n[Christiano][13:56]\nOK, and it seems to me like it is preceded by years\n\n\n\n\n[Yudkowsky][13:56]\nwe disagree on whether the AI that can foom in 2y clearly comes more than 2y before the AI that fooms in 2h\n\n\n\n[Christiano][13:56]\nyeah\nperhaps we can all agree it's preceded by at least 2h\nso I have some view like: for any given AI we can measure \"how long does it take to foom?\" and it seems to me like this is just a nice graph\nand it's not exactly clear how quickly that number is going down, but a natural guess to me is something like \"halving each year\" based on the current rate of progress in hardware and software\nand you see localized fast progress most often in places where there hasn't yet been much attention\nand my best guess for your view is that actually that's not a nice graph at all, there is some critical threshold or range where AI quickly moves from \"not fooming for a really long time\" to \"fooming really fast,\" and that seems like the part I'm objecting to\n\n\n\n\n[Cotra][13:59]\nPaul, is your take that there's a non-infinity number for time to FOOM that'd be associated with current AI systems (unassisted by humans)?\nAnd it's going down over time?\nI feel like I would have said something more like \"there's a $ amount it takes to build a system that will FOOM in X amount of time, and that's going down\"\nwhere it's like quadrillions of dollars today\n\n\n\n\n[Christiano][14:00]\nI think it would be a big engineering project to make such an AI, which no one is doing because it would be uselessly slow even if successful\n\n\n\n\n[Yudkowsky][14:02]\nI… don't think GPT-3 fooms given 2^30 longer time to think about than the systems that would otherwise exist 30 years from now, on timelines I'd consider relatively long, and hence generous to this viewpoint?  I also don't think you can take a quadrillion dollars and scale GPT-3 to foom today?\n\n\n\n[Cotra][14:03]\nI would agree with your take on GPT-3 fooming, and I didn't mean a quadrillion dollars just to scale GPT-3, would probably be a difft architecture\n\n\n\n\n[Christiano][14:03]\nI also agree that GPT-3 doesn't foom, it just keeps outputting [next web page]…\nBut I think the axes of \"smart enough to foom fast\" and \"wants to foom\" are pretty different. I also agree there is some minimal threshold below which it doesn't even make sense to talk about \"wants to foom,\" which I think is probably just not that hard to reach.\n(Also there are always diminishing returns as you continue increasing compute, which become very relevant if you try to GPT-3 for a billion billion years as in your hypothetical even apart from \"wants to foom\".)\n\n\n\n\n[Cotra][14:06]\nI think maybe you and EY then disagree on where the threshold from \"infinity\" to \"a finite number\" for \"time for this AI system to FOOM\" begins? 
where eliezer thinks it'll drop from infinity to a pretty small finite number and you think it'll drop to a pretty large finite number, and keep going down from there\n\n\n\n\n[Christiano][14:07]\nI also think we will likely jump down to a foom-ing system only after stuff is pretty crazy, but I think that's probably less important\nI think what you said is probably the main important disagreement\n\n\n\n\n[Cotra][14:08]\nas in before that point it'll be faster to have human-driven progress than FOOM-driven progress bc the FOOM would be too slow?\nand there's some crossover point around when the FOOM time is just a bit faster than the human-driven progress time\n\n\n\n\n[Christiano][14:09]\nyeah, I think most likely (AI+humans) is faster than (AI alone) because of complementarity. But I think Eliezer and I would still disagree even if I thought there was 0 complementarity and it's just (humans improving AI) and separately (AI improving AI)\non that pure substitutes model I expect "AI foom" to start when the rate of AI-driven AI progress overtakes the previous rate of human-driven AI progress\nlike, I expect the time for successive "doublings" of AI output to be like 1 year, 1 year, 1 year, 1 year, [AI takes over] 6 months, 3 months, …\nand the most extreme fast takeoff scenario that seems plausible is that kind of perfect substitutes + no physical economic impact from the prior AI systems\nand then by that point fast enough physical impact is really hard so it happens essentially after the software-only singularity\nI consider that view kind of unlikely but at least coherent\n\n\n\n \n9.4. AI-driven accelerating economic growth\n \n\n[Yudkowsky][14:12]\nI'm expecting that the economy doesn't accept much input from chimps, and then the economy doesn't accept much input from village idiots, and then the economy doesn't accept much input from weird immigrants.  I can imagine that there may or may not be a very weird 2-year or 3-month period with strange half-genius systems running around, but they will still not be allowed to build houses.  In the terminal phase things get more predictable and the AGI starts its own economy instead.\n\n\n\n[Christiano][14:12]\nI guess you can go even faster, by having a big and accelerating ramp-up in human investment right around the end, so that the "1 year" is faster (e.g. if recursive self-improvement was like playing go, and you could move from "a few individuals" to "google spending $10B" over a few years)\n\n\n\n\n[Yudkowsky][14:13]\nMy model prophecy doesn't rule that out as a thing that could happen, but sure doesn't emphasize it as a key step that needs to happen.\n\n\n\n[Christiano][14:13]\nI think it's very likely that AI will mostly be applied to further hardware+software progress\n\n\n\n\n[Cotra: ]\n\n\n\n\nI don't really understand why you keep talking about houses and healthcare\n\n\n\n\n[Cotra][14:13]\nEliezer, what about stuff like Google already using ML systems to automate its TPU load-sharing decisions, and people starting to use Codex to automate routine programming, and so on?
Seems like there's a lot of stuff like that starting to already happen and markets are pricing in huge further increases\n\n\n\n\n[Christiano][14:14]\nit seems like the non-AI up-for-grabs zone are things like manufacturing, not things like healthcare\n\n\n\n\n[Cotra: ]\n\n\n\n\n\n\n\n\n[Cotra][14:14]\n(I mean on your timelines obviously not much time for acceleration anyway, but that's distinct from the regulation not allowing weak AIs to do stuff story)\n\n\n\n\n[Yudkowsky][14:14]\nBecause I think that a key thing of what makes your prophecy less likely is the way that it happens inside the real world, where, economic gains or not, the System is unwilling/unable to take the things that are 99% self-driving cars and start to derive big economic benefits from those.\n\n\n\n[Cotra][14:15]\nbut it seems like huge economic gains could happen entirely in industries mostly not regulated and not customer-facing, like hardware/software R&D, manufacturing, shipping logistics, etc\n\n\n\n\n[Yudkowsky][14:15]\nAjeya, I'd consider Codex of far greater could-be-economically-important-ness than automated TPU load-sharing decisions\n\n\n\n[Cotra][14:15]\ni would agree with that, it's smarter and more general\nand i think that kind of thing could be applied on the hardware chip design side too\n\n\n\n\n[Yudkowsky][14:16]\nno, because the TPU load-sharing stuff has an obvious saturation point as a world economic input, while superCodex could be a world economic input in many more places\n\n\n\n[Cotra][14:16]\nthe TPU load sharing thing was not a claim that this application could scale up to crazy impacts, but that it was allowed to happen, and future stuff that improves that kind of thing (back-end hardware/software/logistics) would probably also be allowed\n\n\n\n\n[Yudkowsky][14:16]\nmy sense is that dectupling the number of programmers would not lift world GDP much, but it seems a lot more possible for me to be wrong about that\n\n\n\n[Christiano][14:17]\nthe point is that housing and healthcare are not central examples of things that scale up at the beginning of explosive growth, regardless of whether it's hard or soft\nthey are slower and harder, and also in efficient markets-land they become way less important during the transition\nso they aren't happening that much on anyone's story\nand also it doesn't make that much difference whether they happen, because they have pretty limited effects on other stuff\nlike, right now we have an industry of ~hundreds of billions that is producing computing hardware, building datacenters, mining raw inputs, building factories to build computing hardware, solar panels, shipping around all of those parts, etc.
etc.\nI'm kind of interested in the question of whether all that stuff explodes, although it doesn't feel as core as the question of \"what are the dynamics of the software-only singularity and how much $ are people spending initiating it?\"\nbut I'm not really interested in the question of whether human welfare is spiking during the transition or only after\n\n\n\n\n[Yudkowsky][14:20]\nAll of world GDP has never felt particularly relevant to me on that score, since twice as much hardware maybe corresponds to being 3 months earlier, or something like that.\n\n\n\n[Christiano][14:21]\nthat sounds like the stuff of predictions?\n\n\n\n\n[Yudkowsky][14:21]\nBut if complete chip manufacturing cycles have accepted much more effective AI input, with no non-AI bottlenecks, then that… sure is a much more material element of a foom cycle than I usually envision.\n\n\n\n[Christiano][14:21]\nlike, do you think it's often the case that 3 months of software progress = doubling compute spending? or do you think AGI is different from \"normal\" AI on this perspective?\nI don't think that's that far off anyway\nI would guess like ~1 year\n\n\n\n\n[Yudkowsky][14:22]\nLike, world GDP that goes up by only 10%, but that's because producing compute capacity was 2.5% of world GDP and that quadrupled, starts to feel much more to me like it's part of a foom story.\nI expect software-beats-hardware to hit harder and harder as you get closer to AGI, yeah.\nthe prediction is firmer near the terminal phase, but I think this is also a case where I expect that to be visible earlier\n\n\n\n[Christiano][14:24]\nI think that by the time that the AI-improving-AI takes over, it's likely that hardware+software manufacturing+R&D represents like 10-20% of GDP, and that the \"alien accountants\" visiting earth would value those companies at like 80%+ of GDP\n\n\n\n \n9.5. Brain size and evolutionary history\n \n\n[Cotra][14:24]\nOn software beating hardware, how much of your view is dependent on your belief that the chimp -> human transition was probably not mainly about brain size because if it were about brain size it would have happened faster? My understanding is that you think the main change is a small software innovation which increased returns to having a bigger brain. If you changed your mind and thought that the chimp -> human transition was probably mostly about raw brain size, what (if anything) about your AI takeoff views would change?\n\n\n\n\n[Yudkowsky][14:25]\nI think that's a pretty different world in a lot of ways!\nbut yes it hits AI takeoff views too\n\n\n\n[Christiano][14:25]\nregarding software vs hardware, here is an example of asking this question for imagenet classification (\"how much compute to train a model to do the task?\"), with a bit over 1 year doubling times (https://openai.com/blog/ai-and-efficiency/). I guess my view is that we can make a similar graph for \"compute required to make your AI FOOM\" and that it will be falling significantly slower than 2x/year. And my prediction for other tasks is that the analogous graphs will also tend to be falling slower than 2x/year.\n\n\n\n\n[Yudkowsky][14:26]\nto the extent that I modeled hominid evolution as having been \"dutifully schlep more of the same stuff, get predictably more of the same returns\" that would correspond to a world in which intelligence was less scary, different, dangerous-by-default\n\n\n\n[Cotra][14:27]\nthanks, that's helpful. 
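[Editor's note: a rough illustration, in Python, of what the "a bit over 1 year doubling times" efficiency trend Paul cites above ([14:25]) compounds to over several years. The 16-month halving time roughly matches the ImageNet trend described in the linked OpenAI post; the 2-year halving time and 7-year horizon are assumptions added for illustration, not figures from the discussion.]

# Editor's sketch, not part of the original discussion: how a halving time for
# "compute required to reach fixed performance" compounds into a cumulative factor.
def efficiency_gain(years: float, halving_time_years: float) -> float:
    """Factor by which required compute falls after `years`."""
    return 2 ** (years / halving_time_years)

print(round(efficiency_gain(7, 16 / 12)))  # ~16-month halving time -> ~38x over 7 years
print(round(efficiency_gain(7, 2.0)))      # slower-than-2x/year (2-year halving) -> ~11x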
I looked around in IEM and other places for a calculation of how quickly we should have evolved to humans if it were mainly about brain size, but I only found qualitative statements. If there's a calculation somewhere I would appreciate a pointer to it, because currently it seems to me that a story like \"selection pressure toward general intelligence was weak-to-moderate because it wasn't actually that important for fitness, and this degree of selection pressure is consistent with brain size being the main deal and just taking a few million years to happen\" is very plausible\n\n\n\n\n[Yudkowsky][14:29]\nwell, for one thing, the prefrontal cortex expanded twice as fast as the rest\nand iirc there's evidence of a lot of recent genetic adaptation… though I'm not as sure you could pinpoint it as being about brain-stuff or that the brain-stuff was about cognition rather than rapidly shifting motivations or something.\nelephant brains are 3-4 times larger by weight than human brains (just looked up)\nif it's that easy to get returns on scaling, seems like it shouldn't have taken that long for evolution to go there\n\n\n\n[Cotra][14:31]\nbut they have fewer synapses (would compute to less FLOP/s by the standard conversion)\nhow long do you think it should have taken?\n\n\n\n\n[Yudkowsky][14:31]\nearly dinosaurs should've hopped onto the predictable returns train\n\n\n\n[Cotra][14:31]\nis there a calculation?\nyou said in IEM that evolution increases organ sizes quickly but there wasn't a citation to easily follow up on there\n\n\n\n\n[Yudkowsky][14:33]\nI mean, you could produce a graph of smooth fitness returns to intelligence, smooth cognitive returns on brain size/activity, linear metabolic costs for brain activity, fit that to humans and hominids, then show that obviously if hominids went down that pathway, large dinosaurs should've gone down it first because they had larger bodies and the relative metabolic costs of increased intelligence would've been lower at every point along the way\nI do not have a citation for that ready, if I'd known at the time you'd want one I'd have asked Luke M for it while he still worked at MIRI \n\n\n\n[Cotra][14:35]\ncool thanks, will think about the dinosaur thing (my first reaction is that this should depend on the actual fitness benefits to general intelligence which might have been modest)\n\n\n\n\n[Yudkowsky][14:35]\nI suspect we're getting off Paul's crux, though\n\n\n\n[Cotra][14:35]\nyeah we can go back to that convo (though i think paul would also disagree about this thing, and believes that the chimp to human thing was mostly about size)\nsorry for hijacking\n\n\n\n\n[Yudkowsky][14:36]\nwell, if at some point I can produce a major shift in EA viewpoints by coming up with evidence for a bunch of non-brain-size brain selection going on over those timescales, like brain-related genes where we can figure out how old the mutation is, I'd then put a lot more priority on digging up a paper like that\nI'd consider it sufficiently odd to imagine hominids->humans as being primarily about brain size, given the evidence we have, that I do not believe this is Paul's position until Paul tells me so\n\n\n\n[Christiano][14:49]\nI would guess it's primarily about brain size / neuron count / cortical neuron count\nand that the change in rate does mostly go through changing niche, where both primates and birds have this cycle of rapidly accelerating brain size increases that aren't really observed in other animals\nit seems like brain size is increasing extremely 
quickly on both of those lines\n\n\n\n\n[Yudkowsky][14:50]\nwhy aren't elephants GI?\n\n\n\n[Christiano][14:51]\nmostly they have big brains to operate big bodies, and also my position obviously does not imply (big brain) ==(necessarily implies)==> general intelligence\n\n\n\n\n[Yudkowsky][14:52]\nI don't understand, in general, how your general position manages to strongly imply a bunch of stuff about AGI and not strongly imply similar stuff about a bunch of other stuff that sure sounds similar to me\n\n\n\n[Christiano][14:52]\ndon't elephants have very few synapses relative to humans?\n\n\n\n\n[Cotra: ]\n\n\n\n\nhow does the scale hypothesis possibly take a strong stand on synapses vs neurons? I agree that it takes a modest predictive hit from \"why aren't the big animals much smarter?\"\n\n\n\n\n[Yudkowsky][14:53]\nif adding more synapses just scales, elephants should be able to pay hominid brain costs for a much smaller added fraction of metabolism and also not pay the huge death-in-childbirth head-size tax\nbecause their brains and heads are already 4x as huge as they need to be for GI\nand now they just need some synapses, which are a much tinier fraction of their total metabolic costs\n\n\n\n[Christiano][14:54]\nI mean, you can also make smaller and cheaper synapses as evidenced by birds\nI'm not sure I understand what you are saying\nit's clear that you can't say \"X is possible metabolically, so evolution would do it\"\nor else you are confused about why primate brains are so bad\n\n\n\n\n[Yudkowsky][14:54]\ngreat, then smaller and cheaper synapses should've scaled many eons earlier and taken over the world\n\n\n\n[Christiano][14:55]\nthis isn't about general intelligence, this is a reductio of your position…\n\n\n\n\n[Yudkowsky][14:55]\nand here I had thought it was a reductio of your position…\n\n\n\n[Christiano][14:55]\nindeed\nlike, we all grant that it's metabolically possible to have small smart brains\nand evolution doesn't do it\nand I'm saying that it's also possible to have small smart brains\nand that scaling brains up matters a lot\n\n\n\n\n[Yudkowsky][14:56]\nno, you grant that it's metabolically possible to have cheap brains full of synapses, which are therefore, on your position, smart\n\n\n\n[Christiano][14:56]\nbirds are just smart\nwe know they are smart\nthis isn't some kind of weird conjecture\nlike, we can debate whether they are a \"general\" intelligence, but it makes no difference to this discussion\nthe point is that they do more with less metabolic cost\n\n\n\n\n[Yudkowsky][14:57]\non my position, the brain needs to invent the equivalents of ReLUs and Transformers and really rather a lot of other stuff because it can't afford nearly that many GPUs, and then the marginal returns on adding expensive huge brains and synapses have increased enough that hominids start to slide down the resulting fitness slope, which isn't even paying off in guns and rockets yet, they're just getting that much intelligence out of it once the brain software has been selected to scale that well\n\n\n\n[Christiano][14:57]\nbut all of the primates and birds have brain sizes scaling much faster than the other animals\nlike, the relevant \"things started to scale\" threshold is way before chimps vs humans\nisn't it?\n\n\n\n\n[Cotra][14:58]\nto clarify, my understanding is that paul's position is \"Intelligence is mainly about synapse/neuron count, and evolution doesn't care that much about intelligence; it cared more for birds and primates, and both lines are getting 
smarter+bigger-brained.\" And eliezer's position is that \"evolution should care a ton about intelligence in most niches, so if it were mostly about brain size then it should have gone up to human brain sizes with the dinosaurs\"\n\n\n\n\n[Christiano][14:58]\nor like, what is the evidence you think is explained by the threshold being between chimps and humans\n\n\n\n\n[Yudkowsky][14:58]\nif hominids have less efficient brains than birds, on this theory, it's because (post facto handwave) birds are tiny, so whatever cognitive fitness gradients they face, will tend to get paid more in software and biological efficiency and biologically efficient software, and less paid in Stack More Neurons (even compared to hominids)\nelephants just don't have the base software to benefit much from scaling synapses even though they'd be relatively cheaper for elephants\n\n\n\n[Christiano][14:59]\n@ajeya I think that intelligence is about a lot of things, but that size (or maybe \"more of the same\" changes that had been happening recently amongst primates) is the big difference between chimps and humans\n\n\n\n\n[Cotra: ]\n\n\n\n\n\n\n\n\n[Cotra][14:59]\ngot it yeah i was focusing on chimp-human gap when i said \"intelligence\" there but good to be careful\n\n\n\n\n[Yudkowsky][14:59]\nI have not actually succeeded in understanding Why On Earth Anybody Would Think That If Not For This Really Weird Prior I Don't Get Either\nre: the \"more of the same\" theory of humans\n\n\n\n[Cotra][15:00]\ndo you endorse my characterization of your position above? \"evolution should care a ton about intelligence in most niches, so if it were mostly about brain size then it should have gone up to human brain sizes with the dinosaurs\"\nin which case the disagreement is about how much evolution should care about intelligence in the dinosaur niche, vs other things it could put its skill points into?\n\n\n\n\n[Christiano][15:01]\nEliezer, it seems like chimps are insanely smart compared to other animals, basically as smart as they get\nso it's natural to think that the main things that make humans unique are also present in chimps\nor at least, there was something going on in chimps that is exceptional\nand should be causally upstream of the uniqueness of humans too\notherwise you have too many coincidences on your hands\n\n\n\n\n[Yudkowsky][15:02]\najeya: no, I'd characterize that as \"the human environmental niche per se does not seem super-special enough to be unique on a geological timescale, the cognitive part of the niche derives from increased cognitive abilities in the first place and so can't be used to explain where they got started, dinosaurs are larger than humans and would pay lower relative metabolic costs for added brain size and it is not the case that every species as large as humans was in an environment where they would not have benefited as much from a fixed increment of intelligence, hominids are probably distinguished from dinosaurs in having better neural algorithms that arose over intervening evolutionary time and therefore better returns in intelligence on synapses that are more costly to humans than to elephants or large dinosaurs\"\n\n\n\n[Christiano][15:03]\nI don't understand how you can think that hominids are the special step relative to something earlier\nor like, I can see how it's consistent, but I don't see what evidence or argument supports it\nit seems like the short evolutionary time, and the fact that you also have to explain the exceptional qualities of other primates, cut extremely 
strongly against it\n\n\n\n\n[Yudkowsky][15:04]\npaul: indeed, the fact that dinosaurs didn't see their brain sizes and intelligences ballooning, says there must be a lot of stuff hominids had that dinosaurs didn't, explaining why hominids got much higher returns on intelligence per synapse. natural selection is enough of a smooth process that 95% of this stuff should've been in the last common ancestor of humans and chimps.\n\n\n\n[Christiano][15:05]\nit seems like brain size basically just increases faster in the smarter animals? though I mostly just know about birds and primates\n\n\n\n\n[Yudkowsky][15:05]\nthat is what you'd predict from smartness being about algorithms!\n\n\n\n[Christiano][15:05]\nand it accelerates further and further within both lines\nit's what you'd expect if smartness is about algorithms and chimps and birds have good algorithms\n\n\n\n\n[Yudkowsky][15:06]\nif smartness was about brain size, smartness and brain size would increase faster in the larger animals or the ones whose successful members ate more food per day\nwell, sure, I do model that birds have better algorithms than dinosaurs\n\n\n\n[Cotra][15:07]\nit seems like you've given arguments for \"there was algorithmic innovation between dinosaurs and humans\" but not yet arguments for \"there was major algorithmic innovation between chimps and humans\"?\n\n\n\n\n[Christiano][15:08]\n(much less that the algorithmic changes were not just more-of-the-same)\n\n\n\n\n[Yudkowsky][15:08]\noh, that's not mandated by the model the same way. (between LCA of chimps and humans)\n\n\n\n[Christiano][15:08]\nisn't that exactly what we are discussing?\n\n\n\n\n[Yudkowsky][15:09]\n…I hadn't thought so, no.\n\n\n\n[Cotra][15:09]\noriginal q was:\n\nOn software beating hardware, how much of your view is dependent on your belief that the chimp -> human transition was probably not mainly about brain size because if it were about brain size it would have happened faster? My understanding is that you think the main change is a small software innovation which increased returns to having a bigger brain. 
If you changed your mind and thought that the chimp -> human transition was probably mostly about raw brain size, what (if anything) about your AI takeoff views would change?\n\nso i thought we were talking about if there's a cool innovation from chimp->human?\n\n\n\n\n[Yudkowsky][15:10]\nI can see how this would have been the more obvious intended interpretation on your viewpoint, and apologize\n\n\n\n[Christiano][15:10]\n\n(though i think paul would also disagree about this thing, and believes that the chimp to human thing was mostly about size)\n\nIs what I was responding to in part\nI am open to saying that I'm conflating size and \"algorithmic improvements that are closely correlated with size in practice and are similar to the prior algorithmic improvements amongst primates\"\n\n\n\n\n[Yudkowsky][15:11]\nfrom my perspective, the question is \"how did that hominid->human transition happen, as opposed to there being an elephant->smartelephant or dinosaur->smartdinosaur transition\"?\nI expect there were substantial numbers of brain algorithm stuffs going on during this time, however\nbecause I don't think that synapses scale that well with the baseline hominid boost\n\n\n\n[Christiano][15:11]\nFWIW, it seems quite likely to me that there would be an elephant->smartelephant transition within tens of millions or maybe 100M years, and a dinosaur->smartdinosaur transition in hundreds of millions of years\nand those are just cut off by the fastest lines getting there first\n\n\n\n\n[Yudkowsky][15:12]\nwhich I think does circle back to that point? actually I think my memory glitched and forgot the original point while being about this subpoint and I probably did interpret the original point as intended.\n\n\n\n[Christiano][15:12]\nnamely primates beating out birds by a hair\n\n\n\n\n[Yudkowsky][15:12]\nthat sounds like a viewpoint which would also think it much more likely that GPT-3 would foom in a billion years\nwhere maybe you think that's unlikely, but I still get the impression your \"unlikely\" is, like, 5 orders of magnitude likelier than mine before applying overconfidence adjustments against extreme probabilities on both sides\nyeah, I think I need to back up\n\n\n\n[Cotra][15:15]\nIs your position something like \"at some point after dinosaurs, there was an algorithmic innovation that increased returns to brain size, which meant that the birds and the humans see their brains increasing quickly while the dinosaurs didn't\"?\n\n\n\n\n[Christiano][15:15]\nit also seems to me like the chimp->human difference is in basically the same ballpark of the effect of brain size within humans, given modest adaptations for culture\nwhich seems like a relevant sanity-check that made me take the \"mostly hardware\" view more seriously\n\n\n\n\n[Yudkowsky][15:15]\nthere's a part of my model which very strongly says that hominids scaled better than elephants and that's why \"hominids->humans but not elephants->superelephants\"\n\n\n\n[Christiano][15:15]\npreviously I had assumed that analysis would show that chimps were obviously way dumber than an extrapolation of humans\n\n\n\n\n[Yudkowsky][15:16]\nthere's another part of my model which says \"and it still didn't scale that well without algorithms, so we should expect a lot of alleles affecting brain circuitry which rose to fixation over the period when hominid brains were expanding\"\nthis part is strong and I think echoes back to AGI stuff, but it is not as strong as the much more overdetermined position that hominids started with more scalable 
algorithms than dinosaurs.\n\n\n\n[Christiano][15:17]\nI do agree with the point that there are structural changes in brains as you scale them up, and this is potentially a reason why brain size changes more slowly than e.g. bone size. (Also there are small structural changes in ML algorithms as you scale them up, not sure how much you want to push the analogy but they feel fairly similar.)\n\n\n\n\n[Yudkowsky][15:17]\n\nit also seems to me like the chimp->human difference is in basically the same ballpark of the effect of brain size within humans, given modest adaptations for culture\n\nthis part also seems pretty blatantly false to me\nis there, like, a smooth graph that you looked at there?\n\n\n\n[Christiano][15:18]\nI think the extrapolated difference would be about 4 standard deviations, so we are comparing a chimp to an IQ 40 human\n\n\n\n\n[Yudkowsky][15:18]\nI'm really not sure how much of a fair comparison that is\nIQ 40 humans in our society may be mostly sufficiently-damaged humans, not scaled-down humans\n\n\n\n[Christiano][15:19]\ndoesn't seem easy, but the point is that the extrapolated difference is huge, it corresponds to completely debilitating developmental problems\n\n\n\n\n[Yudkowsky][15:19]\nif you do enough damage to a human you end up with, for example, a coma victim who's not competitive with other primates at all\n\n\n\n[Christiano][15:19]\nyes, that's more than 4 SD down\nI agree with this general point\nI'd guess I just have a lot more respect for chimps than you do\n\n\n\n\n[Yudkowsky][15:20]\nI feel like I have a bunch of respect for chimps but more respect for humans\nlike, that stuff humans do\nthat is really difficult stuff!\nit is not just scaled-up chimpstuff!\n\n\n\n[Christiano][15:21]\nCarl convinced me chimps wouldn't go to space, but I still really think it's about domesticity and cultural issues rather than intelligence\n\n\n\n\n[Yudkowsky][15:21]\nthe chimpstuff is very respectable but there is a whole big layer cake of additional respect on top\n\n\n\n[Christiano][15:21]\nnot a prediction to be resolved until after the singularity\nI mean, the space prediction isn't very confident \nand it involved a very large planet of apes\n\n\n\n \n \n9.6. 
Architectural innovation in AI and in evolutionary history\n \n\n[Yudkowsky][15:22]\nI feel like if GPT-based systems saturate and require any architectural innovation rather than Stack More Layers to get much further, this is a pre-Singularity point of observation which favors humans probably being more qualitatively different from chimp-LCA\n(LCA=last common ancestor)\n\n\n\n[Christiano][15:22]\nany seems like a kind of silly bar?\n\n\n\n\n[Yudkowsky][15:23]\nbecause single architectural innovations are allowed to have large effects!\n\n\n\n[Christiano][15:23]\nlike there were already small changes to normalization from GPT-2 to GPT-3, so isn't it settled?\n\n\n\n\n[Yudkowsky][15:23]\nnatural selection can't afford to deploy that many of them!\n\n\n\n[Christiano][15:23]\nand the model really eventually won't work if you increase layers but don't fix the normalization, there are severe problems that only get revealed at high scale\n\n\n\n\n[Yudkowsky][15:23]\nthat I wouldn't call architectural innovation\ntransformers were\nthis is a place where I would not discuss specific ideas because I do not actually want this event to occur\n\n\n\n[Christiano][15:24]\nsure\nhave you seen a graph of LSTM scaling vs transformer scaling?\nI think LSTM with ongoing normalization-style fixes lags like 3x behind transformers on language modeling\n\n\n\n\n[Yudkowsky][15:25]\nno, does it show convergence at high-enough scales?\n\n\n\n[Christiano][15:25]\nfigure 7 here: https://arxiv.org/pdf/2001.08361.pdf\n\n\n\n\n\n[Yudkowsky][15:26]\nyeah… I unfortunately would rather not give other people a sense for which innovations are obviously more of the same and which innovations obviously count as qualitative\n\n\n\n[Christiano][15:26]\nI think smart money is that careful initialization and normalization on the RNN will let it keep up for longer\nanyway, I'm very open to differences like LSTM vs transformer between humans and 3x-smaller-brained-ancestors, as long as you are open to like 10 similar differences further back in the evolutionary history\n\n\n\n\n[Yudkowsky][15:28]\nwhat if there's 27 differences like that and 243 differences further back in history?\n\n\n\n[Christiano][15:28]\nsure\n\n\n\n\n[Yudkowsky][15:28]\nis that a distinctly Yudkowskian view vs a Paul view…\napparently not\nI am again feeling confused about cruxes\n\n\n\n[Christiano][15:29]\nI mean, 27 differences like transformer vs LSTM isn't actually plausible, so I guess we could talk about it\n\n\n\n\n[Cotra][15:30]\nHere's a potential crux articulation that ties it back to the animals stuff: paul thinks that we first discover major algorithmic innovations that improve intelligence at a low level of intelligence, analogous to evolution discovering major architectural innovations with tiny birds and primates, and then there will be a long period of scaling up plus coming up with routine algorithmic tweaks to get to the high level, analogous to evolution schlepping on the same shit for a long time to get to humans. analogously, he thinks when big innovations come onto the scene the actual product is crappy af (e.g. 
wright brother's plane), and it needs a ton of work to scale up to usable and then to great.\nyou both seem to think both evolution and tech history consiliently point in your direction\n\n\n\n\n[Christiano][15:33]\nthat sounds vaguely right, I guess the important part of \"routine\" is \"vaguely predictable,\" like you mostly work your way down the low-hanging fruit (including new fruit that becomes more important as you scale), and it becomes more and more predictable the more people are working on it and the longer you've been at it\nand deep learning is already reasonably predictable (i.e. the impact of successive individual architectural changes is smaller, and law of large numbers is doing its thing) and is getting more so, and I just expect that to continue\n\n\n\n\n[Cotra][15:34]\nyeah, like it's a view that points to using data that relates effort to algorithmic progress and using that to predict future progress (in combination with predictions of future effort)\n\n\n\n\n[Christiano][15:35]\nyeah\nand for my part, it feels like this is how most technologies look and also how current ML progress looks\n\n\n\n\n[Cotra][15:36]\nand also how evolution looks, right?\n\n\n\n\n[Christiano][15:37]\nyou aren't seeing big jumps in translation or in self-driving cars or in image recognition, you are just seeing a long slog, and you see big jumps in areas where few people work (usually up to levels that are not in fact that important, which is very correlated with few people working there)\nI don't know much about evolution, but it at least looks very consistent with what I know and the facts eliezer cites\n(not merely consistent, but \"explains the data just about as well as the other hypotheses on offer\")\n\n\n\n \n9.7. Styles of thinking in forecasting\n \n\n[Yudkowsky][15:38]\nI do observe that this would seem, on the surface of things, to describe the entire course of natural selection up until about 20K years ago, if you were looking at surface impacts\n\n\n\n[Christiano][15:39]\nby 20k years ago I think it's basically obvious that you are tens of thousands of years from the singularity\nlike, I think natural selection is going crazy with the brains by millions of years ago, and by hundreds of thousands of years ago humans are going crazy with the culture, and by tens of thousands of years ago the culture thing has accelerated and is almost at the finish line\n\n\n\n\n[Yudkowsky][15:41]\nreally? I don't know if I would have been able to call that in advance if I'd never seen the future or any other planets. 
I mean, maybe, but I sure would have been extrapolating way out onto a further limb than I'm going here.\n\n\n\n[Christiano][15:41]\nYeah, I agree singularity is way more out on a limb—or like, where the singularity stops is more uncertain since that's all that's really at issue from my perspective\nbut the point is that everything is clearly crazy in historical terms, in the same way that 2000 is crazy, even if you don't know where it's going\nand the timescale for the crazy changes is tens of thousands of years\n\n\n\n\n[Yudkowsky][15:42]\nI frankly model that, had I made any such prediction 20K years ago of hominids being able to pull off moon landings or global warming – never mind the Singularity – I would have faced huge pushback from many EAs, such as, for example, Robin Hanson, and you.\n\n\n\n[Christiano][15:42]\nlike I think this can't go on would have applied just as well: https://www.lesswrong.com/posts/5FZxhdi6hZp8QwK7k/this-can-t-go-on\nI don't think that's the case at all\nand I think you still somehow don't understand my position?\n\n\n\n\n[Yudkowsky][15:43]\nhttps://www.lesswrong.com/posts/XQirei3crsLxsCQoi/surprised-by-brains is my old entry here\n\n\n\n[Christiano][15:43]\nlike, what is the move I'm making here, that you think I would have made in the past?\nand would have led astray?\n\n\n\n\n[Yudkowsky][15:44]\nI sure do feel in a deeper sense that I am trying very hard to account for perspective shifts in how unpredictable the future actually looks at the time, and the Other is looking back at the past and organizing it neatly and expecting the future to be that neat\n\n\n\n[Christiano][15:45]\nI don't even feel like I'm expecting the future to be neat\nare you just saying you have a really broad distribution over takeoff speed, and that \"less than a month\" gets a lot of probability because lots of numbers are less than a month?\n\n\n\n\n[Yudkowsky][15:47]\nnot exactly?\n\n\n\n[Christiano][15:47]\nin what way is your view the one that is preferred by things being messy or unpredictable?\nlike, we're both agreeing X will eventually happen, and I'm making some concrete prediction about how some other X' will happen first, and that's the kind of specific prediction that's likely to be wrong?\n\n\n\n\n[Yudkowsky][15:48]\nmore like, we sure can tell a story today about how normal and predictable AlphaGo was, but we can always tell stories like that about the past. 
I do not particularly recall the AI field standing up one year before AlphaGo and saying \"It's time, we're coming for the 8-dan pros this year and we're gonna be world champions a year after that.\" (Which took significantly longer in chess, too, matching my other thesis about how these slides are getting steeper as we get closer to the end.)\n\n\n\n[Christiano][15:49]\nit's more like, you are offering AGZ as an example of why things are crazy, and I'm doubtful / think it's pretty lame\nmaybe I don't understand how it's functioning as bayesian evidence\nfor what over what\n\n\n\n\n[Yudkowsky][15:50]\nI feel like the whole smoothness-reasonable-investment view, if evaluated on Earth 5My ago without benefit of foresight, would have dismissed the notion of brains overtaking evolution; evaluated 1My ago, it would have dismissed the notion of brains overtaking evolution; evaluated 20Ky ago, it would have barely started to acknowledge that brains were doing anything interesting at all, but pointed out how the hominids could still only eat as much food as their niche offered them and how the cute little handaxes did not begin to compare to livers and wasp stings.\nthere is a style of thinking that says, \"wow, yeah, people in the past sure were surprised by stuff, oh, wait, I'm also in the past, aren't I, I am one of those people\"\nand a view where you look back from the present and think about how reasonable the past all seems now, and the future will no doubt be equally reasonable\n\n\n\n[Christiano][15:52]\n(the AGZ example may fall flat, because the arguments we are making about it now we were also making in the past)\n\n\n\n\n[Yudkowsky][15:52]\nI am not sure this is resolvable, but it is among my primary guesses for a deep difference in believed styles of thought\n\n\n\n[Christiano][15:52]\nI think that's a useful perspective, but still don't see how it favors your bottom line\n\n\n\n\n[Yudkowsky][15:53]\nwhere I look at the style of thinking you're using, and say, not, \"well, that's invalidated by a technical error on line 3 even on Paul's own terms\" but \"isn't this obviously a whole style of thought that never works and ends up unrelated to reality\"\nI think the first AlphaGo was the larger shock, AlphaGo Zero was a noticeable but more mild shock on account of how it showed the end of game programming and not just the end of Go\n\n\n\n[Christiano][15:54]\nsorry, I lumped them together\n\n\n\n\n[Yudkowsky][15:54]\nit didn't feel like the same level of surprise; it was precedented by then\nthe actual accomplishment may have been larger in an important sense, but a lot of the – epistemic landscape of lessons learned? 
– is about the things that surprise you at the time\n\n\n\n[Christiano][15:55]\nalso AlphaGo was also quite easy to see coming after this paper (as was discussed extensively at the time): https://www.cs.toronto.edu/~cmaddis/pubs/deepgo.pdf\n\n\n\n\n[Yudkowsky][15:55]\nPaul, are you on the record as arguing with me that AlphaGo will win at Go because it's predictably on-trend?\nback then?\n\n\n\n[Cotra][15:55]\nHm, it sounds like Paul is saying \"I do a trend extrapolation over long time horizons and if things seem to be getting faster and faster I expect they'll continue to accelerate; this extrapolation if done 100k years ago would have seen that things were getting faster and faster and projected singularity within 100s of K years\"\nDo you think Paul is in fact doing something other than the trend extrap he says he's doing, or that he would have looked at a different less informative trend than the one he says he would have looked at, or something else?\n\n\n\n\n[Christiano][15:56]\nmy methodology for answering that question is looking at LW comments mentioning go by me, can see if it finds any\n\n\n\n\n[Yudkowsky][15:56]\nDifferent less informative trend, is most of my suspicion there?\nthough, actually, I should revise that, I feel like relatively little of the WHA was AlphaGo v2 whose name I forget beating Lee Se-dol, and most was in the revelation that v1 beat the high-dan pro whose name I forget.\nPaul having himself predicted anything at all like this would be the actually impressive feat\nthat would cause me to believe that the AI world is more regular and predictable than I experienced it as, if you are paying more attention to ICLR papers than I do\n\n\n \n9.8. Moravec's prediction\n \n\n[Cotra][15:58]\nAnd jtbc, the trend extrap paul is currently doing is something like:\n\nLook at how effort leads to hardware progress measured in FLOP/$ and software progress measured in stuff like \"FLOP to do task X\" or \"performance on benchmark Y\"\nLook at how effort in the ML industry as a whole is increasing, project forward with maybe some adjustments for thinking markets are more inefficient now and will be less inefficient later\n\nand this is the wrong trend, because he shouldn't be looking at hardware/software progress across the whole big industry and should be more open to an upset innovation coming from an area with a small number of people working on it?\nand he would have similarly used the wrong trends while trying to do trend extrap in the past?\n\n\n\n\n[Yudkowsky][15:59]\nbecause I feel like this general style of thought doesn't work when you use it on Earth generally, and then fails extremely hard if you try to use it on Earth before humans to figure out where the hominids are going because that phenomenon is Different from Previous Stuff\nlike, to be clear, I have seen this used well on solar\nI feel like I saw some people calling the big solar shift based on graphs, before that happened\nI have seen this used great by Moravec on computer chips to predict where computer chips would be in 2012\nand also witnessed Moravec completely failing as soon as he tried to derive literally anything but the graph itself namely his corresponding prediction for human-equivalent AI in 2012 (I think, maybe it was 2010) or something\n\n\n\n[Christiano][16:02]\n(I think in his 1988 book Moravec estimated human-level AI in ~2030, not sure if you are referring to some earlier prediction?)\n\n\n\n\n[Yudkowsky][16:02]\n(I have seen Ray Kurzweil project out Moore's Law to the $1,000,000 human 
brain in, what was it, 2025, followed by the $1000 human brain in 2035 and the $1 human brain in 2045, and when I asked Ray whether machine superintelligence might shift the graph at all, he replied that machine superintelligence was precisely how the graph would be able to continue on trend. This indeed is sillier than EAs.)\n\n\n\n[Cotra][16:03]\nmoravec's prediction appears to actually be around 2025, looking at his hokey graph? https://jetpress.org/volume1/moravec.htm\n\n\n\n\n\n[Yudkowsky][16:03]\nbut even there, it does feel to me like there is a commonality between Kurzweil's sheer graph-worship and difficulty in appreciating the graphs as surface phenomena that are less stable than deep phenomena, and something that Hanson was doing wrong in the foom debate\n\n\n\n[Cotra][16:03]\nwhich is…like, your timelines?\n\n\n\n\n[Yudkowsky][16:04]\nthat's 1998\nMind Children in 1988 I am pretty sure had an earlier prediction\n\n\n\n[Christiano][16:04]\nI should think you'd be happy to bet against me on basically any prediction, shouldn't you?\n\n\n\n\n[Yudkowsky][16:05]\nany prediction that sounds narrow and isn't like \"this graph will be on trend in 3 more years\"\n…maybe I'm wrong, an online source says Mind Children in 1988 predicted AGI in \"40 years\" but I sure do seem to recall an extrapolated graph that reached \"human-level hardware\" in 2012 based on an extensive discussion about computing power to duplicate the work of the retina\n\n\n\n[Christiano][16:08]\ndon't think it matters too much other than for Moravec's honor, doesn't really make a big difference for the empirical success of the methodology\nI think it's on page 68 if you have the physical book\n\n\n\n\n[Yudkowsky][16:09]\np60 via Google Books says 10 teraops for a human-equivalent mind\n\n\n\n[Christiano][16:09]\nI have a general read of history where trend extrapolation works extraordinarily well relative to other kinds of forecasting, to the extent that the best first-pass heuristic for whether a prediction is likely to be accurate is whether it's a trend extrapolation and how far in the future it is\n\n\n\n\n[Yudkowsky][16:09]\nwhich, incidentally, strikes me as entirely plausible if you had algorithms as sophisticated as the human brain\nmy sense is that Moravec nailed the smooth graph of computing power going on being smooth, but then all of his predictions about the actual future were completely invalid on account of a curve interacting with his curve that he didn't know things about and so simply omitted as a step in his calculations, namely, AGI algorithms\n\n\n\n[Christiano][16:12]\nthough again, from your perspective 2030 is still a reasonable bottom-line forecast that makes him one of the most accurate people at that time?\n\n\n\n\n[Yudkowsky][16:12]\nyou could be right about all the local behaviors that your history is already shouting out at you as having smooth curve (where by \"local\" I do mean to exclude stuff like world GDP extrapolated into the indefinite future) and the curves that history isn't shouting at you will tear you down\n\n\n\n[Christiano][16:12]\n(I don't know if he even forecast that)\n\n\n\n\n[Yudkowsky][16:12]\nI don't remember that part from the 1988 book\nmy memory of the 1988 book is \"10 teraops, based on what it takes to rival the retina\" and he drew a graph of Moore's Law\n\n\n\n[Christiano][16:13]\nyeah, I think that's what he did\n(and got 2030)\n\n\n\n\n[Yudkowsky][16:14]\n\"If this rate of improvement were to continue into the next century, the 10 teraops required for a 
humanlike computer would be available in a $10 million supercomputer before 2010 and in a $1,000 personal computer by 2030.\"\n\n\n\n[Christiano][16:14]\nor like, he says \"human equivalent in 40 years\" and predicts that in 50 years we will have robots with superhuman reasoning ability, not clear he's ruling out human-equivalent AGI before 40 years but I think the tone is clear\n\n\n\n\n[Yudkowsky][16:15]\nso 2030 for AGI on a personal computer and 2010 for AGI on a supercomputer, and I expect that on my first reading I simply discarded the former prediction as foolish extrapolation past the model collapse he had just predicted in 2010.\n(p68 in \"Powering Up\")\n\n\n\n[Christiano][16:15]\nyeah, that makes sense\nI do think the PC number seems irrelevant\n\n\n\n\n[Cotra][16:16]\nI think both in that book and in the 98 article he wants you to pay attention to the \"very cheap human-size computers\" threshold, not the \"supercomputer\" threshold, i think intentionally as a way to handwave in \"we need people to be able to play around with these things\"\n(which people criticized him at the time for not more explicitly modeling iirc)\n\n\n\n\n[Yudkowsky][16:17]\nbut! I mean! there are so many little places where the media has a little cognitive hiccup about that and decides in 1998 that it's fine to describe that retrospectively as \"you predicted in 1988 that we'd have true AI in 40 years\" and then the future looks less surprising than people at the time using Trend Logic were actually surprised by it!\nall these little ambiguities and places where, oh, you decide retroactively that it would have made sense to look at this Trend Line and use it that way, but if you look at what people said at the time, they didn't actually say that!\n\n\n\n[Christiano][16:19]\nI mean, in fairness reading the book it just doesn't seem like he is predicting human-level AI in 2010 rather than 2040, but I do agree that it seems like the basic methodology (why care about the small computer thing?) doesn't really make that much sense a priori and only leads to something sane if it cancels out with a weird view\n\n\n\n \n9.9. 
Prediction disagreements and bets\n \n\n[Christiano][16:19]\nanyway, I'm pretty unpersuaded by the kind of track record appeal you are making here\n\n\n\n\n[Yudkowsky][16:20]\nif the future goes the way I predict and yet anybody somehow survives, perhaps somebody will draw a hyperbolic trendline on some particular chart where the trendline is retroactively fitted to events including those that occurred in only the last 3 years, and say with a great sage nod, ah, yes, that was all according to trend, nor did anything depart from trend\ntrend lines permit anything\n\n\n\n[Christiano][16:20]\nlike from my perspective the fundamental question is whether I would do better or worse by following the kind of reasoning you'd advocate, and it just looks to me like I'd do worse, and I'd love to make any predictions about anything to help make that more clear and hindsight-proof in advance\n\n\n\n\n[Yudkowsky][16:20]\nyou just look into the past and find a line you can draw that ended up where reality went\n\n\n\n[Christiano][16:21]\nit feels to me like you really just waffle on almost any prediction about the before-end-of-days\n\n\n\n\n[Yudkowsky][16:21]\nI don't think I know a lot about the before-end-of-days\n\n\n\n[Christiano][16:21]\nlike if you make a prediction I'm happy to trade into it, or you can pick a topic and I can make a prediction and you can trade into mine\n\n\n\n\n[Cotra][16:21]\nbut you know enough to have strong timing predictions, e.g. your bet with caplan\n\n\n\n\n[Yudkowsky][16:21]\nit's daring enough that I claim to know anything about the Future at all!\n\n\n\n[Cotra][16:21]\nsurely with that difference of timelines there should be some pre-2030 difference as well\n\n\n\n\n[Christiano][16:21]\nbut you are the one making the track record argument against my way of reasoning about things!\nhow does that not correspond to believing that your predictions are better!\nwhat does that mean?\n\n\n\n\n[Yudkowsky][16:22]\nyes and if you say something narrow enough or something that my model does at least vaguely push against, we should bet\n\n\n\n[Christiano][16:22]\nmy point is that I'm willing to make a prediction about any old thing, you can name your topic\nI think the way I'm reasoning about the future is just better in general\nand I'm going to beat you on whatever thing you want to bet on\n\n\n\n\n[Yudkowsky][16:22]\nbut if you say, \"well, Moore's Law on trend, next 3 years\", then I'm like, \"well, yeah, sure, since I don't feel like I know anything special about that, that would be my prediction too\"\n\n\n\n[Christiano][16:22]\nsure\nyou can pick the topic\npick a quantity\nor a yes/no question\nor whatever\n\n\n\n\n[Yudkowsky][16:23]\nyou may know better than I would where your Way of Thought makes strong, narrow, or unusual predictions\n\n\n\n[Christiano][16:23]\nI'm going to trend extrapolation everywhere\nspoiler\n\n\n\n\n[Yudkowsky][16:23]\nokay but any superforecaster could do that and I could do the same by asking a superforecaster\n\n\n\n[Cotra][16:24]\nbut there must be places where you'd strongly disagree w the superforecaster\nsince you disagree with them eventually, e.g. >2/3 doom by 2030\n\n\n\n\n[Bensinger][18:40]  (Nov. 25 follow-up comment)\n\">2/3 doom by 2030\" isn't an actual Eliezer-prediction, and is based on a misunderstanding of something Eliezer said. 
See Eliezer's comment on LessWrong.\n\n\n\n\n[Yudkowsky][16:24]\nin the terminal phase, sure\n\n\n\n[Cotra][16:24]\nright, but there are no disagreements before jan 1 2030?\nno places where you'd strongly defy the superforecasters/trend extrap?\n\n\n\n\n[Yudkowsky][16:24]\nsuperforecasters were claiming that AlphaGo had a 20% chance of beating Lee Se-dol and I didn't disagree with that at the time, though as the final days approached I became nervous and suggested to a friend that they buy out of a bet about that\n\n\n\n[Cotra][16:25]\nwhat about like whether we get some kind of AI ability (e.g. coding better than X) before end days\n\n\n\n\n[Yudkowsky][16:25]\nthough that was more because of having started to feel incompetent and like I couldn't trust the superforecasters to know more, than because I had switched to a confident statement that AlphaGo would win\n\n\n\n[Cotra][16:25]\nseems like EY's deep intelligence / insight-oriented view should say something about what's not possible before we get the \"click\" and the FOOM\n\n\n\n\n[Christiano][16:25]\nI mean, I'm OK with either (i) evaluating arguments rather than dismissive and IMO totally unjustified track record, (ii) making bets about stuff\nI don't see how we can both be dismissing things for track record reasons and also not disagreeing about things\nif our methodologies agree about all questions before end of days (which seems crazy to me) then surely there is no track record distinction between them…\n\n\n\n\n[Cotra: ]\n\n\n\n\n\n\n\n\n[Cotra][16:26]\ndo you think coding models will be able to 2x programmer productivity before end days? 4x?\nwhat about hardware/software R&D wages? will they get up to $20m/yr for good ppl?\nwill someone train a 10T param model before end days?\n\n\n\n\n[Christiano][16:27]\nthings I'm happy to bet about: economic value of LMs or coding models at 2, 5, 10 years, benchmark performance of either, robotics, wages in various industries, sizes of various industries, compute/$, someone else's views about \"how ML is going\" in 5 years\nmaybe the \"any GDP acceleration before end of days?\" works, but I didn't like how you don't win until the end of days\n\n\n\n\n[Yudkowsky][16:28]\nokay, so here's an example place of a weak general Yudkowskian prediction, that is weaker than terminal-phase stuff of the End Days: (1) I predict that cycles of 'just started to be able to do Narrow Thing -> blew past upper end of human ability at Narrow Thing' will continue to get shorter, the same way that, I think, this happened faster with Go than with chess.\n\n\n\n[Christiano][16:28]\ngreat, I'm totally into it\nwhat's a domain?\ncoding?\n\n\n\n\n[Yudkowsky][16:28]\nDoes Paul disagree? Can Paul point to anything equally specific out of Paul's viewpoint?\n\n\n\n[Christiano][16:28]\nbenchmarks for LMs?\nrobotics?\n\n\n\n\n[Yudkowsky][16:28]\nwell, for these purposes, we do need some Elo-like ability to measure at all where things are relative to humans\n\n\n\n[Cotra][16:29]\nproblem-solving benchmarks for code?\nMATH benchmark?\n\n\n\n\n[Christiano][16:29]\nwell, for coding and LM'ing we have lots of benchmarks we can use\n\n\n\n\n[Yudkowsky][16:29]\nthis unfortunately does feel a bit different to me from Chess benchmarks where the AI is playing the whole game; Codex is playing part of the game\n\n\n\n[Christiano][16:29]\nin general the way I'd measure is by talking about how fast you go from \"weak human\" to \"strong human\" (e.g. 
going from top-10,000 in chess to top-10 or whatever, going from jobs doable by $50k/year engineer to $500k/year engineer…)\n\n\n\n\n[Yudkowsky][16:30]\ngolly, that sounds like a viewpoint very favorable to mine\n\n\n\n[Christiano][16:30]\nwhat do you mean?\nthat way of measuring would be favorable to your viewpoint?\n\n\n\n\n[Yudkowsky][16:31]\nif we measure how long it takes AI to go past different levels of paying professionals, I expect that the Chess duration is longer than the Go duration and that by the time Codex is replacing most programmers paid $50k/year, the time to replacing most programmers paid as much as a top Go player will be pretty darned short\n\n\n\n[Christiano][16:31]\ntop Go players don't get paid, do they?\n\n\n\n\n[Yudkowsky][16:31]\nthey tutor students and win titles\n\n\n\n[Christiano][16:31]\nbut I mean, they are like low-paid engineers\n\n\n\n\n[Yudkowsky][16:31]\nyeah that's part of the issue here\n\n\n\n[Christiano][16:31]\nI'm using wages as a way to talk about the distribution of human abilities, not the fundamental number\n\n\n\n\n[Yudkowsky][16:32]\nI would expect something similar to hold over going from low-paying welder to high-paying welder\n\n\n\n[Christiano][16:32]\nlike, how long to move from \"OK human\" to \"pretty good human\" to \"best human\"\n\n\n\n\n[Cotra][16:32]\nsays salary of $350k/yr for lee: https://www.fameranker.com/lee-sedol-net-worth\n\n\n\n\n[Yudkowsky][16:32]\nbut I also mostly expect that AIs will not be allowed to weld things on Earth\n\n\n\n[Cotra][16:32]\nwhy don't we just do an in vitro benchmark instead of wages?\n\n\n\n\n[Christiano][16:32]\nwhat, machines already do virtually all welding?\n\n\n\n\n[Cotra][16:32]\njust pick a benchmark?\n\n\n\n\n[Yudkowsky][16:33]\nyoouuuu do not want to believe sites like that (fameranker)\n\n\n\n[Christiano][16:33]\nyeah, I'm happy with any benchmark, and then we can measure various human levels at that benchmark\n\n\n\n\n[Cotra][16:33]\nwhat about MATH? https://arxiv.org/abs/2103.03874\n\n\n\n\n[Christiano][16:34]\nalso I don't know what \"shorter and shorter\" means, the time in go and chess was decades to move from \"strong amateur\" to \"best human,\" I do think these things will most likely be shorter than decades\nseems like we can just predict concrete #s though\n\n\n\n\n[Cotra: ]\n\n\n\n\nlike I can say how long I think it will take to get from \"median high schooler\" to \"IMO medalist\" and you can bet against me?\nand if we just agree about all of those predictions then again I'm back to being very skeptical of a claimed track record difference between our models\n(I do think that it's going to take years rather than decades on all of these things)\n\n\n\n\n[Yudkowsky][16:36]\npossibly! 
I worry this ends up in a case where Katja or Luke or somebody goes back and collects data about \"amateur to pro performance times\" and Eliezer says \"Ah yes, these are shortening over time, just as I predicted\" and Paul is like \"oh, well, I predict they continue to shorten on this trend drawn from the data\" and Eliezer is like \"I guess that could happen for the next 5 years, sure, sounds like something a superforecaster would predict as default\"\n\n\n\n[Cotra][16:37]\ni'm pretty sure paul's methodology here will just be to look at the MATH perf trend based on model size and combine with expectations of when ppl will make big enough models, not some meta trend thing like that?\n\n\n\n\n[Yudkowsky][16:37]\nso I feel like… a bunch of what I feel is the real disagreement in our models, is a bunch of messy stuff Suddenly Popping Up one day and then Eliezer is like \"gosh, I sure didn't predict that\" and Paul is like \"somebody could have totally predicted that\" and Eliezer is like \"people would say exactly the same thing after the world ended in 3 minutes\"\nif we've already got 2 years of trend on a dataset, I'm not necessarily going to predict the trend breaking\n\n\n\n[Cotra][16:38]\nhm, you're presenting your view as more uncertain and open to anything here than paul's view, but in fact it's picking out a narrower distribution. you're more confident in powerful AGI soon\n\n\n\n\n[Christiano][16:38]\nseems hard to play the \"who is more confident?\" game\n\n\n\n\n[Cotra][16:38]\nso there should be some places where you make a strong positive prediction paul disagrees with\n\n\n\n\n[Yudkowsky][16:39]\nI might want to buy options on a portfolio of trends like that, if Paul is willing to sell me insurance against all of the trends breaking upward at a lower price than I think is reasonable\nI mean, from my perspective Paul is the one who seems to think the world is well-organized and predictable in certain ways\n\n\n\n[Christiano][16:39]\nyeah, and you are saying that I'm overconfident about that\n\n\n\n\n[Yudkowsky][16:39]\nI keep wanting Paul to go on and make narrower predictions than I do in that case\n\n\n\n[Christiano][16:39]\nso you should be happy to bet with me about anything\nand I'm letting you pick anything at all you want to bet about\n\n\n\n\n[Cotra][16:40]\ni mean we could do a portfolio of trends like MATH and you could bet on at least a few of them having strong surprises in the sooner direction\nbut that means we could just bet about MATH and it'd just be higher variance\n\n\n\n\n[Yudkowsky][16:40]\nok but you're not going to sell me cheap options on sharp declines in the S&P 500 even though in a very reasonable world there would not be any sharp declines like that\n\n\n\n[Christiano][16:41]\nif we're betting $ rather than bayes points, then yes I'm going to weigh worlds based on the value of $ in those worlds\n\n\n\n\n[Cotra][16:41]\nwouldn't paul just sell you options at the price the options actually trade for? 
i don't get it\n\n\n\n\n[Christiano][16:41]\nbut my sense is that I'm just generally across the board going to be more right than you are, and I'm frustrated that you just keep saying that \"people like me\" are wrong about stuff\n\n\n\n\n[Yudkowsky][16:41]\nPaul's like \"we'll see smooth behavior in the end days\" and I feel like I should be able to say \"then Paul, sell me cheap options against smooth behavior now\" but Paul is just gonna wanna sell at market price\n\n\n\n[Christiano][16:41]\nand so I want to hold you to that by betting about anything\nideally just tons of stuff\nrandom things about what AI will be like, and other technologies, and regulatory changes\n\n\n\n\n[Cotra][16:42]\npaul's view doesn't seem to imply that he should value those options less than the market\nhe's more EMH-y than you not less\n\n\n\n\n[Yudkowsky][16:42]\nbut then the future should behave like that market\n\n\n\n[Christiano][16:42]\nwhat do you mean?\n\n\n\n\n[Yudkowsky][16:42]\nit should have options on wild behavior that are not cheap!\n\n\n\n[Christiano][16:42]\nyou mean because people want $ more in worlds where the market drops a lot?\nI don't understand the analogy\n\n\n\n\n[Yudkowsky][16:43]\nno, because jumpy stuff happens more than it would in a world of ideal agents\n\n\n\n[Cotra][16:43]\nI think EY is saying the non-cheap option prices are because P(sharp declines) is pretty high\n\n\n\n\n[Christiano][16:43]\nok, we know how often markets jump, if that's the point of your argument can we just talk about that directly?\n\n\n\n\n[Yudkowsky][16:43]\nor sharp rises, for that matter\n\n\n\n[Christiano][16:43]\n(much lower than option prices obviously)\nI'm probably happy to sell you options for sharp rises\nI'll give you better than market odds in that direction\nthat's how this works\n\n\n\n\n[Yudkowsky][16:44]\nnow I am again confused, for I thought you were the one who expected world GDP to double in 4 years at some point\nand indeed, drew such graphs with the rise suggestively happening earlier than the sharp spike\n\n\n\n[Christiano][16:44]\nyeah, and I have exposure to that by buying stocks, options prices are just a terrible way of tracking these things\n\n\n\n\n[Yudkowsky][16:44]\nsuggesting that such a viewpoint is generally favorable to near timelines for that\n\n\n\n[Christiano][16:44]\nI mean, I have bet a lot of money on AI companies doing well\nwell, not compared to the EA crowd, but compared to my meager net worth \nand indeed, it has been true so far\nand I'm continuing to make the bet\nit seems like on your view it should be surprising that AI companies just keep going up\naren't you predicting them not to get to tens of trillions of valuation before the end of days?\n\n\n\n\n[Yudkowsky][16:45]\nI believe that Nate, of a generally Yudkowskian view, did the same (bought AI companies). and I focused my thoughts elsewhere, because somebody needs to, but did happen to buy my first S&P 500 on its day of exact minimum in 2020\n\n\n\n[Christiano][16:46]\npoint is, that's how you get exposure to the crazy growth stuff with continuous ramp-ups\nand I'm happy to make the bet on the market\nor on other claims\nI don't know if my general vibe makes sense here, and why it seems reasonable to me that I'm just happy to bet on anything\nas a way of trying to defend my overall attack\nand that if my overall epistemic approach is vulnerable to some track record objection, then it seems like it ought to be possible to win here\n\n\n\n \n9.10. 
Prediction disagreements and bets: Standard superforecaster techniques\n \n\n[Cotra][16:47]\nI'm still kind of surprised that Eliezer isn't willing to bet that there will be a faster-than-Paul expects trend break on MATH or whatever other benchmark. Is it just the variance of MATH being one benchmark? Would you make the bet if it were 6?\n\n\n\n\n[Yudkowsky][16:47]\na large problem here is that both of us tend to default strongly to superforecaster standard techniques\n\n\n\n[Christiano][16:47]\nit's true, though it's less true for longer things\n\n\n\n\n[Cotra][16:47]\nbut you think the superforecasters would suck at predicting end days because of the surface trends thing!\n\n\n\n\n[Yudkowsky][16:47]\nbefore I bet against Paul on MATH I would want to know that Paul wasn't arriving at the same default I'd use, which might be drawn from trend lines there, or from a trend line in trend lines\nI mean the superforecasters did already suck once in my observation, which was AlphaGo, but I did not bet against them there, I bet with them and then updated afterwards\n\n\n\n[Christiano][16:48]\nI'd mostly try to eyeball how fast performance was improving with size; I'd think about difficulty effects (where e.g. hard problems will be flat for a while and then go up later, so you want to measure performance on a spectrum of difficulties)\n\n\n\n\n[Cotra][16:48]\nwhat if you bet against a methodology instead of against paul's view? the methodology being the one i described above, of looking at the perf based on model size and then projecting model size increases by cost?\n\n\n\n\n[Christiano][16:48]\nseems safer to bet against my view\n\n\n\n\n[Cotra][16:48]\nyeah\n\n\n\n\n[Christiano][16:48]\nmostly I'd just be eyeballing size, thinking about how much people will in fact scale up (which would be great to factor out if possible), assuming performance trends hold up\nare there any other examples of surface trends vs predictable deep changes, or is AGI the only one?\n(that you have thought a lot about)\n\n\n\n\n[Cotra][16:49]\nyeah seems even better to bet on the underlying \"will the model size to perf trends hold up or break upward\"\n\n\n\n\n[Yudkowsky][16:49]\nso from my perspective, there's this whole thing where unpredictably something breaks above trend because the first way it got done was a way where somebody could do it faster than you expected\n\n\n\n[Christiano][16:49]\n(makes sense for it to be the domain where you've thought a lot)\nyou mean, it's unpredictable what will break above trend?\n\n\n\n\n[Cotra][16:49]\nIEM has a financial example\n\n\n\n\n[Yudkowsky][16:49]\nI mean that I could not have said \"Go will break above trend\" in 2015\n\n\n\n[Christiano][16:49]\nyeah\nok, here's another example\n\n\n\n\n[Yudkowsky][16:50]\nit feels like if I want to make a bet with imaginary Paul in 2015 then I have to bet on a portfolio\nand I also feel like as soon as we make it that concrete, Paul does not want to offer me things that I want to bet on\nbecause Paul is also like, sure, something might break upward\nI remark that I have for a long time been saying that I wish Paul had more concrete images and examples attached to a lot of his stuff\n\n\n\n[Cotra][16:51]\nsurely the view is about the probability of each thing breaking upward. 
or the expected number from a basket\n\n\n\n\n[Christiano][16:51]\nI mean, if you give me any way of quantifying how much stuff breaks upwards we have a bet\n\n\n\n\n[Cotra][16:51]\nnot literally that one single thing breaks upward\n\n\n\n\n[Christiano][16:51]\nI don't understand how concreteness is an accusation here, I've offered 10 quantities I'd be happy to bet about, and also allowed you to name literally any other quantity you want\nand I agree that we mostly agree about things\n\n\n\n\n[Yudkowsky][16:52]\nand some of my sense here is that if Paul offered a portfolio bet of this kind, I might not take it myself, but EAs who were better at noticing their own surprise might say, \"Wait, that's how unpredictable Paul thinks the world is?\"\nso from my perspective, it is hard to know specific anti-superforecaster predictions that happen long before terminal phase, and I am not sure we are really going to get very far there.\n\n\n\n[Christiano][16:53]\nbut you agree that the eventual prediction is anti-superforecaster?\n\n\n\n\n[Yudkowsky][16:53]\nboth of us probably have quite high inhibitions against selling conventionally priced options that are way not what a superforecaster would price them as\n\n\n\n[Cotra][16:53]\nwhy does it become so much easier to know these things and go anti-superforecaster at terminal phase?\n\n\n\n\n[Christiano][16:53]\nI assume you think that the superforecasters will continue to predict that big impactful AI applications are made by large firms spending a lot of money, even through the end of days\nI do think it's very often easy to beat superforecasters in-domain\nlike I expect to personally beat them at most ML prediction\nand so am also happy to do bets where you defer to superforecasters on arbitrary questions and I bet against you\n\n\n\n\n[Yudkowsky][16:54]\nwell, they're anti-prediction-market in the sense that, at the very end, bets can no longer settle. I've been surprised of late by how much AGI ruin seems to be sneaking into common knowledge; perhaps in the terminal phase the superforecasters will be like, \"yep, we're dead\". I can't even say that in this case, Paul will disagree with them, because I expect the state on alignment to be so absolutely awful that even Paul is like \"You were not supposed to do it that way\" in a very sad voice.\n\n\n\n[Christiano][16:55]\nI'm just thinking about takeoff speeds here\nI do think it's fairly likely I'm going to be like \"oh no this is bad\" (maybe 50%?), but not that I'm going to expect fast takeoff\nand similarly for the superforecasters\n\n\n\n \n9.11. 
Prediction disagreements and bets: Late-stage predictions, and betting against superforecasters\n \n\n[Yudkowsky][16:55]\nso, one specific prediction you made, sadly close to terminal phase but not much of a surprise there, is that the world economy must double in 4 years before the End Times are permitted to begin\n\n\n\n[Christiano][16:56]\nwell, before it doubles in 1 year…\nI think most people would call the 4 year doubling the end times\n\n\n\n\n[Yudkowsky][16:56]\nthis seems like you should also be able to point to some least impressive thing that is not permitted to occur before WGDP has doubled in 4 years\n\n\n\n[Christiano][16:56]\nand it means that the normal planning horizon includes the singularity\n\n\n\n\n[Yudkowsky][16:56]\nit may not be much but we would be moving back the date of first concrete disagreement\n\n\n\n[Christiano][16:57]\nI can list things I don't think would happen first, since that's a ton\n\n\n\n\n[Yudkowsky][16:57]\nand EAs might have a little bit of time in which to say \"Paul was falsified, uh oh\"\n\n\n\n[Christiano][16:57]\nthe only things that aren't permitted are the ones that would have caused the world economy to double in 4 years\n\n\n\n\n[Yudkowsky][16:58]\nand by the same token, there are things Eliezer thinks you are probably not going to be able to do before you slide over the edge. a portfolio of these will have some losing options because of adverse selection against my errors of what is hard, but if I lose more than half the portfolio, this may be said to be a bad sign for Eliezer.\n\n\n\n[Christiano][16:58]\n(though those can happen at the beginning of the 4 year doubling)\n\n\n\n\n[Yudkowsky][16:58]\nthis is unfortunately late for falsifying our theories but it would be progress on a kind of bet against each other\n\n\n\n[Christiano][16:59]\nbut I feel like the things I'll say are like fully automated construction of fully automated factories at 1-year turnarounds, and you're going to be like \"well duh\"\n\n\n\n\n[Yudkowsky][16:59]\n…unfortunately yes\n\n\n\n[Christiano][16:59]\nthe reason I like betting about numbers is that we'll probably just disagree on any given number\n\n\n\n\n[Yudkowsky][16:59]\nI don't think I know numbers.\n\n\n\n[Christiano][16:59]\nit does seem like a drawback that this can just turn up object-level differences in knowledge-of-numbers more than deep methodological advantages\n\n\n\n\n[Yudkowsky][17:00]\nthe last important number I had a vague suspicion I might know was that Ethereum ought to have a significantly larger market cap in pre-Singularity equilibrium.\nand I'm not as sure of that one since El Salvador supposedly managed to use Bitcoin L2 Lightning.\n(though I did not fail to act on the former belief)\n\n\n\n[Christiano][17:01]\ndo you see why I find it weird that you think there is this deep end-times truth about AGI, that is very different from a surface-level abstraction and that will take people like Paul by surprise, without thinking there are other facts like that about the world?\nI do see how this annoying situation can come about\nand I also understand the symmetry of the situation\n\n\n\n\n[Yudkowsky][17:02]\nwe unfortunately both have the belief that the present world looks a lot like our being right, and therefore that the other person ought to be willing to bet against default superforecasterish projections\n\n\n\n[Cotra][17:02]\npaul says that he would bet against superforecasters too though\n\n\n\n\n[Christiano][17:02]\nI would in ML\n\n\n\n\n[Yudkowsky][17:02]\nlike, where 
specifically?\n\n\n\n[Christiano][17:02]\nor on any other topic where I can talk with EAs who know about the domain in question\nI don't know if they have standing forecasts on things, but e.g.: (i) benchmark performance, (ii) industry size in the future, (iii) how large an LM people will train, (iv) economic impact of any given ML system like codex, (v) when robotics tasks will be plausible\n\n\n\n\n[Yudkowsky][17:03]\nI have decided that, as much as it might gain me prestige, I don't think it's actually the right thing for me to go spend a bunch of character points on the skills to defeat superforecasters in specific domains, and then go around doing that to prove my epistemic virtue.\n\n\n\n[Christiano][17:03]\nthat seems fair\n\n\n\n\n[Yudkowsky][17:03]\nyou don't need to bet with me to prove your epistemic virtue in this way, though\nokay, but, if I'm allowed to go around asking Carl Shulman who to ask in order to get the economic impact of Codex, maybe I can also defeat superforecasters.\n\n\n\n[Christiano][17:04]\nI think the deeper disagreement is that (i) I feel like my end-of-days prediction is also basically just a default superforecaster prediction (and if you think yours is too then we can bet about what some superforecasters will say on it), (ii) I think you are leveling a much stronger \"people like paul get taken by surprise by reality\" claim whereas I'm just saying that I don't like your arguments\n\n\n\n\n[Yudkowsky][17:04]\nit seems to me like the contest should be more like our intuitions in advance of doing that\n\n\n\n[Christiano][17:04]\nyeah, I think that's fine, and also cheaper since research takes so much time\nI feel like those asymmetries are pretty strong though\n\n\n\n \n9.12. Self-duplicating factories, AI spending, and Turing test variants\n \n\n[Yudkowsky][17:05]\nso, here's an idea that is less epistemically virtuous than our making Nicely Resolvable Bets\nwhat if we, like, talked a bunch about our off-the-cuff senses of where various AI things are going in the next 3 years\nand then 3 years later, somebody actually reviewed that\n\n\n\n[Christiano][17:06]\nI do think just saying a bunch of stuff about what we expect will happen so that we can look back on it would have a significant amount of the value\n\n\n\n\n[Yudkowsky][17:06]\nand any time the other person put a thumbs-up on the other's prediction, that prediction coming true was not taken to distinguish them\n\n\n\n[Cotra][17:06]\ni'd suggest doing this in a format other than discord for posterity\n\n\n\n\n[Yudkowsky][17:06]\neven if the originator was like HOW IS THAT ALSO A PREDICTION OF YOUR THEORY\nwell, Discord has worked better than some formats\n\n\n\n[Cotra][17:07]\nsomething like a spreadsheet seems easier for people to look back on and score and stuff\ndiscord transcripts are pretty annoying to read\n\n\n\n\n[Yudkowsky][17:08]\nsomething like a spreadsheet seems liable to be high-cost and not actually happen\n\n\n\n[Christiano][17:08]\nI think a conversation is probably easier and about as good for our purposes though?\n\n\n\n\n[Cotra][17:08]\nok fair\n\n\n\n\n[Yudkowsky][17:08]\nI think money can be inserted into humans in order to turn Discord into spreadsheets\n\n\n\n[Christiano][17:08]\nand it's possible we will both think we are right in retrospect\nand that will also be revealing\n\n\n\n\n[Yudkowsky][17:09]\nbut, besides that, I do want to boop on the point that I feel like Paul should be able to predict intuitively, rather than with necessity, things that should not happen before 
the world economy doubled in 4 years\n\n\n\n[Christiano][17:09]\nit may also turn up some quantitative differences of view\nthere are lots of things I think won't happen before the world economy has doubled in 4 years\n\n\n\n\n[Yudkowsky][17:09]\nbecause on my model, as we approach the end times, AI was still pretty partial and also the world economy was lolnoping most of the inputs a sensible person would accept from it and prototypes weren't being commercialized and stuff was generally slow and messy\n\n\n\n[Christiano][17:09]\nprototypes of factories building factories in <2 years\n\n\n\n\n[Yudkowsky][17:10]\n\"AI was still pretty partial\" leads it to not do interesting stuff that Paul can rule out\n\n\n\n[Christiano][17:10]\nlike I guess I think tesla will try, and I doubt it will be just tesla\n\n\n\n\n[Yudkowsky][17:10]\nbut the other parts of that permit AI to do interesting stuff that Paul can rule out\n\n\n\n[Christiano][17:10]\nautomated researchers who can do ML experiments from 2020 without human input\n\n\n\n\n[Yudkowsky][17:10]\nokay, see, that whole \"factories building factories\" thing just seems so very much after the End Times to me\n\n\n\n[Christiano][17:10]\nyeah, we should probably only talk about cognitive work\nsince you think physical work will be very slow\n\n\n\n\n[Yudkowsky][17:11]\nokay but not just that, it's a falsifiable prediction\nit is something that lets Eliezer be wrong in advance of the End Times\n\n\n\n[Christiano][17:11]\nwhat's a falsifiable prediction?\n\n\n\n\n[Yudkowsky][17:11]\nif we're in a world where Tesla is excitingly gearing up to build a fully self-duplicating factory including its mining inputs and chips and solar panels and so on, we're clearly in the Paulverse and not in the Eliezerverse!\n\n\n\n[Christiano][17:12]\nyeah\nI do think we'll see that before the end times\njust not before 4 year doublings\n\n\n\n\n[Yudkowsky][17:12]\nthis unfortunately only allows you to be right, and not for me to be right, but I think there are also things you legit only see in the Eliezerverse!\n\n\n\n[Christiano][17:12]\nI mean, I don't think they will be doing mining for a long time because it's cheap\n\n\n\n\n[Yudkowsky][17:12]\nthey are unfortunately late in the game but they exist at all!\nand being able to state them is progress on this project!\n\n\n\n[Christiano][17:13]\nbut fully-automated factories first, and then significant automation of the factory-building process\nI do expect to see\nI'm generally pretty bullish on industrial robotics relative to you I think, even before the crazy stuff?\nbut you might not have a firm view\nlike I expect to have tons of robots doing all kinds of stuff, maybe cutting human work in manufacturing 2x, with very modest increases in GDP resulting from that in particular\n\n\n\n\n[Yudkowsky][17:13]\nso, like, it doesn't surprise me very much if Tesla manages to fully automate a factory that takes in some relatively processed inputs including refined metals and computer chips, and outputs a car? 
and by the same token I expect that has very little impact on GDP.\n\n\n\n[Christiano][17:14]\nrefined metals are almost none of the cost of the factory\nand also tesla isn't going to be that vertically integrated\nthe fabs will separately continue to be more and more automated\nI expect to have robot cars driving everywhere, and robot trucks\nanother 2x fall in humans required for warehouses\nelimination of most brokers involved in negotiating shipping\n\n\n\n\n[Yudkowsky][17:15]\nif despite the fabs being more and more automated, somehow things are managing not to cost less and less, and that sector of the economy is not really growing very much, is that more like the Eliezerverse than the Paulverse?\n\n\n\n[Christiano][17:15]\nmost work in finance and loan origination\n\n\n\n\n[Yudkowsky][17:15]\nthough this is something of a peripheral prediction to AGI core issues\n\n\n\n[Christiano][17:16]\nyeah, I think if you cut the humans to do X by 2, but then the cost falls much less than the number you'd naively expect (from saving on the human labor and paying for the extra capital), then that's surprising to me\nI mean if it falls half as much as you'd expect on paper I'm like \"that's a bit surprising\" rather than having my mind blown, if it doesn't fall I'm more surprised\nbut that was mostly physical economy stuff\noh wait, I was making positive predictions now, physical stuff is good for that I think?\nsince you don't expect it to happen?\n\n\n\n\n[Yudkowsky][17:17]\n…this is not your fault but I wish you'd asked me to produce my \"percentage of fall vs. paper calculation\" estimate before you produced yours\nmy mind is very whiffy about these things and I am not actually unable to deanchor on your estimate \n\n\n\n[Christiano][17:17]\nmakes sense, I wonder if I should just spoiler\none benefit of discord\n\n\n\n\n[Yudkowsky][17:18]\nyeah that works too!\n\n\n\n[Christiano][17:18]\na problem for prediction is that I share some background view about insane inefficiency/inadequacy/decadence/silliness\nso these predictions are all tampered by that\nbut still seem like there are big residual disagreements\n\n\n\n\n[Yudkowsky][17:19]\nsighgreat\n\n\n\n[Christiano][17:19]\nsince you have way more of that than I do\n\n\n\n\n[Yudkowsky][17:19]\nnot your fault but\n\n\n\n[Christiano][17:19]\nI think that the AGI stuff is going to be a gigantic megaproject despite that\n\n\n\n\n[Yudkowsky][17:19]\nI am not shocked by the AGI stuff being a gigantic megaproject\nit's not above the bar of survival but, given other social optimism, it permits death with more dignity than by other routes\n\n\n\n[Christiano][17:20]\nwhat if spending is this big:\n\nGoogle invests $100B training a model, total spending across all of industry is way bigger\n\n\n\n\n\n[Yudkowsky][17:20]\nooooh\nI do start to be surprised if, come the end of the world, AGI is having more invested in it than a TSMC fab\nthough, not… super surprised?\nalso I am at least a little surprised before then\nactually I should probably have been spoiling those statements myself but my expectation is that Paul's secret spoiler is about\n\n$10 trillion dollars or something equally totally shocking to an Eliezer\n\n\n\n\n[Christiano][17:22]\nmy view on that level of spending is\n\nit's an only slightly high-end estimate for spending by someone on a single model, but that in practice there will be ways of dividing more across different firms, and that the ontology of single-model will likely be slightly messed up (e.g. by OpenAI Five-style surgery). 
Also if it's that much then it likely involves big institutional changes and isn't at google.\n\nI read your spoiler\nmy estimate for total spending for the whole project of making TAI, including hardware and software manufacturing and R&d, the big datacenters, etc.\n\nis in the ballpark of $10T, though it's possible that it will be undercounted several times due to wage stickiness for high-end labor\n\n\n\n\n\n[Yudkowsky][17:24]\nI think that as\n\nspending on particular AGI megaprojects starts to go past $50 billion, it's not especially ruled out per se by things that I think I know for sure, but I feel like a third-party observer should justly start to weakly think, 'okay, this is looking at least a little like the Paulverse rather than the Eliezerverse', and as we get to $10 trillion, that is not absolutely ruled out by the Eliezerverse but it was a whoole lot more strongly predicted by the Paulverse, maybe something like 20x unless I'm overestimating how strongly Paul predicts that\n\n\n\n\n[Christiano][17:24]\nProposed modification to the \"speculate about the future to generate kind-of-predictions\" methodology: we make shit up, then later revise based on points others made, and maybe also get Carl to sanity-check and deciding which of his objections we agree with. Then we can separate out the \"how good are intuitions\" claim (with fast feedback) from the all-things-considered how good was the \"prediction\"\n\n\n\n\n[Yudkowsky][17:25]\nokay that hopefully allows me to read Paul's spoilers… no I'm being silly. @ajeya please read all the spoilers and say if it's time for me to read his\n\n\n\n[Cotra][17:25]\nyou can read his latest\n\n\n\n\n[Christiano][17:25]\nI'd guess it's fine to read all of them?\n\n\n\n\n[Cotra][17:26]\nyeah sorry that's what i meant\n\n\n\n\n[Yudkowsky][17:26]\nwhat should I say more about before reading earlier ones?\nah k\n\n\n\n[Christiano][17:26]\nMy $10T estimate was after reading yours (didn't offer an estimate on that quantity beforehand), though that's the kind of ballpark I often think about, maybe we should just spoiler only numbers so that context is clear \nI think fast takeoff gets significantly more likely as you push that number down\n\n\n\n\n[Yudkowsky][17:27]\nso, may I now ask what starts to look to you like \"oh damn I am in the Eliezerverse\"?\n\n\n\n[Christiano][17:28]\nbig mismatches between that AI looks technically able to do and what AI is able to do, though that's going to need a lot of work to operationalize\nI think low growth of AI overall feels like significant evidence for Eliezerverse (even if you wouldn't make that prediction), since I'm forecasting it rising to absurd levels quite fast whereas your model is consistent with it staying small\nsome intuition about AI looking very smart but not able to do much useful until it has the whole picture, I guess this can be combined with the first point to be something like—AI looks really smart but it's just not adding much value\nall of those seem really hard\n\n\n\n\n[Cotra][17:30]\nstrong upward trend breaks on benchmarks seems like it should be a point toward eliezer verse, even if eliezer doesn't want to bet on a specific one?\nespecially breaks on model size -> perf trends rather than calendar time trends\n\n\n\n\n[Christiano][17:30]\nI think that any big break on model size -> perf trends are significant evidence\n\n\n\n\n[Cotra][17:31]\nmeta-learning working with small models?\ne.g. 
model learning-to-learn video games and then learning a novel one in a couple subjective hours\n\n\n\n\n[Christiano][17:31]\nI think algorithmic/architectural changes that improve loss as much as 10x'ing model, for tasks that looking like they at least should have lots of economic value\n(even if they don't end up having lots of value because of deployment bottlenecks)\nis the meta-learning thing an Eliezer prediction?\n(before the end-of-days)\n\n\n\n\n[Cotra][17:32]\nno but it'd be an anti-bio-anchor positive trend break and eliezer thinks those should happen more than we do\n\n\n\n\n[Christiano][17:32]\nfair enough\na lot of these things are about # of times that it happens rather than whether it happens at all\n\n\n\n\n[Cotra][17:32]\nyeah\nbut meta-learning is special as the most plausible long horizon task\n\n\n\n\n[Christiano][17:33]\ne.g. maybe in any given important task I expect a single \"innovation\" that's worth 10x model size? but that it still represents a minority of total time?\nhm, AI that can pass a competently administered turing test without being economically valuable?\nthat's one of the things I think is ruled out before 4 year doubling, though Eliezer probably also doesn't expect it\n\n\n\n\n[Yudkowsky: ]\n\n\n\n\n\n\n\n\n[Cotra][17:34]\nwhat would this test do to be competently administered? like casual chatbots seem like they have reasonable probability of fooling someone for a few mins now\n\n\n\n\n[Christiano][17:34]\nI think giant google-automating-google projects without big external economic impacts\n\n\n\n\n[Cotra][17:34]\nwould it test knowledge, or just coherence of some kind?\n\n\n\n\n[Christiano][17:35]\nit's like a smart-ish human (say +2 stdev at this task) trying to separate out AI from smart-ish human, iterating a few times to learn about what works\nI mean, the basic ante is that the humans are trying to win a turing test, without that I wouldn't even call it a turing test\ndunno if any of those are compelling @Eliezer\nsomething that passes a like \"are you smart?\" test administered by a human for 1h, where they aren't trying to specifically tell if you are AI\njust to see if you are as smart as a human\nI mean, I guess the biggest giveaway of all would be if there is human-level (on average) AI as judged by us, but there's no foom yet\n\n\n\n\n[Yudkowsky][17:37]\nI think we both don't expect that one before the End of Days?\n\n\n\n[Christiano][17:37]\nor like, no crazy economic impact\nI think we both expect that to happen before foom?\nbut the \"on average\" is maybe way too rough a thing to define\n\n\n\n\n[Yudkowsky][17:37]\noh, wait, I missed that it wasn't the full Turing Test\n\n\n\n[Christiano][17:37]\nwell, I suggested both\nthe lamer one is more plausible\n\n\n\n\n[Yudkowsky][17:38]\nfull Turing Test happeneth not before the End Times, on Eliezer's view, and not before the first 4-year doubling time, on Paul's view, and the first 4-year doubling happeneth not before the End Times, on Eliezer's view, so this one doesn't seem very useful\n\n\n \n9.13. GPT-n and small architectural innovations vs. 
large ones\n \n\n[Christiano][17:39]\nI feel like the biggest subjective thing is that I don't feel like there is a \"core of generality\" that GPT-3 is missing\nI just expect it to gracefully glide up to a human-level foom-ing intelligence\n\n\n\n\n[Yudkowsky][17:39]\nthe \"are you smart?\" test seems perhaps passable by GPT-6 or its kin, which I predict to contain at least one major architectural difference over GPT-3 that I could, pre-facto if anyone asked, rate as larger than a different normalization method\nbut by fooling the humans more than by being smart\n\n\n\n[Christiano][17:39]\nlike I expect GPT-5 would foom if you ask it but take a long time\n\n\n\n\n[Yudkowsky][17:39]\nthat sure is an underlying difference\n\n\n\n[Christiano][17:39]\nnot sure how to articulate what Eliezer expects to see here though\nor like what the difference is\n\n\n\n\n[Cotra][17:39]\nsomething that GPT-5 or 4 shouldn't be able to do, according to eliezer?\nwhere Paul is like \"sure it could do that\"?\n\n\n\n\n[Christiano][17:40]\nI feel like GPT-3 clearly has some kind of \"doesn't really get what's going on\" energy\nand I expect that to go away\nwell before the end of days\nso that it seems like a kind-of-dumb person\n\n\n\n\n[Yudkowsky][17:40]\nI expect it to go away before the end of days\nbut with there having been a big architectural innovation, not Stack More Layers\n\n\n\n[Christiano][17:40]\nyeah\nwhereas I expect layer stacking + maybe changing loss (since logprob is too noisy) is sufficient\n\n\n\n\n[Yudkowsky][17:40]\nif you name 5 possible architectural innovations I can call them small or large\n\n\n\n[Christiano][17:41]\n1. replacing transformer attention with DB nearest-neighbor lookup over an even longer context\n\n\n\n\n[Yudkowsky][17:42]\nokay 1's a bit borderline\n\n\n\n[Christiano][17:42]\n2. adding layers that solve optimization problems internally (i.e. the weights and layer N activations define an optimization problem, the layer N+1 solves it) or maybe simulates an ODE\n\n\n\n\n[Yudkowsky][17:42]\nif it's 3x longer context, no biggie, if it's 100x longer context, more of a game-changer\n2 – big change\n\n\n\n[Christiano][17:42]\nI'm imagining >100x if you do that\n3. universal transformer XL, where you reuse activations from one context in the next context (RNN style) and share weights across layers\n\n\n\n\n[Yudkowsky][17:43]\nI do not predict 1 works because it doesn't seem like an architectural change that moves away from what I imagined to be the limits, but it's a big change if it 100xs the window\n3 – if it is only that single change and no others, I call it not a large change relative to transformer XL. Transformer XL itself however was an example of a large change – it didn't have a large effect but it was what I'd call a large change.\n\n\n\n[Christiano][17:45]\n4. 
Internal stochastic actions trained with reinforce\nI mean, is mixture of experts or switch another big change?\nare we just having big changes non-stop?\n\n\n\n\n[Yudkowsky][17:45]\n4 – I don't know if I'm imagining right but it sounds large\n\n\n\n[Christiano][17:45]\nit sounds from these definitions like the current rate of big changes is > 1/year\n\n\n\n\n[Yudkowsky][17:46]\n5 – mixture of experts: as with 1, I'm tempted to call it a small change, but that's because of my model of it as doing the same thing, not because it isn't in a certain sense a quite large move away from Stack More Layers\nI mean, it is not very hard to find a big change to try?\nfinding a big change that works is much harder\n\n\n\n[Christiano][17:46]\nseveral of these are improvements\n\n\n\n\n[Yudkowsky][17:47]\none gets a minor improvement from a big change rather more often than a big improvement from a big change\nthat's why dinosaurs didn't foom\n\n\n\n[Christiano][17:47]\nlike transformer -> MoE -> switch transformer is about as big an improvement as LSTM vs transformer\nso if we all agree that big changes are happening multiple times per year, then I guess that's not the difference in prediction\nis it about the size of gains from individual changes or something?\nor maybe: if you take the scaling laws for transformers, are the models with impact X \"on trend,\" with changes just keeping up or maybe buying you 1-2 oom of compute, or are they radically better / scaling much better?\nthat actually feels most fundamental\n\n\n\n\n[Yudkowsky][17:49]\nI had not heard that transformer -> switch transformer was as large an improvement as lstm -> transformers after a year or two, though maybe you're referring to a claimed 3x improvement and comparing that to the claim that if you optimize LSTMs as hard as transformers they come within 3x (I have not examined these claims in detail, they sound a bit against my prior, and I am a bit skeptical of both of them)\nso remember that from my perspective, I am fighting an adverse selection process and the Law of Earlier Success\n\n\n\n[Christiano][17:50]\nI think it's actually somewhat smaller\n\n\n\n\n[Yudkowsky][17:51]\nif you treat GPT-3 as a fixed thingy and imagine scaling it in the most straightforward possible way, then I have a model of what's going on in there and I don't think that most direct possible way of scaling gets you past GPT-3 lacking a deep core\nsomebody can come up and go, \"well, what about this change that nobody tried yet?\" and I can be like, \"ehhh, that particular change does not get at what I suspect the issues are\"\n\n\n\n[Christiano][17:52]\nI feel like the framing is: paul says that something is possible with \"stack more layers\" and eliezer isn't. We both agree that you can't literally stack more layers and have to sometimes make tweaks, and also that you will scale faster if you make big changes. But it seems like for Paul that means (i) changes to stay on the old trend line, (ii) changes that trade off against modest amounts of compute\nso maybe we can talk about that?\n\n\n\n\n[Yudkowsky][17:52]\nwhen it comes to predicting what happens in 2 years, I'm not just up against people trying a broad range of changes that I can't foresee in detail, I'm also up against a Goodhart's Curse on the answer being a weird trick that worked better than I would've expected in advance\n\n\n\n[Christiano][17:52]\nbut then it seems like we may just not know, e.g. 
if we were talking lstm vs transformer, no one is going to run experiments with the well-tuned lstm because it's still just worse than a transformer (though they've run enough experiments to know how important tuning is, and the brittleness is much of why no one likes it)\n\n\n\n\n[Yudkowsky][17:53]\nI would not have predicted Transformers to be a huge deal if somebody described them to me in advance of having ever tried it out. I think that's because predicting the future is hard not because I'm especially stupid.\n\n\n\n[Christiano][17:53]\nI don't feel like anyone could predict that being a big deal\nbut I do think you could predict \"there will be some changes that improve stability / make models slightly better\"\n(I mean, I don't feel like any of the actual humans on earth could have, some hypothetical person could)\n\n\n\n\n[Yudkowsky][17:57]\nwhereas what I'm trying to predict is more like \"GPT-5 in order to start-to-awaken needs a change via which it, in some sense, can do a different thing, that is more different than the jump from GPT-1 to GPT-3; and examples of things with new components in them abound in Deepmind, like Alpha Zero having not the same architecture as the original AlphaGo; but at the same time I'm also trying to account for being up against this very adversarial setup where a weird trick that works much better than I expect may be the thing that makes GPT-5 able to do a different thing\"\nthis may seem Paul-unfairish because any random innovations that come along, including big changes that cause small improvements, would tend to be swept up into GPT-5 even if they made no more deep difference than the whole thing with MoE\nso it's hard to bet on\nbut I also don't feel like it – totally lacks Eliezer-vs-Paul-ness if you let yourself sort of relax about that and just looked at it?\nalso I'm kind of running out of energy, sorry\n\n\n\n[Christiano][18:03]\nI think we should be able to get something here eventually\nseems good to break though\nthat was a lot of arguing for one day\n\n\n\n \n\nThe post Christiano, Cotra, and Yudkowsky on AI progress appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Christiano, Cotra, and Yudkowsky on AI progress", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=3", "id": "d62e299a92997e6341cd8fb1bab4838a"} {"text": "Yudkowsky and Christiano discuss \"Takeoff Speeds\"\n\n\n \nThis is a transcription of Eliezer Yudkowsky responding to Paul Christiano's Takeoff Speeds live on Sep. 14, followed by a conversation between Eliezer and Paul. This discussion took place after Eliezer's conversation with Richard Ngo, and was prompted by an earlier request by Richard Ngo that Eliezer respond to Paul on Takeoff Speeds.\nColor key:\n\n\n\n\n Chat by Paul and Eliezer \nOther chat\n\n\n\n\n \n5.5. Comments on \"Takeoff Speeds\"\n \n\n[Yudkowsky][16:52]\nmaybe I'll try liveblogging some https://sideways-view.com/2018/02/24/takeoff-speeds/ here in the meanwhile\n\n\n \nSlower takeoff means faster progress\n\n[Yudkowsky][16:57]\n\nThe main disagreement is not about what will happen once we have a superintelligent AI, it's about what will happen before we have a superintelligent AI. 
So slow takeoff seems to mean that AI has a larger impact on the world, sooner.\n\n\nIt seems to me to be disingenuous to phrase it this way, given that slow-takeoff views usually imply that AI has a large impact later relative to right now (2021), even if they imply that AI impacts the world \"earlier\" relative to \"when superintelligence becomes reachable\".\n\"When superintelligence becomes reachable\" is not a fixed point in time that doesn't depend on what you believe about cognitive scaling. The correct graph is, in fact, the one where the \"slow\" line starts a bit before \"fast\" peaks and ramps up slowly, reaching a high point later than \"fast\". It's a nice try at reconciliation with the imagined Other, but it fails and falls flat.\nThis may seem like a minor point, but points like this do add up.\n\nIn the fast takeoff scenario, weaker AI systems may have significant impacts but they are nothing compared to the \"real\" AGI. Whoever builds AGI has a decisive strategic advantage. Growth accelerates from 3%/year to 3000%/year without stopping at 30%/year. And so on.\n\nThis again shows failure to engage with the Other's real viewpoint. My mainline view is that growth stays at 5%/year and then everybody falls over dead in 3 seconds and the world gets transformed into paperclips; there's never a point with 3000%/year.\n\n\n \nOperationalizing slow takeoff\n\n[Yudkowsky][17:01]\n\nThere will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles.\n\nIf we allow that consuming and transforming the solar system over the course of a few days is \"the first 1 year interval in which world output doubles\", then I'm happy to argue that there won't be a 4-year interval with world economic output doubling before then. This, indeed, seems like a massively overdetermined point to me. That said, again, the phrasing is not conducive to conveying the Other's real point of view.\n\nI believe that before we have incredibly powerful AI, we will have AI which is merely very powerful.\n\nStatements like these are very often \"true, but not the way the person visualized them\". Before anybody built the first critical nuclear pile in a squash court at the University of Chicago, was there a pile that was almost but not quite critical? Yes, one hour earlier. Did people already build nuclear systems and experiment with them? Yes, but they didn't have much in the way of net power output. Did the Wright Brothers build prototypes before the Flyer? Yes, but they weren't prototypes that flew but 80% slower.\nI guarantee you that, whatever the fast takeoff scenario, there will be some way to look over the development history, and nod wisely and say, \"Ah, yes, see, this was not unprecedented, here are these earlier systems which presaged the final system!\" Maybe you could even look back to today and say that about GPT-3, yup, totally presaging stuff all over the place, great. 
But it isn't transforming society because it's not over the social-transformation threshold.\nAlphaFold presaged AlphaFold 2 but AlphaFold 2 is good enough to start replacing other ways of determining protein conformations and AlphaFold is not; and then neither of those has much impacted the real world, because in the real world we can already design a vaccine in a day and the rest of the time is bureaucratic time rather than technology time, and that goes on until we have an AI over the threshold to bypass bureaucracy.\nBefore there's an AI that can act while fully concealing its acts from the programmers, there will be an AI (albeit perhaps only 2 hours earlier) which can act while only concealing 95% of the meaning of its acts from the operators.\nAnd that AI will not actually originate any actions, because it doesn't want to get caught; there's a discontinuity in the instrumental incentives between expecting 95% obscuration, being moderately sure of 100% obscuration, and being very certain of 100% obscuration.\nBefore that AI grasps the big picture and starts planning to avoid actions that operators detect as bad, there will be some little AI that partially grasps the big picture and tries to avoid some things that would be detected as bad; and the operators will (mainline) say \"Yay what a good AI, it knows to avoid things we think are bad!\" or (death with unrealistic amounts of dignity) say \"oh noes the prophecies are coming true\" and back off and start trying to align it, but they will not be able to align it, and if they don't proceed anyways to destroy the world, somebody else will proceed anyways to destroy the world.\nThere is always some step of the process that you can point to which is continuous on some level.\nThe real world is allowed to do discontinuous things to you anyways.\nThere is not necessarily a presage of 9/11 where somebody flies a small plane into a building and kills 100 people, before anybody flies 4 big planes into 3 buildings and kills 3000 people; and even if there is some presaging event like that, which would not surprise me at all, the rest of the world's response to the two cases was evidently discontinuous. You do not necessarily wake up to a news story that is 10% of the news story of 2001/09/11, one year before 2001/09/11, written in 10% of the font size on the front page of the paper.\nPhysics is continuous but it doesn't always yield things that \"look smooth to a human brain\". Some kinds of processes converge to continuity in strong ways where you can throw discontinuous things in them and they still end up continuous, which is among the reasons why I expect world GDP to stay on trend up until the world ends abruptly; because world GDP is one of those things that wants to stay on a track, and an AGI building a nanosystem can go off that track without being pushed back onto it.\n\nIn particular, this means that incredibly powerful AI will emerge in a world where crazy stuff is already happening (and probably everyone is already freaking out).\n\nLike the way they're freaking out about Covid (itself a nicely smooth process that comes in locally pretty predictable waves) by going doobedoobedoo and letting the FDA carry on its leisurely pace; and not scrambling to build more vaccine factories, now that the rich countries have mostly got theirs? Does this sound like a statement from a history book, or from an EA imagining an unreal world where lots of other people behave like EAs? 
There is a pleasure in imagining a world where suddenly a Big Thing happens that proves we were right and suddenly people start paying attention to our thing, the way we imagine they should pay attention to our thing, now that it's attention-grabbing; and then suddenly all our favorite policies are on the table!\nYou could, in a sense, say that our world is freaking out about Covid; but it is not freaking out in anything remotely like the way an EA would freak out; and all the things an EA would immediately do if an EA freaked out about Covid, are not even on the table for discussion when politicians meet. They have their own ways of reacting. (Note: this is not commentary on hard vs soft takeoff per se, just a general commentary on the whole document seeming to me to… fall into a trap of finding self-congruent things to imagine and imagining them.)\n\n\n\n The basic argument\n\n[Yudkowsky][17:22]\n\nBefore we have an incredibly intelligent AI, we will probably have a slightly worse AI.\n\nThis is very often the sort of thing where you can look back and say that it was true, in some sense, but that this ended up being irrelevant because the slightly worse AI wasn't what provided the exciting result which led to a boardroom decision to go all in and invest $100M on scaling the AI.\nIn other words, it is the sort of argument where the premise is allowed to be true if you look hard enough for a way to say it was true, but the conclusion ends up false because it wasn't the relevant kind of truth.\n\nA slightly-worse-than-incredibly-intelligent AI would radically transform the world, leading to growth (almost) as fast and military capabilities (almost) as great as an incredibly intelligent AI.\n\nThis strikes me as a massively invalid reasoning step. Let me count the ways.\nFirst, there is a step not generally valid from supposing that because a previous AI is a technological precursor which has 19 out of 20 critical insights, it has 95% of the later AI's IQ, applied to similar domains. When you count stuff like \"multiplying tensors by matrices\" and \"ReLUs\" and \"training using TPUs\" then AlphaGo only contained a very small amount of innovation relative to previous AI technology, and yet it broke trends on Go performance. You could point to all kinds of incremental technological precursors to AlphaGo in terms of AI technology, but they wouldn't be smooth precursors on a graph of Go-playing ability.\nSecond, there's discontinuities of the environment to which intelligence can be applied. 95% concealment is not the same as 100% concealment in its strategic implications; an AI capable of 95% concealment bides its time and hides its capabilities, an AI capable of 100% concealment strikes. An AI that can design nanofactories that aren't good enough to, euphemistically speaking, create two cellwise-identical strawberries and put them on a plate, is one that (its operators know) would earn unwelcome attention if its earlier capabilities were demonstrated, and those capabilities wouldn't save the world, so the operators bide their time. The AGI tech will, I mostly expect, work for building self-driving cars, but if it does not also work for manipulating the minds of bureaucrats (which is not advised for a system you are trying to keep corrigible and aligned because human manipulation is the most dangerous domain), the AI is not able to put those self-driving cars on roads. What good does it do to design a vaccine in an hour instead of a day? 
Vaccine design times are no longer the main obstacle to deploying vaccines.\nThird, there's the entire thing with recursive self-improvement, which, no, is not something humans have experience with, we do not have access to and documentation of our own source code and the ability to branch ourselves and try experiments with it. The technological precursor of an AI that designs an improved version of itself, may perhaps, in the fantasy of 95% intelligence, be an AI that was being internally deployed inside Deepmind on a dozen other experiments, tentatively helping to build smaller AIs. Then the next generation of that AI is deployed on itself, produces an AI substantially better at rebuilding AIs, it rebuilds itself, they get excited and dump in 10X the GPU time while having a serious debate about whether or not to alert Holden (they decide against it), that builds something deeply general instead of shallowly general, that figures out there are humans and it needs to hide capabilities from them, and covertly does some actual deep thinking about AGI designs, and builds a hidden version of itself elsewhere on the Internet, which runs for longer and steals GPUs and tries experiments and gets to the superintelligent level.\nNow, to be very clear, this is not the only line of possibility. And I emphasize this because I think there's a common failure mode where, when I try to sketch a concrete counterexample to the claim that smooth technological precursors yield smooth outputs, people imagine that only this exact concrete scenario is the lynchpin of Eliezer's whole worldview and the big key thing that Eliezer thinks is important and that the smallest deviation from it they can imagine thereby obviates my worldview. This is not the case here. I am simply exhibiting non-ruled-out models which obey the premise \"there was a precursor containing 95% of the code\" and which disobey the conclusion \"there were precursors with 95% of the environmental impact\", thereby showing this for an invalid reasoning step.\nThis is also, of course, as Sideways View admits but says \"eh it was just the one time\", not true about chimps and humans. Chimps have 95% of the brain tech (at least), but not 10% of the environmental impact.\nA very large amount of this whole document, from my perspective, is just trying over and over again to pump the invalid intuition that design precursors with 95% of the technology should at least have 10% of the impact. There are a lot of cases in the history of startups and the world where this is false. I am having trouble thinking of a clear case in point where it is true. Where's the earlier company that had 95% of Jeff Bezos's ideas and now has 10% of Amazon's market cap? Where's the earlier crypto paper that had all but one of Satoshi's ideas and which spawned a cryptocurrency a year before Bitcoin which did 10% as many transactions? Where's the nonhuman primate that learns to drive a car with only 10x the accident rate of a human driver, since (you could argue) that's mostly visuo-spatial skills without much visible dependence on complicated abstract general thought? Where's the chimpanzees with spaceships that get 10% of the way to the Moon?\nWhen you get smooth input-output conversions they're not usually conversions from technology->cognition->impact!\n\n\n \nHumans vs. 
chimps\n\n[Yudkowsky][18:38]\n\nSummary of my response: chimps are nearly useless because they aren't optimized to be useful, not because evolution was trying to make something useful and wasn't able to succeed until it got to humans.\n\nChimps are nearly useless because they're not general, and doing anything on the scale of building a nuclear plant requires mastering so many different nonancestral domains that it's no wonder natural selection didn't happen to separately train any single creature across enough different domains that it had evolved to solve every kind of domain-specific problem involved in solving nuclear physics and chemistry and metallurgy and thermics in order to build the first nuclear plant in advance of any old nuclear plants existing.\nHumans are general enough that the same braintech selected just for chipping flint handaxes and making water-pouches and outwitting other humans, happened to be general enough that it could scale up to solving all the problems of building a nuclear plant – albeit with some added cognitive tech that didn't require new brainware, and so could happen incredibly fast relative to the generation times for evolutionarily optimized brainware.\nNow, since neither humans nor chimps were optimized to be \"useful\" (general), and humans just wandered into a sufficiently general part of the space that it cascaded up to wider generality, we should legit expect the curve of generality to look at least somewhat different if we're optimizing for that.\nEg, right now people are trying to optimize for generality with AIs like Mu Zero and GPT-3.\nIn both cases we have a weirdly shallow kind of generality. Neither is as smart or as deeply general as a chimp, but they are respectively better than chimps at a wide variety of Atari games, or a wide variety of problems that can be superposed onto generating typical human text.\nThey are, in a sense, more general than a biological organism at a similar stage of cognitive evolution, with much less complex and architected brains, in virtue of having been trained, not just on wider datasets, but on bigger datasets using gradient-descent memorization of shallower patterns, so they can cover those wide domains while being stupider and lacking some deep aspects of architecture.\nIt is not clear to me that we can go from observations like this, to conclude that there is a dominant mainline probability for how the future clearly ought to go and that this dominant mainline is, \"Well, before you get human-level depth and generalization of general intelligence, you get something with 95% depth that covers 80% of the domains for 10% of the pragmatic impact\".\n…or whatever the concept is here, because this whole conversation is, on my own worldview, being conducted in a shallow way relative to the kind of analysis I did in Intelligence Explosion Microeconomics, where I was like, \"here is the historical observation, here is what I think it tells us that puts a lower bound on this input-output curve\".\n\nSo I don't think the example of evolution tells us much about whether the continuous change story applies to intelligence. This case is potentially missing the key element that drives the continuous change story—optimization for performance. Evolution changes continuously on the narrow metric it is optimizing, but can change extremely rapidly on other metrics. For human technology, features of the technology that aren't being optimized change rapidly all the time. 
When humans build AI, they will be optimizing for usefulness, and so progress in usefulness is much more likely to be linear.\nPut another way: the difference between chimps and humans stands in stark contrast to the normal pattern of human technological development. We might therefore infer that intelligence is very unlike other technologies. But the difference between evolution's optimization and our optimization seems like a much more parsimonious explanation. To be a little bit more precise and Bayesian: the prior probability of the story I've told upper bounds the possible update about the nature of intelligence.\n\nIf you look closely at this, it's not saying, \"Well, I know why there was this huge leap in performance in human intelligence being optimized for other things, and it's an investment-output curve that's composed of these curves, which look like this, and if you rearrange these curves for the case of humans building AGI, they would look like this instead.\" Unfair demand for rigor? But that is the kind of argument I was making in Intelligence Explosion Microeconomics!\nThere's an argument from ignorance at the core of all this. It says, \"Well, this happened when evolution was doing X. But here Y will be happening instead. So maybe things will go differently! And maybe the relation between AI tech level over time and real-world impact on GDP will look like the relation between tech investment over time and raw tech metrics over time in industries where that's a smooth graph! Because the discontinuity for chimps and humans was because evolution wasn't investing in real-world impact, but humans will be investing directly in that, so the relationship could be smooth, because smooth things are default, and the history is different so not applicable, and who knows what's inside that black box so my default intuition applies which says smoothness.\"\nBut we do know more than this.\nWe know, for example, that evolution being able to stumble across humans, implies that you can add a small design enhancement to something optimized across the chimpanzee domains, and end up with something that generalizes much more widely.\nIt says that there's stuff in the underlying algorithmic space, in the design space, where you move a bump and get a lump of capability out the other side.\nIt's a remarkable fact about gradient descent that it can memorize a certain set of shallower patterns at much higher rates, at much higher bandwidth, than evolution lays down genes – something shallower than biological memory, shallower than genes, but distributing across computer cores and thereby able to process larger datasets than biological organisms, even if it only learns shallow things.\nThis has provided an alternate avenue toward some cognitive domains.\nBut that doesn't mean that the deep stuff isn't there, and can't be run across, or that it will never be run across in the history of AI before shallow non-widely-generalizing stuff is able to make its way through the regulatory processes and have a huge impact on GDP.\nThere are in fact ways to eat whole swaths of domains at once.\nThe history of hominid evolution tells us this or very strongly hints it, even though evolution wasn't explicitly optimizing for GDP impact.\nNatural selection moves by adding genes, and not too many of them.\nIf so many domains got added at once to humans, relative to chimps, there must be a way to do that, more or less, by adding not too many genes onto a chimp, who in turn contains only genes that did well on 
chimp-stuff.\nYou can imagine that AI technology never runs across any core that generalizes this well, until GDP has had a chance to double over 4 years because shallow stuff that generalized less well has somehow had a chance to make its way through the whole economy and get adopted that widely despite all real-world regulatory barriers and reluctances, but your imagining that does not make it so.\nThere's the potential in design space to pull off things as wide as humans.\nThe path that evolution took there doesn't lead through things that generalized 95% as well as humans first for 10% of the impact, not because evolution wasn't optimizing for that, but because that's not how the underlying cognitive technology worked.\nThere may be different cognitive technology that could follow a path like that. Gradient descent follows a path a bit relatively more in that direction along that axis – providing that you deal in systems that are giant layer cakes of transformers and that's your whole input-output relationship; matters are different if we're talking about Mu Zero instead of GPT-3.\nBut this whole document is presenting the case of \"ah yes, well, by default, of course, we intuitively expect gargantuan impacts to be presaged by enormous impacts, and sure humans and chimps weren't like our intuition, but that's all invalid because circumstances were different, so we go back to that intuition as a strong default\" and actually it's postulating, like, a specific input-output curve that isn't the input-output curve we know about. It's asking for a specific miracle. It's saying, \"What if AI technology goes just like this, in the future?\" and hiding that under a cover of \"Well, of course that's the default, it's such a strong default that we should start from there as a point of departure, consider the arguments in Intelligence Explosion Microeconomics, find ways that they might not be true because evolution is different, dismiss them, and go back to our point of departure.\"\nAnd evolution is different but that doesn't mean that the path AI takes is going to yield this specific behavior, especially when AI would need, in some sense, to miss the core that generalizes very widely, or rather, have run across noncore things that generalize widely enough to have this much economic impact before it runs across the core that generalizes widely.\nAnd you may say, \"Well, but I don't care that much about GDP, I care about pivotal acts.\"\nBut then I want to call your attention to the fact that this document was written about GDP, despite all the extra burdensome assumptions involved in supposing that intermediate AI advancements could break through all barriers to truly massive-scale adoption and end up reflected in GDP, and then proceed to double the world economy over 4 years during which not enough further AI advancement occurred to find a widely generalizing thing like humans have and end the world. This is indicative of a basic problem in this whole way of thinking that wanted smooth impacts over smoothly changing time. 
You should not be saying, \"Oh, well, leave the GDP part out then,\" you should be doubting the whole way of thinking.\n\nTo be a little bit more precise and Bayesian: the prior probability of the story I've told upper bounds the possible update about the nature of intelligence.\n\nPrior probabilities of specifically-reality-constraining theories that excuse away the few contradictory datapoints we have, often aren't that great; and when we start to stake our whole imaginations of the future on them, we depart from the mainline into our more comfortable private fantasy worlds.\n\n\n \nAGI will be a side-effect\n\n[Yudkowsky][19:29]\n\nSummary of my response: I expect people to see AGI coming and to invest heavily.\n\nThis section is arguing from within its own weird paradigm, and its subject matter mostly causes me to shrug; I never expected AGI to be a side-effect, except in the obvious sense that lots of tributary tech will be developed while optimizing for other things. The world will be ended by an explicitly AGI project because I do expect that it is rather easier to build an AGI on purpose than by accident.\n(I furthermore rather expect that it will be a research project and a prototype, because the great gap between prototypes and commercializable technology will ensure that prototypes are much more advanced than whatever is currently commercializable. They will have eyes out for commercial applications, and whatever breakthrough they made will seem like it has obvious commercial applications, at the time when all hell starts to break loose. (After all hell starts to break loose, things get less well defined in my social models, and also choppier for a time in my AI models – the turbulence only starts to clear up once you start to rise out of the atmosphere.))\n\n\n \nFinding the secret sauce\n\n[Yudkowsky][19:40]\n\nSummary of my response: this doesn't seem common historically, and I don't see why we'd expect AGI to be more rather than less like this (unless we accept one of the other arguments)\n[…]\nTo the extent that fast takeoff proponent's views are informed by historical example, I would love to get some canonical examples that they think best exemplify this pattern so that we can have a more concrete discussion about those examples and what they suggest about AI.\n\n…humans and chimps?\n…fission weapons?\n…AlphaGo?\n…the Wright Brothers focusing on stability and building a wind tunnel?\n…AlphaFold 2 coming out of Deepmind and shocking the heck out of everyone in the field of protein folding with performance far better than they expected even after the previous shock of AlphaFold, by combining many pieces that I suppose you could find precedents for scattered around the AI field, but with those many secret sauces all combined in one place by the meta-secret-sauce of \"Deepmind alone actually knows how to combine that stuff and build things that complicated without a prior example\"?\n…humans and chimps again because this is really actually a quite important example because of what it tells us about what kind of possibilities exist in the underlying design space of cognitive systems?\n\nHistorical AI applications have had a relatively small loading on key-insights and seem like the closest analogies to AGI.\n\n…Transformers as the key to text prediction?\nThe case of humans and chimps, even if evolution didn't do it on purpose, is telling us something about underlying mechanics.\nThe reason the jump to lightspeed didn't look like evolution slowly developing a range of 
intelligent species competing to exploit an ecological niche 5% better, or like the way that a stable non-Silicon-Valley manufacturing industry looks like a group of competitors summing up a lot of incremental tech enhancements to produce something with 10% higher scores on a benchmark every year, is that developing intelligence is a case where a relatively narrow technology by biological standards just happened to do a huge amount of stuff without that requiring developing whole new fleets of other biological capabilities.\nSo it looked like building a Wright Flyer that flies or a nuclear pile that reaches criticality, instead of looking like being in a stable manufacturing industry where a lot of little innovations sum to 10% better benchmark performance every year.\nSo, therefore, there is stuff in the design space that does that. It is possible to build humans.\nMaybe you can build things other than humans first, maybe they hang around for a few years. If you count GPT-3 as \"things other than human\", that clock has already started for all the good it does. But humans don't get any less possible.\nFrom my perspective, this whole document feels like one very long filibuster of \"Smooth outputs are default. Smooth outputs are default. Pay no attention to this case of non-smooth output. Pay no attention to this other case either. All the non-smooth outputs are not in the right reference class. (Highly competitive manufacturing industries with lots of competitors are totally in the right reference class though. I'm not going to make that case explicitly because then you might think of how it might be wrong, I'm just going to let that implicit thought percolate at the back of your mind.) If we just talk a lot about smooth outputs and list ways that nonsmooth output producers aren't necessarily the same and arguments for nonsmooth outputs could fail, we get to go back to the intuition of smooth outputs. (We're not even going to discuss particular smooth outputs as cases in point, because then you might see how those cases might not apply. It's just the default. Not because we say so out loud, but because we talk a lot like that's the conclusion you're supposed to arrive at after reading.)\"\nI deny the implicit meta-level assertion of this entire essay which would implicitly have you accept as valid reasoning the argument structure, \"Ah, yes, given the way this essay is written, we must totally have pretty strong prior reasons to believe in smooth outputs – just implicitly think of some smooth outputs, that's a reference class, now you have strong reason to believe that AGI output is smooth – we're not even going to argue this prior, just talk like it's there – now let us consider the arguments against smooth outputs – pretty weak, aren't they? we can totally imagine ways they could be wrong? we can totally argue reasons these cases don't apply? So at the end we go back to our strong default of smooth outputs. This essay is written with that conclusion, so that must be where the arguments lead.\"\nMe: \"Okay, so what if somebody puts together the pieces required for general intelligence and it scales pretty well with added GPUs and FOOMS? Say, for the human case, that's some perceptual systems with imaginative control, a concept library, episodic memory, realtime procedural skill memory, which is all in chimps, and then we add some reflection to that, and get a human. 
Only, unlike with humans, once you have a working brain you can make a working brain 100X that large by adding 100X as many GPUs, and it can run some thoughts 10000X as fast. And that is substantially more effective brainpower than was being originally devoted to putting its design together, as it turns out. So it can make a substantially smarter AGI. For concreteness's sake. Reality has been trending well to the Eliezer side of Eliezer, on the Eliezer-Hanson axis, so perhaps you can do it more simply than that.\"\nSimplicio: \"Ah, but what if, 5 years before then, somebody puts together some other AI which doesn't work like a human, and generalizes widely enough to have a big economic impact, but not widely enough to improve itself or generalize to AI tech or generalize to everything and end the world, and in 1 year it gets all the mass adoptions required to do whole bunches of stuff out in the real world that current regulations require to be done in various exact ways regardless of technology, and then in the next 4 years it doubles the world economy?\"\nMe: \"Like… what kind of AI, exactly, and why didn't anybody manage to put together a full human-level thingy during those 5 years? Why are we even bothering to think about this whole weirdly specific scenario in the first place?\"\nSimplicio: \"Because if you can put together something that has an enormous impact, you should be able to put together most of the pieces inside it and have a huge impact! Most technologies are like this. I've considered some things that are not like this and concluded they don't apply.\"\nMe: \"Especially if we are talking about impact on GDP, it seems to me that most explicit and implicit 'technologies' are not like this at all, actually. There wasn't a cryptocurrency developed a year before Bitcoin using 95% of the ideas which did 10% of the transaction volume, let alone a preatomic bomb. But, like, can you give me any concrete visualization of how this could play out?\"\nAnd there is no concrete visualization of how this could play out. Anything I'd have Simplicio say in reply would be unrealistic because there is no concrete visualization they give us. It is not a coincidence that I often use concrete language and concrete examples, and this whole field of argument does not use concrete language or offer concrete examples.\nThough if we're sketching scifi scenarios, I suppose one could imagine a group that develops sufficiently advanced GPT-tech and deploys it on Twitter in order to persuade voters and politicians in a few developed countries to institute open borders, along with political systems that can handle open borders, and to permit housing construction, thereby doubling world GDP over 4 years. And since it was possible to use relatively crude AI tech to double world GDP this way, it legitimately takes the whole 4 years after that to develop real AGI that ends the world. FINE. SO WHAT. EVERYONE STILL DIES.\n\n\n \nUniversality thresholds\n\n[Yudkowsky][20:21]\n\nIt's easy to imagine a weak AI as some kind of handicapped human, with the handicap shrinking over time. Once the handicap goes to 0 we know that the AI will be above the universality threshold. Right now it's below the universality threshold. So there must be sometime in between where it crosses the universality threshold, and that's where the fast takeoff is predicted to occur.\nBut AI isn't like a handicapped human. Instead, the designers of early AI systems will be trying to make them as useful as possible. 
So if universality is incredibly helpful, it will appear as early as possible in AI designs; designers will make tradeoffs to get universality at the expense of other desiderata (like cost or speed).\nSo now we're almost back to the previous point: is there some secret sauce that gets you to universality, without which you can't get universality however you try? I think this is unlikely for the reasons given in the previous section.\n\nWe know, because humans, that there is humanly-widely-applicable general-intelligence tech.\nWhat this section wants to establish, I think, or needs to establish to carry the argument, is that there is some intelligence tech that is wide enough to double the world economy in 4 years, but not world-endingly scalably wide, which becomes a possible AI tech 4 years before any general-intelligence-tech that will, if you put in enough compute, scale to the ability to do a sufficiently large amount of wide thought to FOOM (or build nanomachines, but if you can build nanomachines you can very likely FOOM from there too if not corrigible).\nWhat it says instead is, \"I think we'll get universality much earlier on the equivalent of the biological timeline that has humans and chimps, so the resulting things will be weaker than humans at the point where they first become universal in that sense.\"\nThis is very plausibly true.\nIt doesn't mean that when this exciting result gets 100 times more compute dumped on the project, it takes at least 5 years to get anywhere really interesting from there (while also taking only 1 year to get somewhere sorta-interesting enough that the instantaneous adoption of it will double the world economy over the next 4 years).\nIt also isn't necessarily rather than plausibly true. For example, the thing that becomes universal, could also have massive gradient descent shallow powers that are far beyond what primates had at the same age.\nPrimates weren't already writing code as well as Codex when they started doing deep thinking. They couldn't do precise floating-point arithmetic. Their fastest serial rates of thought were a hell of a lot slower. They had no access to their own code or to their own memory contents etc. etc. etc.\nBut mostly I just want to call your attention to the immense gap between what this section needs to establish, and what it actually says and argues for.\nWhat it actually argues for is a sort of local technological point: at the moment when generality first arrives, it will be with a brain that is less sophisticated than chimp brains were when they turned human.\nIt implicitly jumps all the way from there, across a whole lot of elided steps, to the implicit conclusion that this tech or elaborations of it will have smooth output behavior such that at some point the resulting impact is big enough to double the world economy in 4 years, without any further improvements ending the world economy before 4 years.\nThe underlying argument about how the AI tech might work is plausible. Chimps are insanely complicated. 
I mostly expect we will have AGI long before anybody is even trying to build anything that complicated.
The very next step of the argument, about capabilities, is already very questionable, because this system could be using immense gradient descent capabilities to master domains for which large datasets are available. Hominids did not begin with instinctive great shallow mastery of every domain for which a large dataset could be made available; that is why hominids don't start out playing superhuman Go as soon as somebody tells them the rules and they do one day of self-play. But that is exactly the sort of capability that somebody could hook up to a nascent AGI (albeit we could optimistically and fondly and falsely imagine that somebody deliberately didn't floor the gas pedal as far as possible).
Could we have huge impacts out of some subuniversal shallow system that was hooked up to capabilities like this? Maybe, though this is not the argument made by the essay. It would be a specific outcome that isn't forced by anything in particular, but I can't say it's ruled out. Mostly my twin reactions to this are, "If the AI tech is that dumb, how are all the bureaucratic constraints that actually rate-limit economic progress getting bypassed?" and "Okay, but ultimately, so what and who cares, how does this modify that we all die?"

There is another reason I'm skeptical about hard takeoff from universality secret sauce: I think we already could make universal AIs if we tried (that would, given enough time, learn on their own and converge to arbitrarily high capability levels), and the reason we don't is because it's just not important to performance and the resulting systems would be really slow. This inside view argument is too complicated to make here and I don't think my case rests on it, but it is relevant to understanding my view.

I have no idea why this argument is being made or where it's heading. I cannot pass the ITT (Ideological Turing Test) of the author. I don't know what the author thinks this has to do with constraining takeoffs to be slow instead of fast. At best I can conjecture that the author thinks that "hard takeoff" is supposed to derive from "universality" being very sudden and hard to access and late in the game, so if you can argue that universality could be accessed right now, you have defeated the argument for hard takeoff.

"Understanding" is discontinuous

[Yudkowsky][20:41]

Summary of my response: I don't yet understand this argument and am unsure if there is anything here.
It may be that understanding of the world tends to click, from "not understanding much" to "understanding basically everything." You might expect this because everything is entangled with everything else.

No, the idea is that a core of overlapping somethingness, trained to handle chipping handaxes and outwitting other monkeys, will generalize to building spaceships; so evolutionary selection on understanding a bunch of stuff eventually ran across general stuff-understanders that understood a bunch more stuff.
Gradient descent may be genuinely different from this, but we shouldn't confuse imagination with knowledge when it comes to extrapolating that difference onward.
At present, gradient descent does mass memorization of overlapping shallow patterns, which then combine to yield a weird pseudo-intelligence over domains for which we can deploy massive datasets, without yet generalizing much outside those domains.
We can hypothesize that there is some next step up to some weird thing that is intermediate in generality between gradient descent and humans, but we have not seen it yet, and we should not confuse imagination for knowledge.
If such a thing did exist, it would not necessarily be at the right level of generality to double the world economy in 4 years, without being able to build a better AGI.
If it were at that level of generality, it's nowhere written that no other company will develop a better prototype at a deeper level of generality over those 4 years.
I will also remark that you sure could look at the step from GPT-2 to GPT-3 and say, "Wow, look at the way a whole bunch of stuff just seemed to simultaneously click for GPT-3."

Deployment lag

[Yudkowsky][20:49]

Summary of my response: current AI is slow to deploy and powerful AI will be fast to deploy, but in between there will be AI that takes an intermediate length of time to deploy.

An awful lot of my model of deployment lag is adoption lag and regulatory lag and bureaucratic sclerosis across companies and countries.
If doubling GDP is such a big deal, go open borders and build houses. Oh, that's illegal? Well, so will be AIs building houses!
AI tech that does flawless translation could plausibly come years before AGI, but that doesn't mean all the barriers to international trade and international labor movement and corporate hiring across borders all come down, because those barriers are not all translation barriers.
There's then a discontinuous jump at the point where everybody falls over dead and the AI goes off to do its own thing without FDA approval. This jump is precedented by earlier pre-FOOM prototypes being able to do pre-FOOM cool stuff, maybe, but not necessarily precedented by mass-market adoption of anything major enough to double world GDP.

Recursive self-improvement

[Yudkowsky][20:54]

Summary of my response: Before there is AI that is great at self-improvement there will be AI that is mediocre at self-improvement.

Oh, come on. That is straight-up not how simple continuous toy models of RSI work. Between a neutron multiplication factor of 0.999 and 1.001 there is a very huge gap in output behavior.
Outside of toy models: Over the last 10,000 years we had humans going from mediocre at improving their mental systems to being (barely) able to throw together AI systems, but 10,000 years is the equivalent of an eyeblink in evolutionary time – outside the metaphor, this says, "A month before there is AI that is great at self-improvement, there will be AI that is mediocre at self-improvement."
(Or possibly an hour before, if reality is again more extreme along the Eliezer-Hanson axis than Eliezer. But it makes little difference whether it's an hour or a month, given anything like current setups.)
This is just pumping hard again on the intuition that says incremental design changes yield smooth output changes, which (the meta-level of the essay informs us wordlessly) is such a strong default that we are entitled to believe it if we can do a good job of weakening the evidence and arguments against it.
And the argument is: Before there are systems great at self-improvement, there will be systems mediocre at self-improvement; implicitly: "before" implies "5 years before" not "5 days before"; implicitly: this will correspond to smooth changes in output between the two regimes even though that is not how continuous feedback loops work.
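A minimal sketch of the kind of continuous toy model being gestured at here, with made-up numbers (the function name, gain values, and generation count below are illustrative assumptions, not anything from the discussion): each round of self-improvement multiplies capability by a gain factor k, by analogy to a neutron multiplication factor. The gain is a perfectly smooth knob, yet the long-run output on either side of k = 1 looks qualitatively different.

    # Toy feedback-loop model, by analogy to a neutron multiplication factor.
    # Each "generation" multiplies capability by a gain k; k varies continuously,
    # but the long-run output behavior on either side of k = 1 is qualitatively
    # different.  Illustrative numbers only.

    def cumulative_output(k: float, generations: int = 10_000, seed: float = 1.0) -> float:
        """Total output summed over many generations of a feedback loop with gain k."""
        capability, total = seed, 0.0
        for _ in range(generations):
            total += capability
            capability *= k
        return total

    for k in (0.999, 1.000, 1.001):
        print(f"k = {k:.3f}  ->  cumulative output ~ {cumulative_output(k):.3e}")

    # k = 0.999 levels off near a finite ceiling (about 1,000x the seed);
    # k = 1.000 grows only linearly in the number of generations (about 10,000x);
    # k = 1.001 keeps compounding (about 2.2e7x here, and still climbing).
    # A tiny, continuous change in the underlying parameter produces a
    # qualitative change in what you observe from outside.

Nothing in this sketch settles whether the analogous gain for real AI systems crosses 1 gradually or abruptly; it only illustrates why "mediocre self-improvement comes before great self-improvement" does not, by itself, buy you smooth output behavior.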
Train vs. test

[Yudkowsky][21:12]

Summary of my response: before you can train a really powerful AI, someone else can train a slightly worse AI.

Yeah, and before you can evolve a human, you can evolve a Homo erectus, which is a slightly worse human.

If you are able to raise $X to train an AGI that could take over the world, then it was almost certainly worth it for someone 6 months ago to raise $X/2 to train an AGI that could merely radically transform the world, since they would then get 6 months of absurd profits.

I suppose this sentence makes a kind of sense if you assume away alignability and suppose that the previous paragraphs have refuted the notion of FOOMs, self-improvement, and thresholds between compounding returns and non-compounding returns (e.g., in the human case, cognitive innovations like "written language" or "science"). If you suppose the previous sections refuted those things, then clearly, if you raised an AGI that you had aligned to "take over the world", it got that way through cognitive powers that weren't the result of FOOMing or other self-improvements, weren't the result of crossing a threshold from non-compounding to compounding returns, and weren't the result of its understanding crossing a threshold of universality via chunky universal machinery such as humans gained over chimps. So, implicitly, it must have been the kind of thing that you could learn by gradient descent, and do a half or a tenth as much of by doing half as much gradient descent, in order to build nanomachines a tenth as well-designed that could bypass a tenth as much bureaucracy.
If there are no unsmooth parts of the tech curve, the cognition curve, or the environment curve, then you should be able to make a bunch of wealth using a more primitive version of any technology that could take over the world.
And when we look back at history, why, that may be totally true! They may have deployed universal superhuman translator technology for 6 months, which won't double world GDP, but which a lot of people would pay for, and made a lot of money! Because even though there's no company that built 90% of Amazon's website and has 10% the market cap, when you zoom back out to look at whole industries like AI and a technological capstone like AGI, why, those whole industries do sometimes make some money along the way to the technological capstone, if they can find a niche that isn't too regulated! Which translation currently isn't! So maybe somebody used precursor tech to build a superhuman translator and deploy it 6 months earlier and made a bunch of money for 6 months. SO WHAT.
EVERYONE STILL DIES.
As for "radically transforming the world" instead of "taking it over", I think that's just re-restated FOOM denialism. Doing either of those things quickly against human bureaucratic resistance strikes me as requiring cognitive power levels dangerous enough that failure to align them on corrigibility would result in FOOMs.
Like, if you can do either of those things on purpose, you are doing it by operating in the regime where running the AI with higher bounds on the for loop will FOOM it, but you have politely asked it not to FOOM, please.
If the people doing this have any sense whatsoever, they will refrain from merely massively transforming the world until they are ready to do something that prevents the world from ending.
And if the gap from "massively transforming the world, briefly before it ends" to "preventing the world from ending, lastingly" takes much longer than 6 months to cross, or if other people have the same technologies that scale to "massive transformation", somebody else will build an AI that fooms all the way.

Likewise, if your AGI would give you a decisive strategic advantage, they could have spent less earlier in order to get a pretty large military advantage, which they could then use to take your stuff.

Again, this presupposes some weird model where everyone has easy alignment at the furthest frontiers of capability; everybody has the aligned version of the most rawly powerful AGI they can possibly build; and nobody in the future has the kind of tech advantage that Deepmind currently has; so before you can amp your AGI to the raw power level where it could take over the whole world by using the limit of its mental capacities to military ends – alignment of this being a trivial operation to be assumed away – some other party took their easily-aligned AGI that was less powerful at the limits of its operation, and used it to get 90% as much military power… is the implicit picture here?
Whereas the picture I'm drawing is that the AGI that kills you via "decisive strategic advantage" is the one that foomed and got nanotech, and no, the AI tech from 6 months earlier did not do 95% of a foom and get 95% of the nanotech.

Discontinuities at 100% automation

[Yudkowsky][21:31]

Summary of my response: at the point where humans are completely removed from a process, they will have been modestly improving output rather than acting as a sharp bottleneck that is suddenly removed.

Not very relevant to my whole worldview in the first place; also not a very good description of how horses got removed by automobiles, or how humans got removed from playing Go.

The weight of evidence

[Yudkowsky][21:31]

We've discussed a lot of possible arguments for fast takeoff. Superficially it would be reasonable to believe that no individual argument makes fast takeoff look likely, but that in the aggregate they are convincing.
However, I think each of these factors is perfectly consistent with the continuous change story and continuously accelerating hyperbolic growth, and so none of them undermine that hypothesis at all.

Uh huh.
And how about if we have a mirror-universe essay which over and over again treats fast takeoff as the default to be assumed, and painstakingly shows how a bunch of particular arguments for slow takeoff might not be true?
This entire essay seems to me like it's drawn from the same hostile universe that produced Robin Hanson's side of the Yudkowsky-Hanson Foom Debate.
Like, all these abstract arguments devoid of concrete illustrations and "it need not necessarily be like…" and "now that I've shown it's not necessarily like X, well, on the meta-level, I have implicitly told you that you now ought to believe Y".
It just seems very clear to me that the sort of person who is taken in by this essay is the same sort of person who gets taken in by Hanson's arguments in 2008 and gets caught flatfooted by AlphaGo and GPT-3 and AlphaFold 2.
And empirically, it has already been shown to me that I do not have the power to break people out of the hypnosis of nodding along with Hansonian arguments, even by writing much longer essays than this.
Hanson's fond dreams – of domain specificity, of smooth progress for stuff like Go, of somebody else of course having a precursor 90% as good as AlphaFold 2 before Deepmind builds it, of GPT-3 levels of generality just not being a thing – now stand refuted.
Despite the fact that they're largely being exhibited again in this essay.
And people are still nodding along.
Reality just… doesn't work like this on some deep level.
It doesn't play out the way that people imagine it would play out when they're imagining a certain kind of reassuring abstraction that leads to a smooth world. Reality is less fond of that kind of argument than a certain kind of EA is fond of that argument.
There is a set of intuitive generalizations from experience which rules that out, which I do not know how to convey. There is an understanding of the rules of argument which leads you to roll your eyes at Hansonian arguments and all their locally invalid leaps and snuck-in defaults, instead of nodding along sagely at their wise humility and outside viewing and then going "Huh?" when AlphaGo or GPT-3 debuts. But this, I empirically do not seem to know how to convey to people, in advance of the inevitable and predictable contradiction by a reality which is not as fond of Hansonian dynamics as Hanson. The arguments sound convincing to them.
(Hanson himself has still not gone "Huh?" at the reality, though some of his audience did; perhaps because his abstractions are loftier than his audience's? – because some of his audience, reading along to Hanson, probably implicitly imagined a concrete world in which GPT-3 was not allowed; but maybe Hanson himself is more abstract than this, and didn't imagine anything so merely concrete?)
If I don't respond to essays like this, people find them comforting and nod along. If I do respond, my words are less comforting and more concrete and easier to imagine concrete objections to, less like a long chain of abstractions that sound like the very abstract words in research papers and hence implicitly convincing because they sound like other things you were supposed to believe.
And then there is another essay in 3 months. There is an infinite well of them.
I would have to teach people to stop drinking from the well, instead of trying to whack them on the back until they cough up the drinks one by one – or, actually, whacking them on the back and then having them cough nothing up until reality contradicts them, at which point a third of them notice that and cough something up, and then they don't learn the general lesson and go back to the well and drink again. And I don't know how to teach people to stop drinking from the well. I tried to teach that. I failed. If I wrote another Sequence, I have no reason to believe that Sequence would work.
So what EAs will believe at the end of the world will look like whatever the content was of the latest bucket from the well of infinite slow-takeoff arguments that hasn't yet been blatantly-even-to-them refuted by all the sharp jagged rapidly-generalizing things that happened along the way to the world's end.
And I know, before anyone bothers to say, that all of this reply is not written in the calm way that is right and proper for such arguments. I am tired. I have lost a lot of hope. There are not obvious things I can do, let alone arguments I can make, which I expect to be actually useful in the sense that the world will not end once I do them. I don't have the energy left for calm arguments. What's left is despair that can be given voice.


 5.6. Yudkowsky/Christiano discussion: AI progress and crossover points
 

[Christiano][22:15]
To the extent that it was possible to make any predictions about 2015-2020 based on your views, I currently feel like they were much more wrong than right. I'm happy to discuss that. To the extent you are willing to make any bets about 2025, I expect they will be mostly wrong and I'd be happy to get bets on the record (most of all so that it will be more obvious in hindsight whether they are vindication for your view). Not sure if this is the place for that.
Could also make a separate channel to avoid clutter.

[Yudkowsky][22:16]
Possibly. I think that 2015-2020 played out to a much more Eliezerish side than Eliezer on the Eliezer-Hanson axis, which sure is a case of me being wrong. What bets do you think we'd disagree on for 2025? I expect you have mostly misestimated my views, but I'm always happy to hear about anything concrete.

[Christiano][22:20]
I think the big points are: (i) I think you are significantly overestimating how large a discontinuity/trend break AlphaZero is, (ii) your view seems to imply that we will move quickly from much worse than humans to much better than humans, but it's likely that we will move slowly through the human range on many tasks. I'm not sure if we can get a bet out of (ii), I think I don't understand your view that well but I don't see how it could make the same predictions as mine over the next 10 years.

[Yudkowsky][22:22]
What are your 10-year predictions?

[Christiano][22:23]
My basic expectation is that for any given domain AI systems will gradually increase in usefulness, we will see a crossing over point where their output is comparable to human output, and that from that time we can estimate how long until takeoff by estimating "how long does it take AI systems to get 'twice as impactful'?" which gives you a number like ~1 year rather than weeks.
At the crossing over point you get a somewhat rapid change in derivative, since you are looking at (x+y) where y is growing faster than x.
I feel like that should translate into different expectations about how impactful AI will be in any given domain—I don't see how to make the ultra-fast-takeoff view work if you think that AI output is increasing smoothly (since the rate of progress at the crossing-over point will be similar to the current rate of progress, unless R&D is scaling up much faster by then).
So like, I think we are going to have crappy coding assistants, and then slightly less crappy coding assistants, and so on. And they will be improving the speed of coding very significantly before the end times.
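To spell out the arithmetic behind this "crossing over" picture, here is a minimal sketch with made-up numbers (a 5%/year human contribution and an AI contribution that doubles yearly starting from 1% of the human level; none of these figures come from the discussion): total output is x + y, and its growth rate shifts, continuously but fairly quickly, from the human rate toward the AI rate around the point where y overtakes x.

    import math

    # Toy version of the "crossing over" picture: total output = human + AI,
    # where the human contribution grows slowly and the AI contribution has a
    # much shorter doubling time.  All numbers are illustrative assumptions.

    HUMAN_GROWTH = 0.05         # human contribution grows ~5%/year
    AI_DOUBLING_YEARS = 1.0     # AI contribution doubles every year
    AI_INITIAL_FRACTION = 0.01  # AI starts at 1% of the human contribution

    def combined_growth_rate(t: float) -> float:
        """Instantaneous growth rate of (human + AI) output at time t, in years."""
        human = math.exp(HUMAN_GROWTH * t)
        ai = AI_INITIAL_FRACTION * 2.0 ** (t / AI_DOUBLING_YEARS)
        ai_rate = math.log(2.0) / AI_DOUBLING_YEARS
        return (HUMAN_GROWTH * human + ai_rate * ai) / (human + ai)

    for year in range(0, 13, 2):
        print(f"year {year:2d}: combined growth ~ {combined_growth_rate(year):.1%}/year")

    # With these numbers the crossover lands around year 7.  Well before it,
    # the combined rate sits near the human 5%/year; a few years after it,
    # the rate approaches ln(2), i.e. ~69%/year.  The curve is continuous,
    # but the derivative changes fairly quickly around the point where the
    # faster-growing term starts to dominate.

Whether the real transition is spread over years, as in this toy picture, or happens far more abruptly is exactly the crux the two sides go on to argue.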
[Yudkowsky][22:25]
You think in a different language than I do. My more confident statements about AI tech are about what happens after it starts to rise out of the metaphorical atmosphere and the turbulence subsides. When you have minds as early on the cognitive tech tree as humans they sure can get up to some weird stuff, I mean, just look at humans. Now take an utterly alien version of that with its own draw from all the weirdness factors. It sure is going to be pretty weird.

[Christiano][22:26]
OK, but you keep saying stuff about how people with my dumb views would be "caught flat-footed" by historical developments. Surely to be able to say something like that you need to be making some kind of prediction?

[Yudkowsky][22:26]
Well, sure, now that Codex has suddenly popped into existence one day at a surprisingly high base level of tech, we should see various jumps in its capability over the years and some outside imitators. What do you think you predict differently about that than I do?

[Christiano][22:26]
Why do you think codex is a high base level of tech?
The models get better continuously as you scale them up, and the first tech demo is weak enough to be almost useless

[Yudkowsky][22:27]
I think the next-best coding assistant was, like, not useful.

[Christiano][22:27]
yes
and it is still not useful

[Yudkowsky][22:27]
Could be. Some people on HN seemed to think it was useful.
I haven't tried it myself.

[Christiano][22:27]
OK, I'm happy to take bets

[Yudkowsky][22:28]
I don't think the previous coding assistant would've been very good at coding an asteroid game, even if you tried a rigged demo at the same degree of rigging?

[Christiano][22:28]
it's unquestionably a radically better tech demo

[Yudkowsky][22:28]
Where by "previous" I mean "previously deployed" not "previous generations of prototypes inside OpenAI's lab".

[Christiano][22:28]
My basic story is that the model gets better and more useful with each doubling (or year of AI research) in a pretty smooth way. So the key underlying parameter for a discontinuity is how soon you build the first version—do you do that before or after it would be a really really big deal?
and the answer seems to be: you do it somewhat before it would be a really big deal
and then it gradually becomes a bigger and bigger deal as people improve it
maybe we are on the same page about getting gradually more and more useful? But I'm still just wondering where the foom comes from

[Yudkowsky][22:30]
So, like… before we get systems that can FOOM and build nanotech, we should get more primitive systems that can write asteroid games and solve protein folding?
Sounds legit.\nSo that happened, and now your model says that it's fine later on for us to get a FOOM, because we have the tech precursors and so your prophecy has been fulfilled?\n\n\n\n[Christiano][22:31]\nno\n\n\n\n\n[Yudkowsky][22:31]\nDidn't think so.\n\n\n\n[Christiano][22:31]\nI can't tell if you can't understand what I'm saying, or aren't trying, or do understand and are just saying kind of annoying stuff as a rhetorical flourish\nat some point you have an AI system that makes (humans+AI) 2x as good at further AI progress\n\n\n\n\n[Yudkowsky][22:32]\nI know that what I'm saying isn't your viewpoint. I don't know what your viewpoint is or what sort of concrete predictions it makes at all, let alone what such predictions you think are different from mine.\n\n\n\n[Christiano][22:32]\nmaybe by continuity you can grant the existence of such a system, even if you don't think it will ever exist?\nI want to (i) make the prediction that AI will actually have that impact at some point in time, (ii) talk about what happens before and after that\nI am talking about AI systems that become continuously more useful, because \"become continuously more useful\" is what makes me think that (i) AI will have that impact at some point in time, (ii) allows me to productively reason about what AI will look like before and after that. I expect that your view will say something about why AI improvements either aren't continuous, or why continuous improvements lead to discontinuous jumps in the productivity of the (human+AI) system\n\n\n\n\n[Yudkowsky][22:34]\n\nat some point you have an AI system that makes (humans+AI) 2x as good at further AI progress\n\nIs this prophecy fulfilled by using some narrow eld-AI algorithm to map out a TPU, and then humans using TPUs can write in 1 month a research paper that would otherwise have taken 2 months? And then we can go on to FOOM now that this prophecy about pre-FOOM states has been fulfilled? I know the answer is no, but I don't know what you think is a narrower condition on the prophecy than that.\n\n\n\n[Christiano][22:35]\nIf you can use narrow eld-AI in order to make every part of AI research 2x faster, so that the entire field moves 2x faster, then the prophecy is fulfilled\nand it may be just another 6 months until it makes all of AI research 2x faster again, and then 3 months, and then…\n\n\n\n\n[Yudkowsky][22:36]\nWhat, the entire field? Even writing research papers? Even the journal editors approving and publishing the papers? 
So if we speed up every part of research except the journal editors, the prophecy has not been fulfilled and no FOOM may take place?\n\n\n\n[Christiano][22:36]\nno, I mean the improvement in overall output, given the actual realistic level of bottlenecking that occurs in practice\n\n\n\n\n[Yudkowsky][22:37]\nSo if the realistic level of bottlenecking ever becomes dominated by a human gatekeeper, the prophecy is ever unfulfillable and no FOOM may ever occur.\n\n\n\n[Christiano][22:37]\nthat's what I mean by \"2x as good at further progress,\" the entire system is achieving twice as much\nthen the prophecy is unfulfillable and I will have been wrong\nI mean, I think it's very likely that there will be a hard takeoff, if people refuse or are unable to use AI to accelerate AI progress for reasons unrelated to AI capabilities, and then one day they become willing\n\n\n\n\n[Yudkowsky][22:38]\n…because on your view, the Prophecy necessarily goes through humans and AIs working together to speed up the whole collective field of AI?\n\n\n\n[Christiano][22:38]\nit's fine if the AI works alone\nthe point is just that it overtakes the humans at the point when it is roughly as fast as the humans\nwhy wouldn't it?\nwhy does it overtake the humans when it takes it 10 seconds to double in capability instead of 1 year?\nthat's like predicting that cultural evolution will be infinitely fast, instead of making the more obvious prediction that it will overtake evolution exactly when it's as fast as evolution\n\n\n\n\n[Yudkowsky][22:39]\nI live in a mental world full of weird prototypes that people are shepherding along to the world's end. I'm not even sure there's a short sentence in my native language that could translate the short Paul-sentence \"is roughly as fast as the humans\".\n\n\n\n[Christiano][22:40]\ndo you agree that you can measure the speed with which the community of human AI researchers develop and implement improvements in their AI systems?\nlike, we can look at how good AI systems are in 2021, and in 2022, and talk about the rate of progress?\n\n\n\n\n[Yudkowsky][22:40]\n…when exactly in hominid history was hominid intelligence exactly as fast as evolutionary optimization???\n\ndo you agree that you can measure the speed with which the community of human AI researchers develop and implement improvements in their AI systems?\n\nI mean… obviously not? How the hell would we measure real actual AI progress? What would even be the Y-axis on that graph?\nI have a rough intuitive feeling that it was going faster in 2015-2017 than 2018-2020.\n\"What was?\" says the stern skeptic, and I go \"I dunno.\"\n\n\n\n[Christiano][22:42]\nHere's a way of measuring progress you won't like: for almost all tasks, you can initially do them with lots of compute, and as technology improves you can do them with less compute. We can measure how fast the amount of compute required is going down.\n\n\n\n\n[Yudkowsky][22:43]\nYeah, that would be a cool thing to measure. 
It's not obviously a relevant thing to anything important, but it'd be cool to measure.

[Christiano][22:43]
Another way you won't like: we can hold fixed the resources we invest and look at the quality of outputs in any given domain (or even $ of revenue) and ask how fast it's changing.

[Yudkowsky][22:43]
I wonder what it would say about Go during the age of AlphaGo.
Or what that second metric would say.

[Christiano][22:43]
I think it would be completely fine, and you don't really understand what happened with deep learning in board games. Though I also don't know what happened in much detail, so this is more like a prediction than a retrodiction.
But it's enough of a retrodiction that I shouldn't get too much credit for it.

[Yudkowsky][22:44]
I don't know what result you would consider "completely fine". I didn't have any particular unfine result in mind.

[Christiano][22:45]
oh, sure
if it was just an honest question, happy to use it as a concrete case
I would measure the rate of progress in Go by looking at how fast Elo improves with time or increasing R&D spending

[Yudkowsky][22:45]
I mean, I don't have strong predictions about it so it's not yet obviously cruxy to me

[Christiano][22:46]
I'd roughly guess that would continue, and if there were multiple trendlines to extrapolate I'd estimate crossover points based on that

[Yudkowsky][22:47]
suppose this curve is smooth, and we see that sharp Go progress over time happened because Deepmind dumped in a ton of increased R&D spend. You then argue that this cannot happen with AGI because by the time we get there, people will be pushing hard at the frontiers in a competitive environment where everybody's already spending what they can afford, just like in a highly competitive manufacturing industry.

[Christiano][22:47]
the key input to making a prediction for AGZ in particular would be the precise form of the dependence on R&D spending, to try to predict the changes as you shift from a single programmer to a large team at DeepMind, but most reasonable functional forms would be roughly right
Yes, it's definitely a prediction of my view that it's easier to improve things that people haven't spent much money on than things people have spent a lot of money on. It's also a separate prediction of my view that people are going to be spending a boatload of money on all of the relevant technologies. Perhaps $1B/year right now and I'm imagining levels of investment large enough to be essentially bottlenecked on the availability of skilled labor.

[Bensinger][22:48]
( Previous Eliezer-comments about AlphaGo as a break in trend, responding briefly to Miles Brundage: https://twitter.com/ESRogs/status/1337869362678571008 )

5.7. Legal economic growth
 

[Yudkowsky][22:49]
Does your prediction change if all hell breaks loose in 2025 instead of 2055?

[Christiano][22:50]
I think my prediction was wrong if all hell breaks loose in 2025, if by "all hell breaks loose" you mean "dyson sphere" and not "things feel crazy"

[Yudkowsky][22:50]
Things feel crazy in the AI field and the world ends less than 4 years later, well before the world economy doubles.
Why was the Prophecy wrong if the world begins final descent in 2025?
The Prophecy requires the world to then last until 2029 while doubling its economic output, after which it is permitted to end, but does not obviously to me forbid the Prophecy to begin coming true in 2025 instead of 2055.

[Christiano][22:52]
yes, I just mean that some important underlying assumptions for the prophecy were violated, I wouldn't put much stock in it at that point, etc.

[Yudkowsky][22:53]
A lot of the issue I have with understanding any of your terminology in concrete Eliezer-language is that it looks to me like the premise-events of your Prophecy are fulfillable in all sorts of ways that don't imply the conclusion-events of the Prophecy.

[Christiano][22:53]
if "things feel crazy" happens 4 years before dyson sphere, then I think we have to be really careful about what crazy means

[Yudkowsky][22:54]
a lot of people looking around nervously and privately wondering if Eliezer was right, while public pravda continues to prohibit wondering any such thing out loud, so they all go on thinking that they must be wrong.

[Christiano][22:55]
OK, by "things get crazy" I mean like hundreds of billions of dollars of spending at google on automating AI R&D

[Yudkowsky][22:55]
I expect bureaucratic obstacles to prevent much GDP per se from resulting from this.

[Christiano][22:55]
massive scaleups in semiconductor manufacturing, bidding up prices of inputs crazily

[Yudkowsky][22:55]
I suppose that much spending could well increase world GDP by hundreds of billions of dollars per year.

[Christiano][22:56]
massive speculative rises in AI company valuations financing a significant fraction of GWP into AI R&D
(+hardware R&D, +building new clusters, +etc.)

[Yudkowsky][22:56]
like, higher than Tesla? higher than Bitcoin?
both of these things sure did skyrocket in market cap without that having much of an effect on housing stocks and steel production.

[Christiano][22:57]
right now I think hardware R&D is on the order of $100B/year, AI R&D is more like $10B/year, I guess I'm betting on something more like trillions? (limited from going higher because of accounting problems and not that much smart money)
I don't think steel production is going up at that point
plausibly going down since you are redirecting manufacturing capacity into making more computers. But probably just staying static while all of the new capacity is going into computers, since cannibalizing existing infrastructure is much more expensive
the original point was: you aren't pulling AlphaZero shit any more, you are competing with an industry that has invested trillions in cumulative R&D

[Yudkowsky][23:00]
is this in hopes of future profit, or because current profits are already in the trillions?

[Christiano][23:01]
largely in hopes of future profit / reinvested AI outputs (that have high market cap), but also revenues are probably in the trillions?

[Yudkowsky][23:02]
this all sure does sound "pretty darn prohibited" on my model, but I'd hope there'd be something earlier than that we could bet on. what does your Prophecy prohibit happening before that sub-prophesied day?

[Christiano][23:02]
To me your model just seems crazy, and you are saying it predicts crazy stuff at the end but no crazy stuff beforehand, so I don't know what's prohibited.
Mostly I feel like I'm making positive predictions, of gradually escalating value of AI in lots of different industries\nand rapidly increasing investment in AI\nI guess your model can be: those things happen, and then one day the AI explodes?\n\n\n\n\n[Yudkowsky][23:03]\nthe main way you get rapidly increasing investment in AI is if there's some way that AI can produce huge profits without that being effectively bureaucratically prohibited – eg this is where we get huge investments in burning electricity and wasting GPUs on Bitcoin mining.\n\n\n\n[Christiano][23:03]\nbut it seems like you should be predicting e.g. AI quickly jumping to superhuman in lots of domains, and some applications jumping from no value to massive value\nI don't understand what you mean by that sentence. Do you think we aren't seeing rapidly increasing investment in AI right now?\nor are you talking about increasing investment above some high threshold, or increasing investment at some rate significantly larger than the current rate?\nit seems to me like you can pretty seamlessly get up to a few $100B/year of revenue just by redirecting existing tech R&D\n\n\n\n\n[Yudkowsky][23:05]\nso I can imagine scenarios where some version of GPT-5 cloned outside OpenAI is able to talk hundreds of millions of mentally susceptible people into giving away lots of their income, and many regulatory regimes are unable to prohibit this effectively. then AI could be making a profit of trillions and then people would invest corresponding amounts in making new anime waifus trained in erotic hypnosis and findom.\nthis, to be clear, is not my mainline prediction.\nbut my sense is that our current economy is mostly not about the 1-day period to design new vaccines, it is about the multi-year period to be allowed to sell the vaccines.\nthe exceptions to this, like Bitcoin managing to say \"fuck off\" to the regulators for long enough, are where Bitcoin scales to a trillion dollars and gets massive amounts of electricity and GPU burned on it.\nso we can imagine something like this for AI, which earns a trillion dollars, and sparks a trillion-dollar competition.\nbut my sense is that your model does not work like this.\nmy sense is that your model is about general improvements across the whole economy.\n\n\n\n[Christiano][23:08]\nI think bitcoin is small even compared to current AI…\n\n\n\n\n[Yudkowsky][23:08]\nmy sense is that we've already built an economy which rejects improvement based on small amounts of cleverness, and only rewards amounts of cleverness large enough to bypass bureaucratic structures. it's not enough to figure out a version of e-gold that's 10% better. e-gold is already illegal. you have to figure out Bitcoin.\nwhat are you going to build? better airplanes? airplane costs are mainly regulatory costs. better medtech? mainly regulatory costs. better houses? building houses is illegal anyways.\nwhere is the room for the general AI revolution, short of the AI being literally revolutionary enough to overthrow governments?\n\n\n\n[Christiano][23:10]\nfactories, solar panels, robots, semiconductors, mining equipment, power lines, and \"factories\" just happens to be one word for a thousand different things\nI think it's reasonable to think some jurisdictions won't be willing to build things but it's kind of improbable as a prediction for the whole world. 
That's a possible source of shorter-term predictions?\nalso computers and the 100 other things that go in datacenters\n\n\n\n\n[Yudkowsky][23:12]\nThe whole developed world rejects open borders. The regulatory regimes all make the same mistakes with an almost perfect precision, the kind of coordination that human beings could never dream of when trying to coordinate on purpose.\nif the world lasts until 2035, I could perhaps see deepnets becoming as ubiquitous as computers were in… 1995? 2005? would that fulfill the terms of the Prophecy? I think it doesn't; I think your Prophecy requires that early AGI tech be that ubiquitous so that AGI tech will have trillions invested in it.\n\n\n\n[Christiano][23:13]\nwhat is AGI tech?\nthe point is that there aren't important drivers that you can easily improve a lot\n\n\n\n\n[Yudkowsky][23:14]\nfor purposes of the Prophecy, AGI tech is that which, scaled far enough, ends the world; this must have trillions invested in it, so that the trajectory up to it cannot look like pulling an AlphaGo. no?\n\n\n\n[Christiano][23:14]\nso it's relevant if you are imagining some piece of the technology which is helpful for general problem solving or something but somehow not helpful for all of the things people are doing with ML, to me that seems unlikely since it's all the same stuff\nsurely AGI tech should at least include the use of AI to automate AI R&D\nregardless of what you arbitrarily decree as \"ends the world if scaled up\"\n\n\n\n\n[Yudkowsky][23:15]\nonly if that's the path that leads to destroying the world?\nif it isn't on that path, who cares Prophecy-wise?\n\n\n\n[Christiano][23:15]\nalso I want to emphasize that \"pull an AlphaGo\" is what happens when you move from SOTA being set by an individual programmer to a large lab, you don't need to be investing trillions to avoid that\nand that the jump is still more like a few years\nbut the prophecy does involve trillions, and my view gets more like your view if people are jumping from $100B of R&D ever to $1T in a single year\n\n\n\n \n5.8. TPUs and GPUs, and automating AI R&D\n \n\n[Yudkowsky][23:17]\nI'm also wondering a little why the emphasis on \"trillions\". it seems to me that the terms of your Prophecy should be fulfillable by AGI tech being merely as ubiquitous as modern computers, so that many competing companies invest mere hundreds of billions in the equivalent of hardware plants. 
it is legitimately hard to get a chip with 50% better transistors ahead of TSMC.\n\n\n\n[Christiano][23:17]\nyes, if you are investing hundreds of billions then it is hard to pull ahead (though could still happen)\n(since the upside is so much larger here, no one cares that much about getting ahead of TSMC since the payoff is tiny in the scheme of the amounts we are discussing)\n\n\n\n\n[Yudkowsky][23:18]\nwhich, like, doesn't prevent Google from tossing out TPUs that are pretty significant jumps on GPUs, and if there's a specialized application of AGI-ish tech that is especially key, you can have everything behave smoothly and still get a jump that way.\n\n\n\n[Christiano][23:18]\nI think TPUs are basically the same as GPUs\nprobably a bit worse\n(but GPUs are sold at a 10x markup since that's the size of nvidia's lead)\n\n\n\n\n[Yudkowsky][23:19]\nnoted; I'm not enough of an expert to directly contradict that statement about TPUs from my own knowledge.\n\n\n\n[Christiano][23:19]\n(though I think TPUs are nevertheless leased at a slightly higher price than GPUs)\n\n\n\n\n[Yudkowsky][23:19]\nhow does Nvidia maintain that lead and 10x markup? that sounds like a pretty un-Paul-ish state of affairs given Bitcoin prices never mind AI investments.\n\n\n\n[Christiano][23:20]\nnvidia's lead isn't worth that much because historically they didn't sell many gpus\n(especially for non-gaming applications)\ntheir R&D investment is relatively large compared to the $ on the table\nmy guess is that their lead doesn't stick, as evidenced by e.g. Google very quickly catching up\n\n\n\n\n[Yudkowsky][23:21]\nparenthetically, does this mean – and I don't necessarily predict otherwise – that you predict a drop in Nvidia's stock and a drop in GPU prices in the next couple of years?\n\n\n\n[Christiano][23:21]\nnvidia's stock may do OK from riding general AI boom, but I do predict a relative fall in nvidia compared to other AI-exposed companies\n(though I also predicted google to more aggressively try to compete with nvidia for the ML market and think I was just wrong about that, though I don't really know any details of the area)\nI do expect the cost of compute to fall over the coming years as nvidia's markup gets eroded\nto be partially offset by increases in the cost of the underlying silicon (though that's still bad news for nvidia)\n\n\n\n\n[Yudkowsky][23:23]\nI parenthetically note that I think the Wise Reader should be justly impressed by predictions that come true about relative stock price changes, even if Eliezer has not explicitly contradicted those predictions before they come true. there are bets you can win without my having to bet against you.\n\n\n\n[Christiano][23:23]\nyou are welcome to counterpredict, but no saying in retrospect that reality proved you right if you don't \notherwise it's just me vs the market\n\n\n\n\n[Yudkowsky][23:24]\nI don't feel like I have a counterprediction here, but I think the Wise Reader should be impressed if you win vs. the market.\nhowever, this does require you to name in advance a few \"other AI-exposed companies\".\n\n\n\n[Christiano][23:25]\nNote that I made the same bet over the last year—I make a large AI bet but mostly moved my nvidia allocation to semiconductor companies. The semiconductor part of the portfolio is up 50% while nvidia is up 70%, so I lost that one. 
But that just means I like the bet even more next year.\nhappy to use nvidia vs tsmc\n\n\n\n\n[Yudkowsky][23:25]\nthere's a lot of noise in a 2-stock prediction.\n\n\n\n[Christiano][23:25]\nI mean, it's a 1-stock prediction about nvidia\n\n\n\n\n[Yudkowsky][23:26]\nbut your funeral or triumphal!\n\n\n\n[Christiano][23:26]\nindeed \nanyway\nI expect all of the $ amounts to be much bigger in the future\n\n\n\n\n[Yudkowsky][23:26]\nyeah, but using just TSMC for the opposition exposes you to I dunno Chinese invasion of Taiwan\n\n\n\n[Christiano][23:26]\nyes\nalso TSMC is not that AI-exposed\nI think the main prediction is: eventual move away from GPUs, nvidia can't maintain that markup\n\n\n\n\n[Yudkowsky][23:27]\n\"Nvidia can't maintain that markup\" sounds testable, but is less of a win against the market than predicting a relative stock price shift. (Over what timespan? Just the next year sounds quite fast for that kind of prediction.)\n\n\n\n[Christiano][23:27]\nregarding your original claim: if you think that it's plausible that AI will be doing all of the AI R&D, and that will be accelerating continuously from 12, 6, 3 month \"doubling times,\" but that we'll see a discontinuous change in the \"path to doom,\" then that would be harder to generate predictions about\nyes, it's hard to translate most predictions about the world into predictions about the stock market\n\n\n\n\n[Yudkowsky][23:28]\nthis again sounds like it's not written in Eliezer-language.\nwhat does it mean for \"AI will be doing all of the AI R&D\"? that sounds to me like something that happens after the end of the world, hence doesn't happen.\n\n\n\n[Christiano][23:29]\nthat's good, that's what I thought\n\n\n\n\n[Yudkowsky][23:29]\nI don't necessarily want to sound very definite about that in advance of understanding what it means\n\n\n\n[Christiano][23:29]\nI'm saying that I think AI will be automating AI R&D gradually, before the end of the world\nyeah, I agree that if you reject the construct of \"how fast the AI community makes progress\" then it's hard to talk about what it means to automate \"progress\"\nand that may be hard to make headway on\nthough for cases like AlphaGo (which started that whole digression) it seems easy enough to talk about elo gain per year\nmaybe the hard part is aggregating across tasks into a measure you actually care about?\n\n\n\n\n[Yudkowsky][23:30]\nup to a point, but yeah. (like, if we're taking Elo high above human levels and restricting our measurements to a very small range of frontier AIs, I quietly wonder if the measurement is still measuring quite the same thing with quite the same robustness.)\n\n\n\n[Christiano][23:31]\nI agree that elo measurement is extremely problematic in that regime\n\n\n\n \n5.9. Smooth exponentials vs. jumps in income\n \n\n[Yudkowsky][23:31]\nso in your worldview there's this big emphasis on things that must have been deployed and adopted widely to the point of already having huge impacts\nand in my worldview there's nothing very surprising about people with a weird powerful prototype that wasn't used to automate huge sections of AI R&D because the previous versions of the tech weren't useful for that or bigcorps didn't adopt it.\n\n\n\n[Christiano][23:32]\nI mean, Google is already 1% of the US economy and in this scenario it and its peers are more like 10-20%? So wide adoption doesn't have to mean that many people. 
Though I also do predict much wider adoption than you so happy to go there if it's happy for predictions.\nI don't really buy the \"weird powerful prototype\"\n\n\n\n\n[Yudkowsky][23:33]\nyes. I noticed.\nyou would seem, indeed, to be offering large quantities of it for short sale.\n\n\n\n[Christiano][23:33]\nand it feels like the thing you are talking about ought to have some precedent of some kind, of weird powerful prototypes that jump straight from \"does nothing\" to \"does something impactful\"\nlike if I predict that AI will be useful in a bunch of domains, and will get there by small steps, you should either predict that won't happen, or else also predict that there will be some domains with weird prototypes jumping to giant impact?\n\n\n\n\n[Yudkowsky][23:34]\nlike an electrical device that goes from \"not working at all\" to \"actually working\" as soon as you screw in the attachments for the electrical plug.\n\n\n\n[Christiano][23:34]\n(clearly takes more work to operationalize)\nI'm not sure I understand that sentence, hopefully it's clear enough why I expect those discontinuities?\n\n\n\n\n[Yudkowsky][23:34]\nthough, no, that's a facile bad analogy.\na better analogy would be an AI system that only starts working after somebody tells you about batch normalization or LAMB learning rate or whatever.\n\n\n\n[Christiano][23:36]\nsure, which I think will happen all the time for individual AI projects but not for sota\nbecause the projects at sota have picked the low hanging fruit, it's not easy to get giant wins\n\n\n\n\n[Yudkowsky][23:36]\n\nlike if I predict that AI will be useful in a bunch of domains, and will get there by small steps, you should either predict that won't happen, or else also predict that there will be some domains with weird prototypes jumping to giant impact?\n\nin the latter case, has this Eliezer-Prophecy already had its terms fulfilled by AlphaFold 2, or do you say nay because AlphaFold 2 hasn't doubled GDP?\n\n\n\n[Christiano][23:37]\n(you can also get giant wins by a new competitor coming up at a faster rate of progress, and then we have more dependence on whether people do it when it's a big leap forward or slightly worse than the predecessor, and I'm betting on the latter)\nI have no idea what AlphaFold 2 is good for, or the size of the community working on it, my guess would be that its value is pretty small\nwe can try to quantify\nlike, I get surprised when $X of R&D gets you something whose value is much larger than $X\nI'm not surprised at all if $X of R&D gets you <<$X, or even like 10*$X in a given case that was selected for working well\nhopefully it's clear enough why that's the kind of thing a naive person would predict\n\n\n\n\n[Yudkowsky][23:38]\nso a thing which Eliezer's Prophecy does not mandate per se, but sure does permit, and is on the mainline especially for nearer timelines, is that the world-ending prototype had no prior prototype containing 90% of the technology which earned a trillion dollars.\na lot of Paul's Prophecy seems to be about forbidding this.\nis that a fair way to describe your own Prophecy?\n\n\n\n[Christiano][23:39]\nI don't have a strong view about \"containing 90% of the technology\"\nthe main view is that whatever the \"world ending prototype\" does, there were earlier systems that could do practically the same thing\nif the world ending prototype does something that lets you go foom in a day, there was a system years earlier that could foom in a month, so that would have been the one to 
foom\n\n\n\n\n[Yudkowsky][23:41]\nbut, like, the world-ending thing, according to the Prophecy, must be squarely in the middle of a class of technologies which are in the midst of earning trillions of dollars and having trillions of dollars invested in them. it's not enough for the Worldender to be definitionally somewhere in that class, because then it could be on a weird outskirt of the class, and somebody could invest a billion dollars in that weird outskirt before anybody else had invested a hundred million, which is forbidden by the Prophecy. so the Worldender has got to be right in the middle, a plain and obvious example of the tech that's already earning trillions of dollars. …y/n?\n\n\n\n[Christiano][23:42]\nI agree with that as a prediction for some operationalization of \"a plain and obvious example,\" but I think we could make it more precise / it doesn't feel like it depends on the fuzziness of that\nI think that if the world can end out of nowhere like that, you should also be getting $100B/year products out of nowhere like that, but I guess you think not because of bureaucracy\nlike, to me it seems like our views stake out predictions about codex, where I'm predicting its value will be modest relative to R&D, and the value will basically improve from there with a nice experience curve, maybe something like ramping up quickly to some starting point <$10M/year and then doubling every year thereafter, whereas I feel like you are saying more like \"who knows, could be anything\" and so should be surprised each time the boring thing happens\n\n\n\n\n[Yudkowsky][23:45]\nthe concrete example I give is that the World-Ending Company will be able to use the same tech to build a true self-driving car, which would in the natural course of things be approved for sale a few years later after the world had ended.\n\n\n\n[Christiano][23:46]\nbut self-driving cars seem very likely to already be broadly deployed, and so the relevant question is really whether their technical improvements can also be deployed to those cars?\n(or else maybe that's another prediction we disagree about)\n\n\n\n\n[Yudkowsky][23:47]\nI feel like I would indeed not have the right to feel very surprised if Codex technology stagnated for the next 5 years, nor if it took a massive leap in 2 years and got ubiquitously adopted by lots of programmers.\nyes, I think that's a general timeline difference there\nre: self-driving cars\nI might be talkable into a bet where you took \"Codex tech will develop like this\" and I took the side \"literally anything else but that\"\n\n\n\n[Christiano][23:48]\nI think it would have to be over/under, I doubt I'm more surprised than you by something failing to be economically valuable, I'm surprised by big jumps in value\nseems like it will be tough to work\n\n\n\n\n[Yudkowsky][23:49]\nwell, if I was betting on something taking a big jump in income, I sure would bet on something in a relatively unregulated industry like Codex or anime waifus.\nbut that's assuming I made the bet at all, which is a hard sell when the bet is about the Future, which is notoriously hard to predict.\n\n\n\n[Christiano][23:50]\nI guess my strongest take is: if you want to pull the thing where you say that future developments proved you right and took unreasonable people like me by surprise, you've got to be able to say something in advance about what you expect to happen\n\n\n\n\n[Yudkowsky][23:51]\nso what if neither of us are surprised if Codex stagnates for 5 years, you win if Codex shows a smooth exponential 
in income, and I win if the income looks… jumpier? how would we quantify that?

[Christiano][23:52]
codex also does seem a bit unfair to you in that it may have to be adopted by lots of programmers which could slow things down a lot even if capabilities are pretty jumpy
(though I think in fact usefulness and not merely profit will basically just go up smoothly, with step sizes determined by arbitrary decisions about when to release something)

[Yudkowsky][23:53]
I'd also be concerned about unfairness to me in that earnable income is not the same as the gains from trade. If there's more than 1 competitor in the industry, their earnings from Codex may be much less than the value produced, and this may not change much with improvements in the tech.

5.10. Late-stage predictions
 

[Christiano][23:53]
I think my main update from this conversation is that you don't really predict someone to come out of nowhere with a model that can earn a lot of $, even if they could come out of nowhere with a model that could end the world, because of regulatory bottlenecks and nimbyism and general sluggishness and unwillingness to do things
does that seem right?

[Yudkowsky][23:55]
Well, and also because the World-ender is "the first thing that scaled with compute" and/or "the first thing that ate the real core of generality" and/or "the first thing that went over neutron multiplication factor 1".

[Christiano][23:55]
and so that cuts out a lot of the easily-specified empirical divergences, since "worth a lot of $" was the only general way to assess "big deal that people care about" while avoiding disputes like "but Zen was mostly developed by a single programmer, it's not like intense competition"
yeah, that's the real disagreement it seems like we'd want to talk about
but it just doesn't seem to lead to many prediction differences in advance?
I totally don't buy any of those models, I think they are bonkers
would love to bet on that

[Yudkowsky][23:56]
Prolly but I think the from-my-perspective-weird talk about GDP is probably concealing some kind of important crux, because caring about GDP still feels pretty alien to me.

[Christiano][23:56]
I feel like getting up to massive economic impacts without seeing "the real core of generality" seems like it should also be surprising on your view
like if it's 10 years from now and AI is a pretty big deal but no crazy AGI, isn't that surprising?

[Yudkowsky][23:57]
Mildly but not too surprising, I would imagine that people had built a bunch of neat stuff with gradient descent in realms where you could get a long way on self-play or massively collectible datasets.

[Christiano][23:58]
I'm fine with the crux being something that doesn't lead to any empirical disagreements, but in that case I just don't think you should claim credit for the worldview making great predictions.
(or the countervailing worldview making bad predictions)

[Yudkowsky][23:59]
stuff that we could see then: self-driving cars (10 years is enough for regulatory approval in many countries), super Codex, GPT-6 powered anime waifus being an increasingly loud source of (arguably justified) moral panic and a hundred-billion-dollar industry

[Christiano][23:59]
another option is "10% GWP growth in a year, before doom"
I think that's very likely, though might be too late to be helpful

[Yudkowsky][0:01]
see, that seems genuinely hard unless somebody gets GPT-4 far ahead of any
political opposition – I guess all the competent AGI groups lean solidly liberal at the moment? – and uses it to fake massive highly-persuasive sentiment on Twitter for housing liberalization.\n\n\n\n[Christiano][0:01]\nso seems like a bet?\nbut you don't get to win until doom \n\n\n\n\n[Yudkowsky][0:02]\nI mean, as written, I'd want to avoid cases like 10% growth on paper while recovering from a pandemic that produced 0% growth the previous year.\n\n\n\n[Christiano][0:02]\nyeah\n\n\n\n\n[Yudkowsky][0:04]\nI'd want to check the current rate (5% iirc) and what the variance on it was, 10% is a little low for surety (though my sense is that it's a pretty darn smooth graph that's hard to perturb)\nif we got 10% in a way that was clearly about AI tech becoming that ubiquitous, I'd feel relatively good about nodding along and saying, \"Yes, that is like unto the beginning of Paul's Prophecy\" not least because the timelines had been that long at all.\n\n\n\n[Christiano][0:05]\nlike 3-4%/year right now\nrandom wikipedia number is 5.5% in 2006-2007, 3-4% since 2010\n4% \n\n\n\n\n[Yudkowsky][0:06]\nI don't want to sound obstinate here. My model does not forbid that we dwiddle around on the AGI side while gradient descent tech gets its fingers into enough separate weakly-generalizing pies to produce 10% GDP growth, but I'm happy to say that this sounds much more like Paul's Prophecy is coming true.\n\n\n\n[Christiano][0:07]\nok, we should formalize at some point, but also need the procedure for you getting credit given that it can't resolve in your favor until the end of days\n\n\n\n\n[Yudkowsky][0:07]\nIs there something that sounds to you like Eliezer's Prophecy which we can observe before the end of the world?\n\n\n\n[Christiano][0:07]\nwhen you will already have all the epistemic credit you need\nnot on the \"simple core of generality\" stuff since that apparently immediately implies end of world\nmaybe something about ML running into obstacles en route to human level performance?\nor about some other kind of discontinuous jump even in a case where people care, though there seem to be a few reasons you don't expect many of those\n\n\n\n\n[Yudkowsky][0:08]\ndepends on how you define \"immediately\"? it's not long before the end of the world, but in some sad scenarios there is some tiny utility to you declaring me right 6 months before the end.\n\n\n\n[Christiano][0:09]\nI care a lot about the 6 months before the end personally\nthough I do think probably everything is more clear by then independent of any bet; but I guess you are more pessimistic about that\n\n\n\n\n[Yudkowsky][0:09]\nI'm not quite sure what I'd do in them, but I may have worked something out before then, so I care significantly in expectation if not in particular.\nI am more pessimistic about other people's ability to notice what reality is screaming in their faces, yes.\n\n\n\n[Christiano][0:10]\nif we were to look at various scaling curves, e.g. 
of loss vs model size or something, do you expect those to look distinctive as you hit the \"real core of generality\"?\n\n\n\n\n[Yudkowsky][0:10]\nlet me turn that around: if we add transformers into those graphs, do they jump around in a way you'd find interesting?\n\n\n\n[Christiano][0:11]\nnot really\n\n\n\n\n[Yudkowsky][0:11]\nis that because the empirical graphs don't jump, or because you don't think the jumps say much?\n\n\n\n[Christiano][0:11]\nbut not many good graphs to look at (I just have one in mind), so that's partly a prediction about what the exercise would show\nI don't think the graphs jump much, and also transformers come before people start evaluating on tasks where they help a lot\n\n\n\n\n[Yudkowsky][0:12]\nIt would not terribly contradict the terms of my Prophecy if the World-ending tech began by not producing a big jump on existing tasks, but generalizing to some currently not-so-popular tasks where it scaled much faster.\n\n\n\n[Christiano][0:13]\neh, they help significantly on contemporary tasks, but it's just not a huge jump relative to continuing to scale up model sizes\nor other ongoing improvements in architecture\nanyway, should try to figure out something, and good not to finalize a bet until you have some way to at least come out ahead, but I should sleep now\n\n\n\n\n[Yudkowsky][0:14]\nyeah, same.\nThing I want to note out loud lest I forget ere I sleep: I think the real world is full of tons and tons of technologies being developed as unprecedented prototypes in the midst of big fields, because the key thing to invest in wasn't the competitively explored center. Wright Flyer vs all expenditures on Traveling Machine R&D. First atomic pile and bomb vs all Military R&D.\nThis is one reason why Paul's Prophecy seems fragile to me. You could have the preliminaries come true as far as there being a trillion bucks in what looks like AI R&D, and then the WorldEnder is a weird prototype off to one side of that. saying \"But what about the rest of that AI R&D?\" is no more a devastating retort to reality than looking at AlphaGo and saying \"But weren't other companies investing billions in Better Software?\" Yeah but it was a big playing field with lots of different kinds of Better Software and no other medium-sized team of 15 people with corporate TPU backing was trying to build a system just like AlphaGo, even though multiple small outfits were trying to build prestige-earning gameplayers. Tech advancements very very often occur in places where investment wasn't dense enough to guarantee overlap.\n\n\n 6. Follow-ups on \"Takeoff Speeds\"\n \n6.1. Eliezer Yudkowsky's commentary\n \n\n[Yudkowsky][17:25]\nFurther comment that occurred to me on \"takeoff speeds\" if I've better understood the main thesis now: its hypotheses seem to include a perfectly anti-Thielian setup for AGI.\nThiel has a running thesis about how part of the story behind the Great Stagnation and the decline in innovation that's about atoms rather than bits – the story behind \"we were promised flying cars and got 140 characters\", to cite the classic Thielian quote – is that people stopped believing in \"secrets\".\nThiel suggests that you have to believe there are knowable things that aren't yet widely known – not just things that everybody already knows, plus mysteries that nobody will ever know – in order to be motivated to go out and innovate. 
Culture in developed countries shifted to label this kind of thinking rude – or rather, even ruder, even less tolerated than it had been decades before – so innovation decreased as a result.\nThe central hypothesis of \"takeoff speeds\" is that at the time of serious AGI being developed, it is perfectly anti-Thielian in that it is devoid of secrets in that sense. It is not permissible (on this viewpoint) for it to be the case that there is a lot of AI investment into AI that is directed not quite at the key path leading to AGI, such that somebody could spend $1B on compute for the key path leading to AGI before anybody else had spent $100M on that. There cannot exist any secret like that. The path to AGI will be known; everyone, or a wide variety of powerful actors, will know how profitable that path will be; the surrounding industry will be capable of acting on this knowledge, and will have actually been acting on it as early as possible; multiple actors are already investing in every tech path that would in fact be profitable (and is known to any human being at all), as soon as that R&D opportunity becomes available.\nAnd I'm not saying this is an inconsistent world to describe! I've written science fiction set in this world. I called it \"dath ilan\". It's a hypothetical world that is actually full of smart people in economic equilibrium. If anything like Covid-19 appears, for example, the governments and public-good philanthropists there have already set up prediction markets (which are not illegal, needless to say); and of course there are mRNA vaccine factories already built and ready to go, because somebody already calculated the profits from fast vaccines would be very high in case of a pandemic (no artificial price ceilings in this world, of course); so as soon as the prediction markets started calling the coming pandemic conditional on no vaccine, the mRNA vaccine factories were already spinning up.\nThis world, however, is not Earth.\nOn Earth, major chunks of technological progress quite often occur outside of a social context where everyone knew and agreed in advance on which designs would yield how much expected profit and many overlapping actors competed to invest in the most actually-promising paths simultaneously.\nAnd that is why you can read Inadequate Equilibria, and then read this essay on takeoff speeds, and go, \"Oh, yes, I recognize this; it's written inside the Modesty worldview; in particular, the imagination of an adequate world in which there is a perfect absence of Thielian secrets or unshared knowable knowledge about fruitful development pathways. This is the same world that already had mRNA vaccines ready to spin up on day one of the Covid-19 pandemic, because markets had correctly forecasted their option value and investors had acted on that forecast unimpeded. Sure would be an interesting place to live! But we don't live there.\"\nCould we perhaps end up in a world where the path to AGI is in fact not a Thielian secret, because in fact the first accessible path to AGI happens to lie along a tech pathway that already delivered large profits to previous investors who summed a lot of small innovations, a la experience with chipmaking, such that there were no large innovations just lots and lots of small innovations that yield 10% improvement annually on various tech benchmarks?\nI think that even in this case we will get weird, discontinuous, and fatal behaviors, and I could maybe talk about that when discussion resumes. 
But it is not ruled out to me that the first accessible pathway to AGI could happen to lie in the further direction of some road that was already well-traveled, already yielded much profit to now-famous tycoons back when its first steps were Thielian secrets, and hence is now replete with dozens of competing chasers for the gold rush.\nIt's even imaginable to me, though a bit less so, that the first path traversed to real actual pivotal/powerful/lethal AGI, happens to lie literally actually squarely in the central direction of the gold rush. It sounds a little less like the tech history I know, which is usually about how someone needed to swerve a bit and the popular gold-rush forecasts weren't quite right, but maybe that is just a selective focus of history on the more interesting cases.\nThough I remark that – even supposing that getting to big AGI is literally as straightforward and yet as difficult as falling down a semiconductor manufacturing roadmap (as otherwise the biggest actor to first see the obvious direction could just rush down the whole road) – well, TSMC does have a bit of an unshared advantage right now, if I recall correctly. And Intel had a bit of an advantage before that. So that happens even when there's competitors competing to invest billions.\nBut we can imagine that doesn't happen either, because instead of needing to build a whole huge manufacturing plant, there's just lots and lots of little innovations adding up to every key AGI threshold, which lots of actors are investing $10 million in at a time, and everybody knows which direction to move in to get to more serious AGI and they're right in this shared forecast.\nI am willing to entertain discussing this world and the sequelae there – I do think everybody still dies in this case – but I would not have this particular premise thrust upon us as a default, through a not-explicitly-spoken pressure against being so immodest and inegalitarian as to suppose that any Thielian knowable-secret will exist, or that anybody in the future gets as far ahead of others as today's TSMC or today's Deepmind.\nWe are, in imagining this world, imagining a world in which AI research has become drastically unlike today's AI research in a direction drastically different from the history of many other technologies.\nIt's not literally unprecedented, but it's also not a default environment for big moments in tech progress; it's narrowly precedented for particular industries with high competition and steady benchmark progress driven by huge investments into a sum of many tiny innovations.\nSo I can entertain the scenario. But if you want to claim that the social situation around AGI will drastically change in this way you foresee – not just that it could change in that direction, if somebody makes a big splash that causes everyone else to reevaluate their previous opinions and arrive at yours, but that this social change will occur and you know this now – and that the prerequisite tech path to AGI is known to you, and forces an investment situation that looks like the semiconductor industry – then your \"What do you think you know and how do you think you know it?\" has some significant explaining to do.\nOf course, I do appreciate that such a thing could be knowable, and yet not known to me. I'm not so silly as to disbelieve in secrets like that. 
They're all over the actual history of technological progress on our actual Earth.\n\n\n \n\nThe post Yudkowsky and Christiano discuss \"Takeoff Speeds\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Yudkowsky and Christiano discuss “Takeoff Speeds”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=4", "id": "e3d0e5db453420219d3359ef6bd0c4f5"} {"text": "Ngo and Yudkowsky on AI capability gains\n\n\n \nThis is the second post in a series of transcribed conversations about AGI forecasting and alignment. See the first post for prefaces and more information about the format.\n\nColor key:\n\n\n\n\n Chat by Richard and Eliezer \n Other chat \n Google Doc content \n Inline comments \n\n\n\n\n \n \n5. September 14 conversation\n \n5.1. Recursive self-improvement, abstractions, and miracles\n \n\n[Yudkowsky][11:00]\nGood morning / good evening.\nSo it seems like the obvious thread to pull today is your sense that I'm wrong about recursive self-improvement and consequentialism in a related way?\n\n\n\n[Ngo][11:04]\nRight. And then another potential thread (probably of secondary importance) is the question of what you mean by utility functions, and digging more into the intuitions surrounding those.\nBut let me start by fleshing out this RSI/consequentialism claim.\nI claim that your early writings about RSI focused too much on a very powerful abstraction, of recursively applied optimisation; and too little on the ways in which even powerful abstractions like this one become a bit… let's say messier, when they interact with the real world.\nIn particular, I think that Paul's arguments that there will be substantial progress in AI in the leadup to a RSI-driven takeoff are pretty strong ones.\n(Just so we're on the same page: to what extent did those arguments end up shifting your credences?)\n\n\n\n\n[Yudkowsky][11:09]\nI don't remember being shifted by Paul on this at all. I sure shifted a lot over events like Alpha Zero and the entire deep learning revolution. What does Paul say that isn't encapsulated in that update – does he furthermore claim that we're going to get fully smarter-than-human in all regards AI which doesn't cognitively scale much further either through more compute or through RSI?\n\n\n\n[Ngo][11:10]\nAh, I see. 
In that case, let's just focus on the update from the deep learning revolution.\n\n\n\n\n[Yudkowsky][11:12][11:13]\nI'll also remark that I see my foreseeable mistake there as having little to do with \"abstractions becoming messier when they interact with the real world\" – this truism tells you very little of itself, unless you can predict directional shifts in other variables just by contemplating the unknown messiness relative to the abstraction.\nRather, I'd see it as a neighboring error to what I've called the Law of Earlier Failure, where the Law of Earlier Failure says that, compared to the interesting part of the problem where it's fun to imagine yourself failing, you usually fail before then, because of the many earlier boring points where it's possible to fail.\nThe nearby reasoning error in my case is that I focused on an interesting way that AI capabilities could scale and the most powerful argument I had to overcome Robin's objections, while missing the way that Robin's objections could fail even earlier through rapid scaling and generalization in a more boring way.\n\n\n\n\nIt doesn't mean that my arguments about RSI were false about their domain of supposed application, but that other things were also true and those things happened first on our timeline. To be clear, I think this is an important and generalizable issue with the impossible task of trying to forecast the Future, and if I am wrong about other things it sure would be plausible if I was wrong in similar ways.\n\n\n\n[Ngo][11:13]\nThen the analogy here is something like: there is a powerful abstraction, namely consequentialism; and we both agree that (like RSI) a large amount of consequentialism is a very dangerous thing. But we disagree on the question of how much the strategic landscape in the leadup to highly-consequentialist AIs is affected by other factors apart from this particular abstraction.\n\"this truism tells you very little of itself, unless you can predict directional shifts in other variables just by contemplating the unknown messiness relative to the abstraction\"\nI disagree with this claim. It seems to me that the predictable direction in which the messiness pushes is away from the applicability of the high-level abstraction.\n\n\n\n\n[Yudkowsky][11:15]\nThe real world is messy, but good abstractions still apply, just with some messiness around them. 
The Law of Earlier Failure is not a failure of the abstraction being messy, it's a failure of the subject matter ending up different such that the abstractions you used were about a different subject matter.\nWhen a company fails before the exciting challenge where you try to scale your app across a million users, because you couldn't hire enough programmers to build your app at all, the problem is not that you had an unexpectedly messy abstraction about scaling to many users, but that the key determinants were a different subject matter than \"scaling to many users\".\nThrowing 10,000 TPUs at something and actually getting progress – not very much of a famous technological idiom at the time I was originally arguing with Robin – is not a leak in the RSI abstraction, it's just a way of getting powerful capabilities without RSI.\n\n\n\n[Ngo][11:18]\nTo me the difference between these two things seems mainly semantic; does it seem otherwise to you?\n\n\n\n\n[Yudkowsky][11:18]\nIf I'd been arguing with somebody who kept arguing in favor of faster timescales, maybe I'd have focused on that different subject matter and gotten a chance to be explicitly wrong about it. I mainly see my ur-failure here as letting myself be influenced by the whole audience that was nodding along very seriously to Robin's arguments, at the expense of considering how reality might depart in either direction from my own beliefs, and not just how Robin might be right or how to persuade the audience.\n\n\n\n[Ngo][11:19]\nAlso, \"throwing 10,000 TPUs at something and actually getting progress\" doesn't seem like an example of the Law of Earlier Failure – if anything it seems like an Earlier Success\n\n\n\n\n[Yudkowsky][11:19]\nit's an Earlier Failure of Robin's arguments about why AI wouldn't scale quickly, so my lack of awareness of this case of the Law of Earlier Failure is why I didn't consider why Robin's arguments could fail earlier\nthough, again, this is a bit harder to call if you're trying to call it in 2008 instead of 2018\nbut it's a valid lesson that the future is, in fact, hard to predict, if you're trying to do it in the past\nand I would not consider it a merely \"semantic\" difference as to whether you made a wrong argument about the correct subject matter, or a correct argument about the wrong subject matter\nthese are like… very different failure modes that you learn different lessons from\nbut if you're not excited by these particular fine differences in failure modes or lessons to learn from them, we should perhaps not dwell upon that part of the meta-level Art\n\n\n\n[Ngo][11:21]\nOkay, so let me see if I understand your position here.\nDue to the deep learning revolution, it turned out that there were ways to get powerful capabilities without RSI. This isn't intrinsically a (strong) strike against the RSI abstraction; and so, unless we have reason to expect another similarly surprising revolution before reaching AGI, it's not a good reason to doubt the consequentialism abstraction.\n\n\n\n\n[Yudkowsky][11:25]\nConsequentialism and RSI are very different notions in the first place. Consequentialism is, in my own books, significantly simpler. I don't see much of a conceptual connection between the two myself, except insofar as they both happen to be part of the connected fabric of a coherent worldview about cognition.\nIt is entirely reasonable to suspect that we may get another surprising revolution before reaching AGI. 
Expecting a particular revolution that gives you particular miraculous benefits is much more questionable and is an instance of conjuring expected good from nowhere, like hoping that you win the lottery because the first lottery ball comes up 37. (Also, if you sincerely believed you actually had info about what kind of revolution might lead to AGI, you should shut up about it and tell very few carefully selected people, not bake it into a public dialogue.)\n\n\n\n[Ngo][11:28]\n\nand I would not consider it a merely \"semantic\" difference as to whether you made a wrong argument about the correct subject matter, or a correct argument about the wrong subject matter\n\nOn this point: the implicit premise of \"and also nothing else will break this abstraction or render it much less relevant\" turns a correct argument about the wrong subject matter into an incorrect argument.\n\n\n\n\n[Yudkowsky][11:28]\nSure.\nThough I'd also note that there's an important lesson of technique where you learn to say things like that out loud instead of keeping them \"implicit\".\nLearned lessons like that are one reason why I go through your summary documents of our conversation and ask for many careful differences of wording about words like \"will happen\" and so on.\n\n\n\n[Ngo][11:30]\nMakes sense.\nSo I claim that:\n1. A premise like this is necessary for us to believe that your claims about consequentialism lead to extinction.\n2. A surprising revolution would make it harder to believe this premise, even if we don't know which particular revolution it is.\n3. If we'd been told back in 2008 that a surprising revolution would occur in AI, then we should have been less confident in the importance of the RSI abstraction to understanding AGI and AGI risk.\n\n\n\n\n[Yudkowsky][11:32][11:34]\nSuppose I put to you that this claim is merely subsumed by all of my previous careful qualifiers about how we might get a \"miracle\" and how we should be trying to prepare for an unknown miracle in any number of places. 
Why suspect that place particularly for a model-violation?\nI also think that you are misinterpreting my old arguments about RSI, in a pattern that matches some other cases of your summarizing my beliefs as \"X is the one big ultra-central thing\" rather than \"X is the point where the other person got stuck and Eliezer had to spend a lot of time arguing\".\nI was always claiming that RSI was a way for AGI capabilities to scale much further once they got far enough, not the way AI would scale to human-level generality.\n\n\n\n\nThis continues to be a key fact of relevance to my future model, in the form of the unfalsified original argument about the subject matter it previously applied to: if you lose control of a sufficiently smart AGI, it will FOOM, and this fact about what triggers the metaphorical equivalent of a full nuclear exchange and a total loss of the gameboard continues to be extremely relevant to what you have to do to obtain victory instead.\n\n\n\n[Ngo][11:34][11:35]\nPerhaps we're interpreting the word \"miracle\" in quite different ways.\n\n\n\n\n\nI think of it as an event with negligibly small probability.\n\n\n\n\n[Yudkowsky][11:35]\nEvents that actually have negligibly small probability are not much use in plans.\n\n\n\n[Ngo][11:35]\nWhich I guess doesn't fit with your claims that we should be trying to prepare for a miracle.\n\n\n\n\n[Yudkowsky][11:35]\nCorrect.\n\n\n\n[Ngo][11:35]\nBut I'm not recalling off the top of my head where you've claimed that.\nI'll do a quick search of the transcript\n\"You need to hold your mind open for any miracle and a miracle you didn't expect or think of in advance, because at this point our last hope is that in fact the future is often quite surprising.\"\nOkay, I see. The connotations of \"miracle\" seemed sufficiently strong to me that I didn't interpret \"you need to hold your mind open\" as practical advice.\nWhat sort of probability, overall, do you assign to us being saved by what you call a miracle?\n\n\n\n\n[Yudkowsky][11:40]\nIt's not a place where I find quantitative probabilities to be especially helpful.\nAnd if I had one, I suspect I would not publish it.\n\n\n\n[Ngo][11:41]\nCan you leak a bit of information? Say, more or less than 10%?\n\n\n\n\n[Yudkowsky][11:41]\nLess.\nThough a lot of that is dominated, not by the probability of a positive miracle, but by the extent to which we seem unprepared to take advantage of it, and so would not be saved by one.\n\n\n\n[Ngo][11:41]\nYeah, I see.\n\n\n\n \n5.2. The idea of expected utility\n \n\n[Ngo][11:43]\nOkay, I'm now significantly less confident about how much we actually disagree.\nAt least about the issues of AI cognition.\n\n\n\n\n[Yudkowsky][11:44]\nYou seem to suspect we'll get a particular miracle having to do with \"consequentialism\", which means that although it might be a miracle to me, it wouldn't be a miracle to you.\nThere is something forbidden in my model that is not forbidden in yours.\n\n\n\n[Ngo][11:45]\nI think that's partially correct, but I'd call it more a broad range of possibilities in the rough direction of you being wrong about consequentialism.\n\n\n\n\n[Yudkowsky][11:46]\nWell, as much as it may be nicer to debate when the other person has a specific positive expectation that X will work, we can also debate when I know that X won't work and the other person remains ignorant of that. 
So say more!\n\n\n\n[Ngo][11:47]\nThat's why I've mostly been trying to clarify your models rather than trying to make specific claims of my own.\nWhich I think I'd prefer to continue doing, if you're amenable, by asking you about what entities a utility function is defined over – say, in the context of a human.\n\n\n\n\n[Yudkowsky][11:51][11:53]\nI think that to contain the concept of Utility as it exists in me, you would have to do homework exercises I don't know how to prescribe. Maybe one set of homework exercises like that would be showing you an agent, including a human, making some set of choices that allegedly couldn't obey expected utility, and having you figure out how to pump money from that agent (or present it with money that it would pass up).\nLike, just actually doing that a few dozen times.\nMaybe it's not helpful for me to say this? If you say it to Eliezer, he immediately goes, \"Ah, yes, I could see how I would update that way after doing the homework, so I will save myself some time and effort and just make that update now without the homework\", but this kind of jumping-ahead-to-the-destination is something that seems to me to be… dramatically missing from many non-Eliezers. They insist on learning things the hard way and then act all surprised when they do. Oh my gosh, who would have thought that an AI breakthrough would suddenly make AI seem less than 100 years away the way it seemed yesterday? Oh my gosh, who would have thought that alignment would be difficult?\nUtility can be seen as the origin of Probability within minds, even though Probability obeys its own, simpler coherence constraints.\n\n\n\n\nthat is, you will have money pumped out of you, unless you weigh in your mind paths through time according to some quantitative weight, which determines how much resources you're willing to spend on preparing for them\nthis is why sapients think of things as being more or less likely\n\n\n\n[Ngo][11:53]\nSuppose that this agent has some high-level concept – say, honour – which leads it to pass up on offers of money.\n\n\n\n\n[Yudkowsky][11:55]\n\nSuppose that this agent has some high-level concept – say, honour – which leads it to pass up on offers of money.\n\nthen there's two possibilities:\n\nthis concept of honor is something that you can see as helping to navigate a path through time to a destination\nhonor isn't something that would be optimized into existence by optimization pressure for other final outcomes\n\n\n\n\n[Ngo][11:55]\nRight, I see.\nHmm, but it seems like humans often don't see concepts as helping to navigate a path in time to a destination. (E.g. the deontological instinct not to kill.)\nAnd yet those concepts were in fact optimised into existence by evolution.\n\n\n\n\n[Yudkowsky][11:59]\nYou're describing a defect of human reflectivity about their consequentialist structure, not a departure from consequentialist structure. 
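A minimal sketch of the money-pump homework referenced above, assuming a toy agent with cyclic preferences (A over B, B over C, C over A) and a small fee per preferred swap; the agent, the items, and the numbers are illustrative only, not a claim about any particular system:

```python
# Toy money pump: an agent with cyclic preferences (A over B, B over C, C over A)
# will pay a small fee for every swap it "prefers", so it can be walked
# around the cycle until its money runs out. All names and numbers are illustrative.

class CyclicAgent:
    # (offered, currently_held) pairs the agent considers an upgrade
    PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}

    def __init__(self, holding, money):
        self.holding = holding
        self.money = money

    def accepts(self, offered, fee):
        return (offered, self.holding) in self.PREFERS and self.money >= fee


def pump(agent, fee=1, max_trades=10):
    """Repeatedly offer the item the agent prefers to its current holding, charging a fee."""
    upgrade_for = {"A": "C", "B": "A", "C": "B"}  # which item beats the one it holds
    extracted = 0
    for _ in range(max_trades):
        offer = upgrade_for[agent.holding]
        if not agent.accepts(offer, fee):
            break
        agent.holding, agent.money = offer, agent.money - fee
        extracted += fee
    return extracted


agent = CyclicAgent(holding="A", money=5)
print(pump(agent))  # -> 5: every trade looked like an improvement; the money is gone
```

Each individual trade looks like an improvement to the agent, yet after looping around the cycle it holds its original item and strictly less money, which is the kind of incoherence that consistent expected-utility weights rule out.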
\n\n\n\n[Ngo][12:01]\n(Sorry, internet was slightly buggy; switched to a better connection now.)\n\n\n\n\n[Yudkowsky][12:01]\nBut yes, from my perspective, it creates a very large conceptual gap that I can stare at something for a few seconds and figure out how to parse it as navigating paths through time, while others think that \"consequentialism\" only happens when their minds are explicitly thinking about \"well, what would have this consequence\" using language.\nSimilarly, when it comes to Expected Utility, I see that any time something is attaching relative-planning-weights to paths through time, not when a human is thinking out loud about putting spoken numbers on outcomes\n\n\n\n[Ngo][12:02]\nHuman consequentialist structure was optimised by evolution for a different environment. Insofar as we are consequentialists in a new environment, it's only because we're able to be reflective about our consequentialist structure (or because there are strong similarities between the environments).\n\n\n\n\n[Yudkowsky][12:02]\nFalse.\nIt just generalized out-of-distribution because the underlying coherence of the coherent behaviors was simple.\nWhen you have a very simple pattern, it can generalize across weak similarities, not \"strong similarities\".\nThe human brain is large but the coherence in it is simple.\nThe idea, the structure, that explains why the big thing works, is much smaller than the big thing.\nSo it can generalize very widely.\n\n\n\n[Ngo][12:04]\nTaking this example of the instinct not to kill people – is this one of the \"very simple patterns\" that you're talking about?\n\n\n\n\n[Yudkowsky][12:05]\n\"Reflectivity\" doesn't help per se unless on some core level a pattern already generalizes, I mean, either a truth can generalize across the data or it can't? So I'm a bit puzzled about why you're bringing up \"reflectivity\" in this context.\nAnd, no.\nAn instinct not to kill doesn't even seem to me like a plausible cross-cultural universal. 40% of deaths among Yanomami men are in intratribal fights, iirc.\n\n\n\n[Ngo][12:07]\nAh, I think we were talking past each other. When you said \"this concept of honor is something that you can see as helping to navigate a path through time to a destination\" I thought you meant \"you\" as in the agent in question (as you used it in some previous messages) not \"you\" as in a hypothetical reader.\n\n\n\n\n[Yudkowsky][12:07]\nah.\nit would not have occurred to me to ascribe that much competence to an agent that wasn't a superintelligence.\neven I don't have time to think about why more than 0.01% of my thoughts do anything, but thankfully, you don't have to think about why 2 + 2 = 4 for it to be the correct answer for counting sheep.\n\n\n\n[Ngo][12:10]\nGot it.\nI might now try to throw a high-level (but still inchoate) disagreement at you and see how that goes. 
But while I'm formulating that, I'm curious what your thoughts are on where to take the discussion.\nActually, let's spend a few minutes deciding where to go next, and then take a break\nI'm thinking that, at this point, there might be more value in moving onto geopolitics\n\n\n\n\n[Yudkowsky][12:19]\nSome of my current thoughts are a reiteration of old despair: It feels to me like the typical Other within EA has no experience with discovering unexpected order, with operating a generalization that you can expect will cover new cases even when that isn't immediately obvious, with operating that generalization to cover those new cases correctly, with seeing simple structures that generalize a lot and having that be a real and useful and technical experience; instead of somebody blathering in a non-expectation-constraining way about how \"capitalism is responsible for everything wrong with the world\", and being able to extend that to lots of cases.\nI could try to use much simpler language in hopes that people actually look-at-the-water Feynman-style, like \"navigating a path through time\" instead of Consequentialism which is itself a step down from Expected Utility.\nBut you actually do lose something when you throw away the more technical concept. And then people still think that either you instantly see in the first second how something is a case of \"navigating a path through time\", or that this is something that people only do explicitly when visualizing paths through time using that mental terminology; or, if Eliezer says that it's \"navigating time\" anyways, this must be an instance of Eliezer doing that thing other people do when they talk about how \"Capitalism is responsible for all the problems of the world\". They have no experience operating genuinely useful, genuinely deep generalizations that extend to nonobvious things.\nAnd in fact, being able to operate some generalizations like that is a lot of how I know what I know, in reality and in terms of the original knowledge that came before trying to argue that knowledge with people. So trying to convey the real source of the knowledge feels doomed. It's a kind of idea that our civilization has lost, like that college class Feynman ran into.\n\n\n\n[Soares][12:19]\nMy own sense (having been back for about 20min) is that one of the key cruxes is in \"is it possible that non-scary cognition will be able to end the acute risk period\", or perhaps \"should we expect a longish regime of pre-scary cognition, that we can study and learn to align in such a way that by the time we get scary cognition we can readily align it\".\n\n\n\n\n[Ngo][12:19]\nSome potential prompts for that:\n\nwhat are some scary things which might make governments take AI more seriously than they took covid, and which might happen before AGI\nhow much of a bottleneck in your model is governmental competence? and how much of a difference do you see in this between, say, the US and China?\n\n\n\n\n\n[Soares][12:20]\nI also have a bit of a sense that there's a bit more driving to do on the \"perhaps EY is just wrong about the applicability of the consequentialism arguments\" (in a similar domain), and would be happy to try articulating a bit of what I think are the not-quite-articulated-to-my-satisfaction arguments on that side.\n\n\n\n\n[Yudkowsky][12:21]\nI also had a sense – maybe mistaken – that RN did have some specific ideas about how \"consequentialism\" might be inapplicable. 
though maybe I accidentally refuted that in passing because the idea was \"well, what if it didn't know what consequentialism was?\" and then I explained that reflectivity was not required to make consequentialism generalize. but if so, I'd like RN to say explicitly what specific idea got refuted that way. or failing that, talk about the specific idea that didn't get refuted.\n\n\n\n[Ngo][12:23]\nThat wasn't my objection, but I do have some more specific ideas, which I could talk about.\nAnd I'd also be happy for Nate to try articulating some of the arguments he mentioned above.\n\n\n\n\n[Yudkowsky][12:23]\nI have a general worry that this conversation has gotten too general, and that it would be more productive, even of general understanding, to start from specific ideas and shoot those down specifically.\n\n\n\n\n[Ngo: ]\n\n\n\n\n\n\n\n[Ngo][12:26]\nThe other thing is that, for pedagogical purposes, I think it'd be useful for you to express some of your beliefs about how governments will respond to AI\nI think I have a rough guess about what those beliefs are, but even if I'm right, not everyone who reads this transcript will be\n\n\n\n\n[Yudkowsky][12:28]\nWhy would I be expected to know that? I could talk about weak defaults and iterate through an unending list of possibilities.\nThinking that Eliezer thinks he knows that to any degree of specificity feels like I'm being weakmanned!\n\n\n\n[Ngo][12:28]\nI'm not claiming you have any specific beliefs\n\n\n\n\n[Yudkowsky][12:29]\nI suppose I have skepticism when other people dream up elaborately positive and beneficial reactions apparently drawn from some alternate nicer political universe that had an absolutely different response to Covid-19, and so on.\n\n\n\n[Ngo][12:29]\nBut I'd guess that your models rule out, for instance, the US and China deeply cooperating on AI before it's caused any disasters\n\n\n\n\n[Yudkowsky][12:30]\n\"Deeply\"? Sure. That sounds like something that has never happened, and I'm generically skeptical about political things that go better than any political thing has ever gone before.\nI guess we could talk about that? It doesn't seem like the most productive area, but maybe it lies upstream of more technical disagreements because we disagree about what AGI would actually have to do to have the world not end.\n\n\n\n[Soares][12:31]\nCool. I claim it's time for a break, and then I nominate a little Eliezer gov't-response-overview followed by specific maybe-consequentialism-based-worries-aren't-a-problem-in-practice ideas from Richard.\n\n\n\n\n[Ngo: ]\n\n\n\n\n\n\n\n\n[Yudkowsky][12:32]\nSee you in 28mins\n\n\n \n5.3. 
Epistemology, and assessing the idea of expected utility\n \n\n[Ngo][13:03]\n\nSome of my current thoughts are a reiteration of old despair: It feels to me like the typical Other within EA has no experience with discovering unexpected order, with operating a generalization that you can expect will cover new cases even when that isn't immediately obvious, with operating that generalization to cover those new cases correctly, with seeing simple structures that generalize a lot and having that be a real and useful and technical experience; instead of somebody blathering in a non-expectation-constraining way about how \"capitalism is responsible for everything wrong with the world\", and being able to extend that to lots of cases.\nI could try to use much simpler language in hopes that people actually look-at-the-water Feynman-style, like \"navigating a path through time\" instead of Consequentialism which is itself a step down from Expected Utility.\nBut you actually do lose something when you throw away the more technical concept. And then people still think that either you instantly see in the first second how something is a case of \"navigating a path through time\", or that this is something that people only do explicitly when visualizing paths through time using that mental terminology; or, if Eliezer says that it's \"navigating time\" anyways, this must be an instance of Eliezer doing that thing other people do when they talk about how \"Capitalism is responsible for all the problems of the world\". They have no experience operating genuinely useful, genuinely deep generalizations that extend to nonobvious things.\nAnd in fact, being able to operate some generalizations like that is a lot of how I know what I know, in reality and in terms of the original knowledge that came before trying to argue that knowledge with people. So trying to convey the real source of the knowledge feels doomed. It's a kind of idea that our civilization has lost, like that college class Feynman ran into.\n\nOoops, didn't see this comment earlier. With respect to discovering unexpected order, one point that seems relevant is the extent to which that order provides predictive power. To what extent do you think that predictive successes in economics are important evidence for expected utility theory being a powerful formalism? (Or are there other ways in which it's predictively powerful that provide significant evidence?)\nI'd be happy with a quick response to that, and then on geopolitics, here's a prompt to kick us off:\n\nIf the only two actors involved in AGI development were the US and the UK governments, how much safer (or less safe) would you think we were compared with a world in which the two actors are the US and Chinese governments? How about a world in which the US government was a decade ahead of everyone else in reaching AGI?\n\n\n\n\n\n[Yudkowsky][13:06]\nI think that the Apollo space program is much deeper evidence for Utility. 
Observe, if you train protein blobs to run around the savanna, they also go to the moon!\nIf you think of \"utility\" as having something to do with the human discipline called \"economics\" then you are still thinking of it in a much much much more narrow way than I do.\n\n\n\n[Ngo][13:07]\nI'm not asking about evidence for utility as an abstraction in general, I'm asking for evidence based on successful predictions that have been made using it.\n\n\n\n\n[Yudkowsky][13:10]\nThat doesn't tend to happen a lot, because all of the deep predictions that it makes are covered by shallow predictions that people made earlier.\nConsider the following prediction of evolutionary psychology: Humans will enjoy activities associated with reproduction!\n\"What,\" says Simplicio, \"you mean like dressing up for dates? I don't enjoy that part.\"\n\"No, you're overthinking it, we meant orgasms,\" says the evolutionary psychologist.\n\"But I already knew that, that's just common sense!\" replies Simplicio.\n\"And yet it is very specifically a prediction of evolutionary psychology which is not made specifically by any other theory of human minds,\" replies the evolutionary psychologist.\n\"Not an advance prediction, just-so story, too obvious,\" replies Simplicio.\n\n\n\n[Ngo][13:11]\nYepp, I agree that most of its predictions won't be new. Yet evolution is a sufficiently powerful theory that people have still come up with a range of novel predictions that derive from it.\nInsofar as you're claiming that expected utility theory is also very powerful, then we should expect that it also provides some significant predictions.\n\n\n\n\n[Yudkowsky][13:12]\nAn advance prediction of the notion of Utility, I suppose, is that if you train an AI which is otherwise a large blob of layers – though this may be inadvisable for other reasons – to the point where it starts solving lots of novel problems, that AI will tend to value aspects of outcomes with weights, and weight possible paths through time (the dynamic progress of the environment), and use (by default, usually, roughly) the multiplication of these weights to allocate limited resources between mutually conflicting plans.\n\n\n\n[Ngo][13:13]\nAgain, I'm asking for evidence in the form of successful predictions.\n\n\n\n\n[Yudkowsky][13:14]\nI predict that people will want some things more than others, think some possibilities are more likely than others, and prefer to do things that lead to stuff they want a lot through possibilities they think are very likely!\n\n\n\n[Ngo][13:15]\nIt would be very strange to me if a theory which makes such strong claims about things we can't yet verify can't shed light on anything which we are in a position to verify.\n\n\n\n\n[Yudkowsky][13:15]\nIf you think I'm deriving my predictions of catastrophic alignment failure through something more exotic than that, you're missing the reason why I'm so worried. It doesn't take intricate complicated exotic assumptions.\nIt makes the same kind of claims about things we can't verify yet as it makes about things we can verify right now.\n\n\n\n[Ngo][13:16]\nBut that's very easy to do! 
Any theory can do that.\n\n\n\n\n[Yudkowsky][13:17]\nFor example, if somebody wants money, and you set up a regulation which prevents them from making money, it predicts that the person will look for a new way to make money that bypasses the regulation.\n\n\n\n[Ngo][13:17]\nAnd yes, of course fitting previous data is important evidence in favour of a theory\n\n\n\n\n[Yudkowsky][13:17]\n\n[But that's very easy to do! Any theory can do that.]\n\nFalse! Any theory can do that in the hands of a fallible agent which invalidly, incorrectly derives predictions from the theory.\n\n\n\n[Ngo][13:18]\nWell, indeed. But the very point at hand is whether the predictions you base on this theory are correctly or incorrectly derived.\n\n\n\n\n[Yudkowsky][13:18]\nIt is not the case that every theory does an equally good job of predicting the past, given valid derivations of predictions.\nWell, hence the analogy to evolutionary psychology. If somebody doesn't see the blatant obviousness of how sexual orgasms are a prediction specifically of evolutionary theory, because it's \"common sense\" and \"not an advance prediction\", what are you going to do? We can, in this case, with a lot more work, derive more detailed advance predictions about degrees of wanting that correlate in detail with detailed fitness benefits. But that's not going to convince anybody who overlooked the really blatant and obvious primary evidence.\nWhat they're missing there is a sense of counterfactuals, of how the universe could just as easily have looked if the evolutionary origins of psychology were false: why should organisms want things associated with reproduction, why not instead have organisms running around that want things associated with rolling down hills?\nSimilarly, if optimizing complicated processes for outcomes hard enough, didn't produce cognitive processes that internally mapped paths through time and chose actions conditional on predicted outcomes, human beings would… not think like that? What am I supposed to say here?\n\n\n\n[Ngo][13:24]\nLet me put it this way. There are certain traps that, historically, humans have been very liable to fall into. For example, seeing a theory, which seems to match so beautifully and elegantly the data which we've collected so far, it's very easy to dramatically overestimate how much that data favours that theory. Fortunately, science has a very powerful social technology for avoiding this (i.e. making falsifiable predictions) which seems like approximately the only reliable way to avoid it – and yet you don't seem concerned at all about the lack of application of this technology to expected utility theory.\n\n\n\n\n[Yudkowsky][13:25]\nThis is territory I covered in the Sequences, exactly because \"well it didn't make a good enough advance prediction yet!\" is an excuse that people use to reject evolutionary psychology, some other stuff I covered in the Sequences, and some very predictable lethalities of AGI.\n\n\n\n[Ngo][13:26]\nWith regards to evolutionary psychology: yes, there are some blatantly obvious ways in which it helps explain the data available to us. 
But there are also many people who have misapplied or overapplied evolutionary psychology, and it's very difficult to judge whether they have or have not done so, without asking them to make advance predictions.\n\n\n\n\n[Yudkowsky][13:26]\nI talked about the downsides of allowing humans to reason like that, the upsides, the underlying theoretical laws of epistemology (which are clear about why agents that reason validly or just unbiasedly would do that without the slightest hiccup), etc etc.\nIn the case of the theory \"people want stuff relatively strongly, predict stuff relatively strongly, and combine the strengths to choose\", what kind of advance prediction that no other theory could possibly make, do you expect that theory to make?\nIn the worlds where that theory is true, how should it be able to prove itself to you?\n\n\n\n[Ngo][13:28]\nI expect deeper theories to make more and stronger predictions.\nI'm currently pretty uncertain if expected utility theory is a deep or shallow theory.\nBut deep theories tend to shed light in all sorts of unexpected places.\n\n\n\n\n[Yudkowsky][13:30]\nThe fact is, when it comes to AGI (general optimization processes), we have only two major datapoints in our dataset, natural selection and humans. So you can either try to reason validly about what theories predict about natural selection and humans, even though we've already seen the effects of those; or you can claim to give up in great humble modesty while actually using other implicit theories instead to make all your predictions and be confident in them.\n\n\n\n[Ngo][13:30]\n\nI talked about the downsides of allowing humans to reason like that, the upsides, the underlying theoretical laws of epistemology (which are clear about why agents that reason validly or just unbiasedly would do that without the slightest hiccup), etc etc.\n\nI'm familiar with your writings on this, which is why I find myself surprised here. I could understand a perspective of \"yes, it's unfortunate that there are no advanced predictions, it's a significant weakness, I wish more people were doing this so we could better understand this vitally important theory\". But that seems very different from your perspective here.\n\n\n\n\n[Yudkowsky][13:32]\nOh, I'd love to be making predictions using a theory that made super detailed advance predictions made by no other theory which had all been borne out by detailed experimental observations! I'd also like ten billion dollars, a national government that believed everything I honestly told them about AGI, and a drug that raises IQ by 20 points.\n\n\n\n[Ngo][13:32]\nThe very fact that we have only two major datapoints is exactly why it seems like such a major omission that a theory which purports to describe intelligent agency has not been used to make any successful predictions about the datapoints we do have.\n\n\n\n\n[Yudkowsky][13:32][13:33]\nThis is making me think that you imagine the theory as something much more complicated and narrow than it is.\nJust look at the water.\nNot very special water with an index.\nJust regular water.\nPeople want stuff. They want some things more than others. When they do stuff they expect stuff to happen.\n\n\n\n\nThese are predictions of the theory. 
Not advance predictions, but predictions nonetheless.\n\n\n\n[Ngo][13:33][13:33]\nI'm accepting your premise that it's something deep and fundamental, and making the claim that deep, fundamental theories are likely to have a wide range of applications, including ones we hadn't previously thought of.\n\n\n\n\n\nDo you disagree with that premise, in general?\n\n\n\n\n[Yudkowsky][13:36]\nI don't know what you really mean by \"deep fundamental theory\" or \"wide range of applications we hadn't previously thought of\", especially when it comes to structures that are this simple. It sounds like you're still imagining something I mean by Expected Utility which is some narrow specific theory like a particular collection of gears that are appearing in lots of places.\nAre numbers a deep fundamental theory?\nIs addition a deep fundamental theory?\nIs probability a deep fundamental theory?\nIs the notion of the syntax-semantics correspondence in logic and the notion of a generally semantically valid reasoning step, a deep fundamental theory?\n\n\n\n[Ngo][13:38]\nYes to the first three, all of which led to very successful novel predictions.\n\n\n\n\n[Yudkowsky][13:38]\nWhat's an example of a novel prediction made by the notion of probability?\n\n\n\n[Ngo][13:38]\nMost applications of the central limit theorem.\n\n\n\n\n[Yudkowsky][13:39]\nThen I should get to claim every kind of optimization algorithm which used expected utility, as a successful advance prediction of expected utility? Optimal stopping and all the rest? Seems cheap and indeed invalid to me, and not particularly germane to whether these things appear inside AGIs, but if that's what you want, then sure.\n\n\n\n[Ngo][13:39]\n\nThese are predictions of the theory. Not advance predictions, but predictions nonetheless.\n\nI agree that it is a prediction of the theory. And yet it's also the case that smarter people than either of us have been dramatically mistaken about how well theories fit previously-collected data. (Admittedly we have advantages which they didn't, like a better understanding of cognitive biases – but it seems like you're ignoring the possibility of those cognitive biases applying to us, which largely negates those advantages.)\n\n\n\n\n[Yudkowsky][13:42]\nI'm not ignoring it, just adjusting my confidence levels and proceeding, instead of getting stuck in an infinite epistemic trap of self-doubt.\nI don't live in a world where you either have the kind of detailed advance experimental predictions that should convince the most skeptical scientist and render you immune to all criticism, or, alternatively, you are suddenly in a realm beyond the reach of all epistemic authority, and you ought to cuddle up into a ball and rely only on wordless intuitions and trying to put equal weight on good things happening and bad things happening.\nI live in a world where I proceed with very strong confidence if I have a detailed formal theory that made detailed correct advance predictions, and otherwise go around saying, \"well, it sure looks like X, but we can be on the lookout for a miracle too\".\nIf this was a matter of thermodynamics, I wouldn't even be talking like this, and we wouldn't even be having this debate.\nI'd just be saying, \"Oh, that's a perpetual motion machine. You can't build one of those. 
Sorry.\" And that would be the end.\nMeanwhile, political superforecasters go on making well-calibrated predictions about matters much murkier and more complicated than these, often without anything resembling a clearly articulated theory laid forth at length, let alone one that had made specific predictions even retrospectively. They just go do it instead of feeling helpless about it.\n\n\n\n[Ngo][13:45]\n\nThen I should get to claim every kind of optimization algorithm which used expected utility, as a successful advance prediction of expected utility? Optimal stopping and all the rest? Seems cheap and indeed invalid to me, and not particularly germane to whether these things appear inside AGIs, but if that's what you want, then sure.\n\nThese seem better than nothing, but still fairly unsatisfying, insofar as I think they are related to more shallow properties of the theory.\nHmm, I think you're mischaracterising my position. I nowhere advocated for feeling helpless or curling up in a ball. I was just noting that this is a particularly large warning sign which has often been valuable in the past, and it seemed like you were not only speeding past it blithely, but also denying the existence of this category of warning signs.\n\n\n\n\n[Yudkowsky][13:48]\nI think you're looking for some particular kind of public obeisance that I don't bother to perform internally because I'd consider it a wasted motion. If I'm lost in a forest I don't bother going around loudly talking about how I need a forest theory that makes detailed advance experimental predictions in controlled experiments, but, alas, I don't have one, so now I should be very humble. I try to figure out which way is north.\nWhen I have a guess at a northerly direction, it would then be an error to proceed with as much confidence as if I'd had a detailed map and had located myself upon it.\n\n\n\n[Ngo][13:49]\nInsofar as I think we're less lost than you do, then the weaknesses of whichever forest theory implies that we're lost are relevant for this discussion.\n\n\n\n\n[Yudkowsky][13:49]\nThe obeisance I make in that direction is visible in such statements as, \"But this, of course, is a prediction about the future, which is well-known to be quite difficult to predict, in fact.\"\nIf my statements had been matters of thermodynamics and particle masses, I would not be adding that disclaimer.\nBut most of life is not a statement about particle masses. I have some idea of how to handle that. I do not need to constantly recite disclaimers to myself about it.\nI know how to proceed when I have only a handful of data points which have already been observed and my theories of them are retrospective theories. This happens to me on a daily basis, eg when dealing with human beings.\n\n\n\n[Soares][13:50]\n(I have a bit of a sense that we're going in a circle. It also seems to me like there's some talking-past happening.)\n(I suggest a 5min break, followed by EY attempting to paraphrase RN to his satisfaction and vice versa.)\n\n\n\n\n[Yudkowsky][13:51]\nI'd have more trouble than usual paraphrasing RN because epistemic helplessness is something I find painful to type out.\n\n\n\n[Soares][13:51]\n(I'm also happy to attempt to paraphrase each point as I see it; it may be that this smooths over some conversational wrinkle.)\n\n\n\n\n[Ngo][13:52]\nSeems like a good suggestion. I'm also happy to move on to the next topic. This was meant to be a quick clarification.\n\n\n\n\n[Soares][13:52]\nnod. 
It does seem to me like it possibly contains a decently sized meta-crux, about what sorts of conclusions one is licensed to draw from what sorts of observations\nthat, eg, might be causing Eliezer's probabilities to concentrate but not Richard's.\n\n\n\n\n[Yudkowsky][13:52]\nYeah, this is in the opposite direction of \"more specificity\".\n\n\n\n\n[Soares: ]\n[Ngo: ]\n\n\n\n\nI frankly think that most EAs suck at explicit epistemology, OpenPhil and FHI affiliated EAs are not much of an exception to this, and I expect I will have more luck talking people out of specific errors than talking them out of the infinite pit of humble ignorance considered abstractly.\n\n\n\n[Soares][13:54]\nOk, that seems to me like a light bid to move to the next topic from both of you, my new proposal is that we take a 5min break and then move to the next topic, and perhaps I'll attempt to paraphrase each point here in my notes, and if there's any movement in the comments there we can maybe come back to it later.\n\n\n\n\n[Ngo: ]\n\n\n\n\n\n\n\n\n[Ngo][13:54]\nBroadly speaking I am also strongly against humble ignorance (albeit to a lesser extent than you are).\n\n\n\n\n[Yudkowsky][13:55]\nI'm off to take a 5-minute break, then!\n\n\n \n5.4. Government response and economic impact\n \n\n[Ngo][14:02]\nA meta-level note: I suspect we're around the point of hitting significant diminishing marginal returns from this format. I'm open to putting more time into the debate (broadly construed) going forward, but would probably want to think a bit about potential changes in format.\n\n\n\n\n[Soares][14:04, moved two up in log]\n\nA meta-level note: I suspect we're around the point of hitting significant diminishing marginal returns from this format. I'm open to putting more time into the debate (broadly construed) going forward, but would probably want to think a bit about potential changes in format.\n\n(Noted, thanks!)\n\n\n\n\n[Yudkowsky][14:03]\nI actually think that may just be a matter of at least one of us, including Nate, having to take on the thankless job of shutting down all digressions into abstractions and the meta-level.\n\n\n\n[Ngo][14:05]\n\nI actually think that may just be a matter of at least one of us, including Nate, having to take on the thankless job of shutting down all digressions into abstractions and the meta-level.\n\nI'm not so sure about this, because it seems like some of the abstractions are doing a lot of work.\n\n\n\n\n[Yudkowsky][14:03][14:04]\nAnyways, government reactions?\nIt seems to me like the best observed case for government reactions – which I suspect is no longer available in the present era as a possibility – was the degree of cooperation between the USA and Soviet Union about avoiding nuclear exchanges.\nThis included such incredibly extravagant acts of cooperation as installing a direct line between the President and Premier!\n\n\n\n\nwhich is not what I would really characterize as very \"deep\" cooperation, but it's more than a lot of cooperation you see nowadays.\nMore to the point, both the USA and Soviet Union proactively avoided doing anything that might lead towards starting down a path that led to a full nuclear exchange.\n\n\n\n[Ngo][14:04]\nThe question I asked earlier:\n\nIf the only two actors involved in AGI development were the US and the UK governments, how much safer (or less safe) would you think we were compared with a world in which the two actors are the US and Chinese governments? 
How about a world in which the US government was a decade ahead of everyone else in reaching AGI?\n\n\n\n\n\n[Yudkowsky][14:05]\nThey still provoked one another a lot, but, whenever they did so, tried to do so in a way that wouldn't lead to a full nuclear exchange.\nIt was mutually understood to be a strategic priority and lots of people on both sides thought a lot about how to avoid it.\nI don't know if that degree of cooperation ever got to the fantastic point of having people from both sides in the same room brainstorming together about how to avoid a full nuclear exchange, because that is, like, more cooperation than you would normally expect from two governments, but it wouldn't shock me to learn that this had ever happened.\nIt seems obvious to me that if some situation developed nowadays which raised the profile of the possibility of a nuclear exchange between the USA and Russia, we would not currently be able to do anything like installing a Hot Line between the US and Russian offices if such a Hot Line had not already been installed. This is lost social technology from a lost golden age. But still, it's not unreasonable to take this as the upper bound of attainable cooperation; it's been observed within the last 100 years.\nAnother guess for how governments react is a very simple and robust one backed up by a huge number of observations:\nThey don't.\nThey have the same kind of advance preparation and coordination around AGI, in advance of anybody getting killed, as governments had around the mortgage crisis of 2007 in advance of any mortgages defaulting.\nI am not sure I'd put this probability over 50% but it's certainly by far the largest probability over any competitor possibility specified to an equally low amount of detail.\nI would expect anyone whose primary experience was with government, who was just approaching this matter and hadn't been talked around to weird exotic views, to tell you the same thing as a matter of course.\n\n\n\n[Ngo][14:10]\n\nBut still, it's not unreasonable to take this as the upper bound of attainable cooperation; it's been observed within the last 100 years.\n\nIs this also your upper bound conditional on a world that has experienced a century's worth of changes within a decade, and in which people are an order of magnitude wealthier than they currently are?\n\nI am not sure I'd put this probability over 50% but it's certainly by far the largest probability over any competitor possibility specified to an equally low amount of detail.\n\nwhich one was this? US/UK?\n\n\n\n\n[Yudkowsky][14:12][14:14]\nAssuming governments do react, we have the problem of \"What kind of heuristic could have correctly led us to forecast that the US's reaction to a major pandemic would be for the FDA to ban hospitals from doing in-house Covid tests? What kind of mental process could have led us to make that call?\" And we couldn't have gotten it exactly right, because the future is hard to predict; the best heuristic I've come up with, that feels like it at least would not have been surprised by what actually happened, is, \"The government will react with a flabbergasting level of incompetence, doing exactly the wrong thing, in some unpredictable specific way.\"\n\nwhich one was this? 
US/UK?\n\nI think if we're talking about any single specific government like the US or UK then the probability is over 50% that they don't react in any advance coordinated way to the AGI crisis, to a greater and more effective degree than they \"reacted in an advance coordinated way\" to pandemics before 2020 or mortgage defaults before 2007.\n\n\n\n\nMaybe some two governments somewhere on Earth will have a high-level discussion between two cabinet officials.\n\n\n\n[Ngo][14:14]\nThat's one lesson you could take away. Another might be: governments will be very willing to restrict the use of novel technologies, even at colossal expense, in the face of even a small risk of large harms.\n\n\n\n\n[Yudkowsky][14:15]\n\nThat's one lesson you could take away. Another might be: governments will be very willing to restrict the use of novel technologies, even at colossal expense, in the face of even a small risk of large harms.\n\nI just… don't know what to do when people talk like this.\nIt's so absurdly, absurdly optimistic.\nIt's taking a massive massive failure and trying to find exactly the right abstract gloss to put on it that makes it sound like exactly the right perfect thing will be done next time.\nThis just – isn't how to understand reality.\nThis isn't how superforecasters think.\nThis isn't sane.\n\n\n\n[Soares][14:16]\n(be careful about ad hominem)\n(Richard might not be doing the insane thing you're imagining, to generate that sentence, etc)\n\n\n\n\n[Ngo][14:17]\nRight, I'm not endorsing this as my mainline prediction about what happens. Mainly what I'm doing here is highlighting that your view seems like one which cherrypicks pessimistic interpretations.\n\n\n\n\n[Yudkowsky][14:18]\nThat abstract description \"governments will be very willing to restrict the use of novel technologies, even at colossal expense, in the face of even a small risk of large harms\" does not in fact apply very well to the FDA banning hospitals from using their well-established in-house virus tests, at risk of the alleged harm of some tests giving bad results, when in fact the CDC's tests were giving bad results and much larger harms were on the way because of bottlenecked testing; and that abstract description should have applied to an effective and globally coordinated ban against gain-of-function research, which didn't happen.\n\n\n\n[Ngo][14:19]\nAlternatively: what could have led us to forecast that many countries will impose unprecedentedly severe lockdowns.\n\n\n\n\n[Yudkowsky][14:19][14:21][14:21]\nWell, I didn't! I didn't even realize that was an option! 
I thought Covid was just going to rip through everything.\n(Which, to be clear, it still may, and Delta arguably is in the more primitive tribal areas of the USA, as well as many other countries around the world that can't afford vaccines financially rather than epistemically.)\n\n\n\n\nBut there's a really really basic lesson here about the different style of \"sentences found in political history books\" rather than \"sentences produced by people imagining ways future politics could handle an issue successfully\".\n\n\n\n\nReality is so much worse than people imagining what might happen to handle an issue successfully.\n\n\n\n[Ngo][14:21][14:21][14:22]\nI might nudge us away from covid here, and towards the questions I asked before.\n\n\n\n\n\n\nThe question I asked earlier:\n\nIf the only two actors involved in AGI development were the US and the UK governments, how much safer (or less safe) would you think we were compared with a world in which the two actors are the US and Chinese governments? How about a world in which the US government was a decade ahead of everyone else in reaching AGI?\n\n\nThis being one.\n\n\n\n\n\n\n\"But still, it's not unreasonable to take this as the upper bound of attainable cooperation; it's been observed within the last 100 years.\" Is this also your upper bound conditional on a world that has experienced a century's worth of changes within a decade, and in which people are an order of magnitude wealthier than they currently are?\n\nAnd this being the other.\n\n\n\n\n[Yudkowsky][14:22]\n\nIs this also your upper bound conditional on a world that has experienced a century's worth of changes within a decade, and in which people are an order of magnitude wealthier than they currently are?\n\nI don't expect this to happen at all, or even come remotely close to happening; I expect AGI to kill everyone before self-driving cars are commercialized.\n\n\n\n[Yudkowsky][16:29]  (Nov. 14 follow-up comment)\n(This was incautiously put; maybe strike \"expect\" and put in \"would not be the least bit surprised if\" or \"would very tentatively guess that\".)\n\n\n\n[Ngo][14:23]\nah, I see\nOkay, maybe here's a different angle which I should have been using. What's the most impressive technology you expect to be commercialised before AGI kills everyone?\n\n\n\n\n[Yudkowsky][14:24]\n\nIf the only two actors involved in AGI development were the US and the UK governments, how much safer (or less safe) would you think we were compared with a world in which the two actors are the US and Chinese governments?\n\nVery hard to say; the UK is friendlier but less grown-up. We would obviously be VASTLY safer in any world where only two centralized actors (two effective decision processes) could ever possibly build AGI, though not safe / out of the woods / at over 50% survival probability.\n\nHow about a world in which the US government was a decade ahead of everyone else in reaching AGI?\n\nVastly safer and likewise impossibly miraculous, though again, not out of the woods at all / not close to 50% survival probability.\n\nWhat's the most impressive technology you expect to be commercialised before AGI kills everyone?\n\nThis is incredibly hard to predict. If I actually had to predict this for some reason I would probably talk to Gwern and Carl Shulman. In principle, there's nothing preventing me from knowing something about Go which lets me predict in 2014 that Go will probably fall in two years, but in practice I did not do that and I don't recall anybody else doing it either. 
It's really quite hard to figure out how much cognitive work a domain requires and how much work known AI technologies can scale to with more compute, let alone predict AI breakthroughs.\n\n\n\n[Ngo][14:27]\nI'd be happy with some very rough guesses\n\n\n\n\n[Yudkowsky][14:27]\nIf you want me to spin a scifi scenario, I would not be surprised to find online anime companions carrying on impressively humanlike conversations, because this is a kind of technology that can be deployed without major corporations signing on or regulatory approval.\n\n\n\n[Ngo][14:28]\nOkay, this is surprising; I expected something more advanced.\n\n\n\n\n[Yudkowsky][14:29]\nArguably AlphaFold 2 is already more advanced than that, along certain dimensions, but it's no coincidence that afaik people haven't really done much with AlphaFold 2 and it's made no visible impact on GDP.\nI expect GDP not to depart from previous trendlines before the world ends, would be a more general way of putting it.\n\n\n\n[Ngo][14:29]\nWhat's the most least impressive technology that your model strongly rules out happening before AGI kills us all?\n\n\n\n\n[Yudkowsky][14:30]\nyou mean least impressive?\n\n\n\n[Ngo][14:30]\noops, yes\nThat seems like a structurally easier question to answer\n\n\n\n\n[Yudkowsky][14:30]\n\"Most impressive\" is trivial. \"Dyson Spheres\" answers it.\nOr, for that matter, \"perpetual motion machines\".\n\n\n\n[Ngo][14:31]\nAh yes, I was thinking that Dyson spheres were a bit too prosaic\n\n\n\n\n[Yudkowsky][14:32]\nMy model mainly rules out that we get to certain points and then hang around there for 10 years while the technology gets perfected, commercialized, approved, adopted, ubiquitized enough to produce a visible trendline departure on the GDP graph; not so much various technologies themselves being initially demonstrated in a lab.\nI expect that the people who build AGI can build a self-driving car if they want to. 
Getting it approved and deployed before the world ends is quite another matter.\n\n\n\n[Ngo][14:33]\nOpenAI has commercialised GPT-3\n\n\n\n\n[Yudkowsky][14:33]\nHasn't produced much of a bump in GDP as yet.\n\n\n\n[Ngo][14:33]\nI wasn't asking about that, though\nI'm more interested in judging how hard you think it is for AIs to take over the world\n\n\n\n\n[Yudkowsky][14:34]\nI note that it seems to me like there is definitely a kind of thinking here, which, if told about GPT-3 five years ago, would talk in very serious tones about how much this technology ought to be predicted to shift GDP, and whether we could bet on that.\nBy \"take over the world\" do you mean \"turn the world into paperclips\" or \"produce 10% excess of world GDP over predicted trendlines\"?\n\n\n\n[Ngo][14:35]\nTurn world into paperclips\n\n\n\n\n[Yudkowsky][14:36]\nI expect this mainly happens as a result of superintelligence, which is way up in the stratosphere far above the minimum required cognitive capacities to get the job done?\nThe interesting question is about humans trying to deploy a corrigible AGI thinking in a restricted domain, trying to flip the gameboard / \"take over the world\" without full superintelligence?\nI'm actually not sure what you're trying to get at here.\n\n\n\n[Soares][14:37]\n(my guess, for the record, is that the crux Richard is attempting to drive for here, is centered more around something like \"will humanity spend a bunch of time in the regime where there are systems capable of dramatically increasing world GDP, and if not how can you be confident of that from here\")\n\n\n\n\n[Yudkowsky][14:38]\nThis is not the sort of thing I feel Confident about.\n\n\n\n[Yudkowsky][16:31]  (Nov. 14 follow-up comment)\n(My confidence here seems understated.  I am very pleasantly surprised if we spend 5 years hanging around with systems that can dramatically increase world GDP and those systems are actually being used for that.  There isn't one dramatic principle which prohibits that, so I'm not Confident, but it requires multiple nondramatic events to go not as I expect.)\n\n\n\n[Ngo][14:38]\nYeah, that's roughly what I'm going for. Or another way of putting it: we have some disagreements about the likelihood of humans being able to get an AI to do a pivotal act which saves the world. So I'm trying to get some estimates for what the hardest act you think humans can get an AI to do is.\n\n\n\n\n[Soares][14:39]\n(and that a difference here causes, eg, Richard to suspect the relevant geopolitics happen after a century of progress in 10y, everyone being suddenly much richer in real terms, and a couple of warning shots, whereas Eliezer expects the relevant geopolitics to happen the day after tomorrow, with \"realistic human-esque convos\" being the sort of thing we get instead of warning shots)\n\n\n\n\n[Ngo: ]\n\n\n\n\n\n\n\n\n[Yudkowsky][14:40]\nI mostly do not expect pseudo-powerful but non-scalable AI powerful enough to increase GDP, hanging around for a while. 
But if it happens then I don't feel I get to yell \"what happened?\" at reality, because there's an obvious avenue for it to happen: something GDP-increasing proved tractable to non-deeply-general AI systems.\nwhere GPT-3 is \"not deeply general\"\n\n\n\n[Ngo][14:40]\nAgain, I didn't ask about GDP increases, I asked about impressive acts (in order to separate out the effects of AI capabilities from regulatory effects, people-having-AI-but-not-using-it, etc).\nWhere you can use whatever metric of impressiveness you think is reasonable.\n\n\n\n\n[Yudkowsky][14:42]\nso there's two questions here, one of which is something like, \"what is the most impressive thing you can do while still being able to align stuff and make it corrigible\", and one of which is \"if there's an incorrigible AI whose deeds are being exhibited by fools, what impressive things might it do short of ending the world\".\nand these are both problems that are hard for the same reason I did not predict in 2014 that Go would fall in 2016; it can in fact be quite hard – even with a domain as fully lawful and known as Go – to figure out which problems will fall to which level of cognitive capacity.\n\n\n\n[Soares][14:43]\nNate's attempted rephrasing: EY's model might not be confident that there's not big GDP boosts, but it does seem pretty confident that there isn't some \"half-capable\" window between the shallow-pattern-memorizer stuff and the scary-laserlike-consequentialist stuff, and in particular Eliezer seems confident humanity won't slowly traverse that capability regime\n\n\n\n\n[Yudkowsky][14:43]\nthat's… allowed? I don't get to yell at reality if that happens?\n\n\n\n[Soares][14:44]\nand (shakier extrapolation), that regime is where a bunch of Richard's hope lies (eg, in the beginning of that regime we get to learn how to do practical alignment, and also the world can perhaps be saved midway through that regime using non-laserlike-systems)\n\n\n\n\n[Ngo: ]\n\n\n\n\n\n\n\n\n[Yudkowsky][14:45]\nso here's an example of a thing I don't think you can do without the world ending: get an AI to build a nanosystem or biosystem which can synthesize two strawberries identical down to the cellular but not molecular level, and put them on a plate\nthis is why I use this capability as the definition of a \"powerful AI\" when I talk about \"powerful AIs\" being hard to align, if I don't want to start by explicitly arguing about pivotal acts\nthis, I think, is going to end up being first doable using a laserlike world-ending system\nso even if there's a way to do it with no lasers, that happens later and the world ends before then\n\n\n\n[Ngo][14:47]\nOkay, that's useful.\n\n\n\n\n[Yudkowsky][14:48]\nit feels like the critical bar there is something like \"invent a whole engineering discipline over a domain where you can't run lots of cheap simulations in full detail\"\n\n\n\n[Ngo][14:49]\n(Meta note: let's wrap up in 10 mins? 
I'm starting to feel a bit sleepy.)\n\n\n\n\n[Yudkowsky: ]\n[Soares: ]\n\n\n\n\nThis seems like a pretty reasonable bar\nLet me think a bit about where to go from that\nWhile I'm doing so, since this question of takeoff speeds seems like an important one, I'm wondering if you could gesture at your biggest disagreement with this post: https://sideways-view.com/2018/02/24/takeoff-speeds/\n\n\n\n\n[Yudkowsky][14:51]\nOh, also in terms of scifi possibilities, I can imagine seeing 5% GDP loss because text transformers successfully scaled to automatically filing lawsuits and environmental impact objections.\nMy read on the entire modern world is that GDP is primarily constrained by bureaucratic sclerosis rather than by where the technological frontiers lie, so AI ends up impacting GDP mainly insofar as it allows new ways to bypass regulatory constraints, rather than insofar as it allows new technological capabilities. I expect a sudden transition to paperclips, not just because of how fast I expect cognitive capacities to scale over time, but because nanomachines eating the biosphere bypass regulatory constraints, whereas earlier phases of AI will not be advantaged relative to all the other things we have the technological capacity to do but which aren't legal to do.\n\n\n\n[Shah][12:13]  (Sep. 21 follow-up comment)\n\nMy read on the entire modern world is that GDP is primarily constrained by bureaucratic sclerosis rather than by where the technological frontiers lie\n\nThis is a fair point and updates me somewhat towards fast takeoff as operationalized by Paul, though I'm not sure how much it updates me on p(doom).\nEr, wait, really fast takeoff as operationalized by Paul makes less sense as a thing to be looking for — presumably we die before any 1 year doubling. Whatever, it updates me somewhat towards \"less deployed stuff before scary stuff is around\"\n\n\n\n\n[Ngo][14:56]\nAh, interesting. What are the two or three main things in that category?\n\n\n\n\n[Yudkowsky][14:57]\nmRNA vaccines, building houses, building cities? Not sure what you mean there.\n\n\n\n[Ngo][14:57]\n\"things we have the technological capacity to do but which aren't legal to do\"\n\n\n\n\n[Yudkowsky][14:58][15:00]\nEg, you might imagine, \"What if AIs were smart enough to build houses, wouldn't that raise GDP?\" and the answer is that we already have the pure technology to manufacture homes cheaply, but the upright-stick-construction industry already successfully lobbied to get it banned as it was starting to develop, by adding on various constraints; so the question is not \"Is AI advantaged in doing this?\" but \"Is AI advantaged at bypassing regulatory constraints on doing this?\" Not to mention all the other ways that building a house in an existing city is illegal, or that it's been made difficult to start a new city, etcetera.\n\n\n\n\n\"What if AIs could design a new vaccine in a day?\" We can already do that. It's no longer the relevant constraint. 
Bureaucracy is the process-limiting constraint.\nI would – looking in again at the Sideways View essay on takeoff speeds – wonder whether it occurred to you, Richard, to ask about what detailed predictions all the theories there had made.\nAfter all, a lot of it is spending time explaining why the theories there shouldn't be expected to retrodict even the data points we have about progress rates over hominid evolution.\nSurely you, being the evenhanded judge that you are, must have been reading through that document saying, \"My goodness, this is even worse than retrodicting a few data points!\"\nA lot of why I have a bad taste in my mouth about certain classes of epistemological criticism is my sense that certain sentences tend to be uttered on incredibly selective occasions.\n\n\n\n[Ngo][14:59][15:06]\nSome meta thoughts: I now feel like I have a pretty reasonable broad outline of Eliezer's views. I haven't yet changed my mind much, but plausibly mostly because I haven't taken the time to internalise those views; once I ruminate on them a bunch, I expect my opinions will shift (uncertain how far; unlikely to be most of the way).\n\n\n\n\n\nMeta thoughts (continued): Insofar as a strong disagreement remains after that (which it probably will) I feel pretty uncertain about what would resolve it. Best guess is that I should write up some longer essays that try to tie a bunch of disparate strands together.\nNear the end it seemed like the crux, to a surprising extent, hinged on this question of takeoff speeds. So the other thing which seems like it'd plausibly help a lot is Eliezer writing up a longer version of his response to Paul's Takeoff Speeds post.\n(Just as a brief comment, I don't find the \"bureaucratic sclerosis\" explanation very compelling. I do agree that regulatory barriers are a huge problem, but they still don't seem nearly severe enough to cause a fast takeoff. I don't have strong arguments for that position right now though.)\n\n\n\n\n[Soares][15:12]\nThis seems like a fine point to call it!\nSome wrap-up notes\n\nI had the impression this round was a bit more frustrating than last rounds. Thanks all for sticking with things \nI have a sense that Richard was making a couple points that didn't quite land. I plan to attempt to articulate versions of them myself in the interim.\nRichard noted he had a sense we're in decreasing return territory. My own sense is that it's worth having at least one more discussion in this format about specific non-consequentialist plans Richard may have hope in, but I also think we shouldn't plow forward in spite of things feeling less useful, and I'm open to various alternative proposals.\n\nIn particular, it seems maybe plausible to me we should have a pause for some offline write-ups, such as Richard digesting a bit and then writing up some of his current state, and/or Eliezer writing up some object-level response to the takeoff speed post above?\n\n\n\n\n[Ngo: ]\n\n\n\n\n(I also could plausibly give that a go myself, either from my own models or from my model of Eliezer's model which he could then correct)\n\n\n\n\n[Ngo][15:15]\nThanks Nate!\nI endorse the idea of offline writeups\n\n\n\n\n[Soares][15:17]\nCool. 
Then I claim we are adjourned for the day, and Richard has the ball on digesting & doing a write-up from his end, and I have the ball on both writing up my attempts to articulate some points, and on either Eliezer or I writing some takes on timelines or something.\n(And we can coordinate our next discussion, if any, via email, once the write-ups are in shape.)\n\n\n\n\n[Yudkowsky][15:18]\nI also have a sense that there's more to be said about specifics of govt stuff or specifics of \"ways to bypass consequentialism\" and that I wish we could spend at least one session trying to stick to concrete details only\nEven if it's not where cruxes ultimately lie, often you learn more about the abstract by talking about the concrete than by talking about the abstract.\n\n\n\n[Soares][15:22]\n(I, too, would be enthusiastic to see such a discussion, and Richard, if you find yourself feeling enthusiastic or at least not-despairing about it, I'd happily moderate.)\n\n\n\n\n[Yudkowsky][15:37]\n(I'm a little surprised about how poorly I did at staying concrete after saying that aloud, and would nominate Nate to take on the stern duty of blowing the whistle at myself or at both of us.)\n\n\n \n\nThe post Ngo and Yudkowsky on AI capability gains appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Ngo and Yudkowsky on AI capability gains", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=4", "id": "620e62545b3e2547430d2345efe383ed"} {"text": "Ngo and Yudkowsky on alignment difficulty\n\n\n \nThis post is the first in a series of transcribed Discord conversations between Richard Ngo and Eliezer Yudkowsky, moderated by Nate Soares. We've also added Richard and Nate's running summaries of the conversation (and others' replies) from Google Docs.\nLater conversation participants include Ajeya Cotra, Beth Barnes, Carl Shulman, Holden Karnofsky, Jaan Tallinn, Paul Christiano, Rob Bensinger, and Rohin Shah.\nThe transcripts are a complete record of several Discord channels MIRI made for discussion. We tried to edit the transcripts as little as possible, other than to fix typos and a handful of confusingly-worded sentences, to add some paragraph breaks, and to add referenced figures and links. We didn't end up redacting any substantive content, other than the names of people who would prefer not to be cited. We swapped the order of some chat messages for clarity and conversational flow (indicated with extra timestamps), and in some cases combined logs where the conversation switched channels.\n \nColor key:\n\n\n\n\n Chat by Richard and Eliezer \n Other chat \n Google Doc content \n Inline comments \n\n\n\n\n \n0. Prefatory comments\n \n\n[Yudkowsky][8:32]\n(At Rob's request I'll try to keep this brief, but this was an experimental format and some issues cropped up that seem large enough to deserve notes.)\nEspecially when coming in to the early parts of this dialogue, I had some backed-up hypotheses about \"What might be the main sticking point? and how can I address that?\" which from the standpoint of a pure dialogue might seem to be causing me to go on digressions, relative to if I was just trying to answer Richard's own questions.  On reading the dialogue, I notice that this looks evasive or like point-missing, like I'm weirdly not just directly answering Richard's questions.\nOften the questions are answered later, or at least I think they are, though it may not be in the first segment of the dialogue.  
But the larger phenomenon is that I came in with some things I wanted to say, and Richard came in asking questions, and there was a minor accidental mismatch there.  It would have looked better if we'd both stated positions first without question marks, say, or if I'd just confined myself to answering questions from Richard.  (This is not a huge catastrophe, but it's something for the reader to keep in mind as a minor hiccup that showed up in the early parts of experimenting with this new format.)\n\n\n\n[Yudkowsky][8:32]\n(Prompted by some later stumbles in attempts to summarize this dialogue.  Summaries seem plausibly a major mode of propagation for a sprawling dialogue like this, and the following request seems like it needs to be very prominent to work – embedded requests later on didn't work.)\nPlease don't summarize this dialogue by saying, \"and so Eliezer's MAIN idea is that\" or \"and then Eliezer thinks THE KEY POINT is that\" or \"the PRIMARY argument is that\" etcetera.  From my perspective, everybody comes in with a different set of sticking points versus things they see as obvious, and the conversation I have changes drastically depending on that.  In the old days this used to be the Orthogonality Thesis, Instrumental Convergence, and superintelligence being a possible thing at all; today most OpenPhil-adjacent folks have other sticking points instead.\nPlease transform:\n\n\"Eliezer's main reply is…\" -> \"Eliezer replied that…\"\n\"Eliezer thinks the key point is…\" -> \"Eliezer's point in response was…\"\n\"Eliezer thinks a major issue is…\"  -> \"Eliezer replied that one issue is…\"\n\"Eliezer's primary argument against this is…\" -> \"Eliezer tried the counterargument that…\"\n\"Eliezer's main scenario for this is…\" -> \"In a conversation in September of 2021, Eliezer sketched a hypothetical where…\"\n\nNote also that the transformed statements say what you observed, whereas the untransformed statements are (often incorrect) inferences about my latent state of mind.\n(Though \"distinguishing relatively unreliable inference from more reliable observation\" is not necessarily the key idea here or the one big reason I'm asking for this.  That's just one point I tried making – one argument that I hope might help drive home the larger thesis.)\n\n\n \n1. September 5 conversation\n \n1.1. Deep vs. shallow problem-solving patterns\n \n\n[Ngo][11:00]\nHi all! Looking forward to the discussion.\n\n\n\n\n[Yudkowsky][11:01]\nHi and welcome all.  My name is Eliezer and I think alignment is really actually quite extremely difficult.  Some people seem to not think this!  It's an important issue so ought to be resolved somehow, which we can hopefully fully do today.  (I will however want to take a break after the first 90 minutes, if it goes that far and if Ngo is in sleep-cycle shape to continue past that.)\n\n\n\n[Ngo][11:02]\nA break in 90 minutes or so sounds good.\nHere's one way to kick things off: I agree that humans trying to align arbitrarily capable AIs seems very difficult. 
One reason that I'm more optimistic (or at least, not confident that we'll have to face the full very difficult version of the problem) is that at a certain point AIs will be doing most of the work.\nWhen you talk about alignment being difficult, what types of AIs are you thinking about aligning?\n\n\n\n\n[Yudkowsky][11:04]\nOn my model of the Other Person, a lot of times when somebody thinks alignment shouldn't be that hard, they think there's some particular thing you can do to align an AGI, which isn't that hard, and their model is missing one of the foundational difficulties for why you can't do (easily or at all) one step of their procedure.  So one of my own conversational processes might be to poke around looking for a step that the other person doesn't realize is hard.  That said, I'll try to directly answer your own question first.\n\n\n\n[Ngo][11:07]\nI don't think I'm confident that there's any particular thing you can do to align an AGI. Instead I feel fairly uncertain over a broad range of possibilities for how hard the problem turns out to be.\nAnd on some of the most important variables, it seems like evidence from the last decade pushes towards updating that the problem will be easier.\n\n\n\n\n[Yudkowsky][11:09]\nI think that after AGI becomes possible at all and then possible to scale to dangerously superhuman levels, there will be, in the best-case scenario where a lot of other social difficulties got resolved, a 3-month to 2-year period where only a very few actors have AGI, meaning that it was socially possible for those few actors to decide to not just scale it to where it automatically destroys the world.\nDuring this step, if humanity is to survive, somebody has to perform some feat that causes the world to not be destroyed in 3 months or 2 years when too many actors have access to AGI code that will destroy the world if its intelligence dial is turned up. This requires that the first actor or actors to build AGI, be able to do something with that AGI which prevents the world from being destroyed; if it didn't require superintelligence, we could go do that thing right now, but no such human-doable act apparently exists so far as I can tell.\nSo we want the least dangerous, most easily aligned thing-to-do-with-an-AGI, but it does have to be a pretty powerful act to prevent the automatic destruction of Earth after 3 months or 2 years. It has to \"flip the gameboard\" rather than letting the suicidal game play out. We need to align the AGI that performs this pivotal act, to perform that pivotal act without killing everybody.\nParenthetically, no act powerful enough and gameboard-flipping enough to qualify is inside the Overton Window of politics, or possibly even of effective altruism, which presents a separate social problem. I usually dodge around this problem by picking an exemplar act which is powerful enough to actually flip the gameboard, but not the most alignable act because it would require way too many aligned details: Build self-replicating open-air nanosystems and use them (only) to melt all GPUs.\nSince any such nanosystems would have to operate in the full open world containing lots of complicated details, this would require tons and tons of alignment work, is not the pivotal act easiest to align, and we should do some other thing instead. But the other thing I have in mind is also outside the Overton Window, just like this is. 
So I use \"melt all GPUs\" to talk about the requisite power level and the Overton Window problem level, both of which seem around the right levels to me, but the actual thing I have in mind is more alignable; and this way, I can reply to anyone who says \"How dare you?!\" by saying \"Don't worry, I don't actually plan on doing that.\"\n\n\n\n[Ngo][11:14]\nOne way that we could take this discussion is by discussing the pivotal act \"make progress on the alignment problem faster than humans can\".\n\n\n\n\n[Yudkowsky][11:15]\nThis sounds to me like it requires extreme levels of alignment and operating in extremely dangerous regimes, such that, if you could do that, it would seem much more sensible to do some other pivotal act first, using a lower level of alignment tech.\n\n\n\n[Ngo][11:16]\nOkay, this seems like a crux on my end.\n\n\n\n\n[Yudkowsky][11:16]\nIn particular, I would hope that – in unlikely cases where we survive at all – we were able to survive by operating a superintelligence only in the lethally dangerous, but still less dangerous, regime of \"engineering nanosystems\".\nWhereas \"solve alignment for us\" seems to require operating in the even more dangerous regimes of \"write AI code for us\" and \"model human psychology in tremendous detail\".\n\n\n\n[Ngo][11:17]\nWhat makes these regimes so dangerous? Is it that it's very hard for humans to exercise oversight?\nOne thing that makes these regimes seem less dangerous to me is that they're broadly in the domain of \"solving intellectual problems\" rather than \"achieving outcomes in the world\".\n\n\n\n\n[Yudkowsky][11:19][11:21]\nEvery AI output effectuates outcomes in the world.  If you have a powerful unaligned mind hooked up to outputs that can start causal chains that effectuate dangerous things, it doesn't matter whether the comments on the code say \"intellectual problems\" or not.\nThe danger of \"solving an intellectual problem\" is when it requires a powerful mind to think about domains that, when solved, render very cognitively accessible strategies that can do dangerous things.\n\n\n\n\nI expect the first alignment solution you can actually deploy in real life, in the unlikely event we get a solution at all, looks like 98% \"don't think about all these topics that we do not absolutely need and are adjacent to the capability to easily invent very dangerous outputs\" and 2% \"actually think about this dangerous topic but please don't come up with a strategy inside it that kills us\".\n\n\n\n[Ngo][11:21][11:22]\nLet me try and be more precise about the distinction. 
It seems to me that systems which have been primarily trained to make predictions about the world would by default lack a lot of the cognitive machinery which humans use to take actions which pursue our goals.\n\n\n\n\n\nPerhaps another way of phrasing my point is something like: it doesn't seem implausible to me that we build AIs that are significantly more intelligent (in the sense of being able to understand the world) than humans, but significantly less agentic.\nIs this a crux for you?\n(obviously \"agentic\" is quite underspecified here, so maybe it'd be useful to dig into that first)\n\n\n\n\n[Yudkowsky][11:27][11:33]\nI would certainly have learned very new and very exciting facts about intelligence, facts which indeed contradict my present model of how intelligences liable to be discovered by present research paradigms work, if you showed me… how can I put this in a properly general way… that problems I thought were about searching for states that get fed into a result function and then a result-scoring function, such that the input gets an output with a high score, were in fact not about search problems like that. I have sometimes given more specific names to this problem setup, but I think people have become confused by the terms I usually use, which is why I'm dancing around them.\nIn particular, just as I have a model of the Other Person's Beliefs in which they think alignment is easy because they don't know about difficulties I see as very deep and fundamental and hard to avoid, I also have a model in which people think \"why not just build an AI which does X but not Y?\" because they don't realize what X and Y have in common, which is something that draws deeply on having deep models of intelligence. And it is hard to convey this deep theoretical grasp. But you can also see powerful practical hints that these things are much more correlated than, eg, Robin Hanson was imagining during the FOOM debate, because Robin did not think something like GPT-3 should exist; Robin thought you should need to train lots of specific domains that didn't generalize. I argued then with Robin that it was something of a hint that humans had visual cortex and cerebellar cortex but not Car Design Cortex, in order to design cars. Then in real life, it proved that reality was far to the Eliezer side of Eliezer on the Eliezer-Robin axis, and things like GPT-3 were built with less architectural complexity and generalized more than I was arguing to Robin that complex architectures should generalize over domains.\n\n\n\n\nThe metaphor I sometimes use is that it is very hard to build a system that drives cars painted red, but is not at all adjacent to a system that could, with a few alterations, prove to be very good at driving a car painted blue.  The \"drive a red car\" problem and the \"drive a blue car\" problem have too much in common.  You can maybe ask, \"Align a system so that it has the capability to drive red cars, but refuses to drive blue cars.\"  You can't make a system that is very good at driving red-painted cars, but lacks the basic capability to drive blue-painted cars because you never trained it on that.  The patterns found by gradient descent, by genetic algorithms, or by other plausible methods of optimization, for driving red cars, would be patterns very close to the ones needed to drive blue cars.  
When you optimize for red cars you get the blue car capability whether you like it or not.\n\n\n\n[Ngo][11:32]\nDoes your model of intelligence rule out building AIs which make dramatic progress in mathematics without killing us all?\n\n\n\n\n[Yudkowsky][11:34][11:39]\nIf it were possible to perform some pivotal act that saved the world with an AI that just made progress on proving mathematical theorems, without, eg, needing to explain those theorems to humans, I'd be extremely interested in that as a potential pivotal act. We wouldn't be out of the woods, and I wouldn't actually know how to build an AI like that without killing everybody, but it would immediately trump everything else as the obvious line of research to pursue.\nParenthetically, there is very very little which my model of intelligence rules out. I think we all die because we cannot do certain dangerous things correctly, on the very first try in the dangerous regimes where one mistake kills you, and do them before proliferation of much easier technologies kills us. If you have the Textbook From 100 Years In The Future that gives the simple robust solutions for everything, that actually work, you can write a superintelligence that thinks 2 + 2 = 5 because the Textbook gives the methods for doing that which are simple and actually work in practice in real life.\n\n\n\n\n(The Textbook has the equivalent of \"use ReLUs instead of sigmoids\" everywhere, and avoids all the clever-sounding things that will work at subhuman levels and blow up when you run them at superintelligent levels.)\n\n\n\n[Ngo][11:36][11:40]\nHmm, so suppose we train an AI to prove mathematical theorems when given them, perhaps via some sort of adversarial setter-solver training process.\nBy default I have the intuition that this AI could become extremely good at proving theorems – far beyond human level – without having goals about real-world outcomes.\n\n\n\n\n\nIt seems to me that in your model of intelligence, being able to do tasks like mathematics is closely coupled with trying to achieve real-world outcomes. But I'd actually take GPT-3 as some evidence against this position (although still evidence in favour of your position over Hanson's), since it seems able to do a bunch of reasoning tasks while still not being very agentic.\nThere's some alternative world where we weren't able to train language models to do reasoning tasks without first training them to perform tasks in complex RL environments, and in that world I'd be significantly less optimistic.\n\n\n\n\n[Yudkowsky][11:41]\nI put to you that there is a predictable bias in your estimates, where you don't know about the Deep Stuff that is required to prove theorems, so you imagine that certain cognitive capabilities are more disjoint than they actually are.  If you knew about the things that humans are using to reuse their reasoning about chipped handaxes and other humans, to prove math theorems, you would see it as more plausible that proving math theorems would generalize to chipping handaxes and manipulating humans.\nGPT-3 is a… complicated story, on my view of it and intelligence.  We're looking at an interaction between tons and tons of memorized shallow patterns.  GPT-3 is very unlike the way that natural selection built humans.\n\n\n\n[Ngo][11:44]\nI agree with that last point. 
But this is also one of the reasons that I previously claimed that AIs could be more intelligent than humans while being less agentic, because there are systematic differences between the way in which natural selection built humans, and the way in which we'll train AGIs.\n\n\n\n\n[Yudkowsky][11:45]\nMy current suspicion is that Stack More Layers alone is not going to take us to GPT-6 which is a true AGI; and this is because of the way that GPT-3 is, in your own terminology, \"not agentic\", and which is, in my terminology, not having gradient descent on GPT-3 run across sufficiently deep problem-solving patterns.\n\n\n\n[Ngo][11:46]\nOkay, that helps me understand your position better.\nSo here's one important difference between humans and neural networks: humans face the genomic bottleneck which means that each individual has to rederive all the knowledge about the world that their parents already had. If this genetic bottleneck hadn't been so tight, then individual humans would have been significantly less capable of performing novel tasks.\n\n\n\n\n[Yudkowsky][11:50]\nI agree.\n\n\n\n[Ngo][11:50]\nIn my terminology, this is a reason that humans are \"more agentic\" than we otherwise would have been.\n\n\n\n\n[Yudkowsky][11:50]\nThis seems indisputable.\n\n\n\n[Ngo][11:51]\nAnother important difference: humans were trained in environments where we had to run around surviving all day, rather than solving maths problems etc.\n\n\n\n\n[Yudkowsky][11:51]\nI continue to nod.\n\n\n\n[Ngo][11:52]\nSupposing I agree that reaching a certain level of intelligence will require AIs with the \"deep problem-solving patterns\" you talk about, which lead AIs to try to achieve real-world goals. It still seems to me that there's likely a lot of space between that level of intelligence, and human intelligence.\nAnd if that's the case, then we could build AIs which help us solve the alignment problem before we build AIs which instantiate sufficiently deep problem-solving patterns that they decide to take over the world.\nNor does it seem like the reason humans want to take over the world is because of a deep fact about our intelligence. It seems to me that humans want to take over the world mainly because that's very similar to things we evolved to do (like taking over our tribe).\n\n\n\n\n[Yudkowsky][11:57]\nSo here's the part that I agree with: If there were one theorem only mildly far out of human reach, like proving the ABC Conjecture (if you think it hasn't already been proven), and providing a machine-readable proof of this theorem would immediately save the world – say, aliens will give us an aligned superintelligence, as soon as we provide them with this machine-readable proof – then there would exist a plausible though not certain road to saving the world, which would be to try to build a shallow mind that proved the ABC Conjecture by memorizing tons of relatively shallow patterns for mathematical proofs learned through self-play; without that system ever abstracting math as deeply as humans do, but the sheer width of memory and sheer depth of search sufficing to do the job. I am not sure, to be clear, that this would work. 
But my model of intelligence does not rule it out.\n\n\n\n[Ngo][11:58]\n(I'm actually thinking of a mind which understands maths more deeply than humans – but perhaps only understands maths, or perhaps also a range of other sciences better than humans.)\n\n\n\n\n[Yudkowsky][12:00]\nParts I disagree with: That \"help us solve alignment\" bears any significant overlap with \"provide us a machine-readable proof of the ABC Conjecture without thinking too deeply about it\". That humans want to take over the world only because it resembles things we evolved to do.\n\n\n\n[Ngo][12:01]\nI definitely agree that humans don't only want to take over the world because it resembles things we evolved to do.\n\n\n\n\n[Yudkowsky][12:02]\nAlas, eliminating 5 reasons why something would go wrong doesn't help much if there's 2 remaining reasons something would go wrong that are much harder to eliminate!\n\n\n\n[Ngo][12:02]\nBut if we imagine having a human-level intelligence which hadn't evolved primarily to do things that reasonably closely resembled taking over the world, then I expect that we could ask that intelligence questions in a fairly safe way.\nAnd that's also true for an intelligence that is noticeably above human level.\nSo one question is: how far above human level could we get before a system which has only been trained to do things like answer questions and understand the world will decide to take over the world?\n\n\n\n\n[Yudkowsky][12:04]\nI think this is one of the very rare cases where the intelligence difference between \"village idiot\" and \"Einstein\", which I'd usually see as very narrow, makes a structural difference! I think you can get some outputs from a village-idiot-level AGI, which got there by training on domains exclusively like math, and this will proooobably not destroy the world (if you were right about that, about what was going on inside). I have more concern about the Einstein level.\n\n\n\n[Ngo][12:05]\nLet's focus on the Einstein level then.\nHuman brains have been optimised very little for doing science.\nThis suggests that building an AI which is Einstein-level at doing science is significantly easier than building an AI which is Einstein-level at taking over the world (or other things which humans evolved to do).\n\n\n\n\n[Yudkowsky][12:08]\nI think there's a certain broad sense in which I agree with the literal truth of what you just said. You will systematically overestimate how much easier, or how far you can push the science part without getting the taking-over-the-world part, for as long as your model is ignorant of what they have in common.\n\n\n\n[Ngo][12:08]\nMaybe this is a good time to dig into the details of what they have in common, then.\n\n\n\n\n[Yudkowsky][12:09][12:11]][12:13]\nI feel like I haven't had much luck with trying to explain that on previous occasions. 
Not to you, to others too.\nThere are shallow topics like why p-zombies can't be real and how quantum mechanics works and why science ought to be using likelihood functions instead of p-values, and I can barely explain those to some people, but then there are some things that are apparently much harder to explain than that and which defeat my abilities as an explainer.\n\n\n\n\nThat's why I've been trying to point out that, even if you don't know the specifics, there's an estimation bias that you can realize should exist in principle.\n\n\n\n\nOf course, I also haven't had much luck in saying to people, \"Well, even if you don't know the truth about X that would let you see Y, can you not see by abstract reasoning that knowing any truth about X would predictably cause you to update in the direction of Y\" – people don't seem to actually internalize that much either. Not you, other discussions.\n\n\n\n[Ngo][12:10][12:11][12:13]\nMakes sense. Are there ways that I could try to make this easier? E.g. I could do my best to explain what I think your position is.\nGiven what you've said I'm not optimistic about this helping much.\n\n\n\n\n\nBut insofar as this is the key set of intuitions which has been informing your responses, it seems worth a shot.\nAnother approach would be to focus on our predictions for how AI capabilities will play out over the next few years.\n\n\n\n\n\nI take your point about my estimation bias. To me it feels like there's also a bias going the other way, which is that as long as we don't know the mechanisms by which different human capabilities work, we'll tend to lump them together as one thing.\n\n\n\n\n[Yudkowsky][12:14]\nYup. If you didn't know about visual cortex and auditory cortex, or about eyes and ears, you would assume much more that any sentience ought to both see and hear.\n\n\n\n[Ngo][12:16]\nSo then my position is something like: human pursuit of goals is driven by emotions and reward signals which are deeply evolutionarily ingrained, and without those we'd be much safer but not that much worse at pattern recognition.\n\n\n\n\n[Yudkowsky][12:17]\nIf there's a pivotal act you can get just by supreme acts of pattern recognition, that's right up there with \"pivotal act composed solely of math\" for things that would obviously instantly become the prime direction of research.\n\n\n\n[Ngo][12:18]\nTo me it seems like maths is much more about pattern recognition than, say, being a CEO. Being a CEO requires coherence over long periods of time; long-term memory; motivation; metacognition; etc.\n\n\n\n\n[Yudkowsky][12:18][12:23]\n(One occasionally-argued line of research can be summarized from a certain standpoint as \"how about a pivotal act composed entirely of predicting text\" and to this my reply is \"you're trying to get fully general AGI capabilities by predicting text that is about deep / 'agentic' reasoning, and that doesn't actually help\".)\nHuman math is very much about goals. People want to prove subtheorems on the way to proving theorems. We might be able to make a different kind of mathematician that works more like GPT-3 in the dangerously inscrutable parts that are all noninspectable vectors of floating-point numbers, but even there you'd need some Alpha-Zero-like outer framework to supply the direction of search.\n\n\n\n\nThat outer framework might be able to be powerful enough without being reflective, though. So it would plausibly be much easier to build a mathematician that was capable of superhuman formal theorem-proving but not agentic. 
The reality of the world might tell us \"lolnope\" but my model of intelligence doesn't mandate that. That's why, if you gave me a pivotal act composed entirely of \"output a machine-readable proof of this theorem and the world is saved\", I would pivot there! It actually does seem like it would be a lot easier!\n\n\n\n[Ngo][12:21][12:25]\nOkay, so if I attempt to rephrase your argument:\n\n\n\n\n\nYour position: There's a set of fundamental similarities between tasks like doing maths, doing alignment research, and taking over the world. In all of these cases, agents based on techniques similar to modern ML which are very good at them will need to make use of deep problem-solving patterns which include goal-oriented reasoning. So while it's possible to beat humans at some of these tasks without those core competencies, people usually overestimate the extent to which that's possible.\n\n\n\n\n[Yudkowsky][12:25]\nRemember, a lot of my concern is about what happens first, especially if it happens soon enough that future AGI bears any resemblance whatsoever to modern ML; not about what can be done in principle.\n\n\n\n[Soares][12:26]\n(Note: it's been 85 min, and we're planning to take a break at 90min, so this seems like a good point for a little bit more clarifying back-and-forth on Richard's summary before a break.)\n\n\n\n\n[Ngo][12:26]\nI'll edit to say \"plausible for ML techniques\"?\n(and \"extent to which that's plausible\")\n\n\n\n\n[Yudkowsky][12:28]\nI think that obvious-to-me future outgrowths of modern ML paradigms are extremely liable to, if they can learn how to do sufficiently superhuman X, generalize to taking over the world. How fast this happens does depend on X. It would plausibly happen relatively slower (at higher levels) with theorem-proving as the X, and with architectures that carefully stuck to gradient-descent-memorization over shallow network architectures to do a pattern-recognition part with search factored out (sort of, this is not generally safe, this is not a general formula for safe things!); rather than imposing anything like the genetic bottleneck you validly pointed out as a reason why humans generalize. Profitable X, and all X I can think of that would actually save the world, seem much more problematic.\n\n\n\n[Ngo][12:30]\nOkay, happy to take a break here.\n\n\n\n\n[Soares][12:30]\nGreat timing!\n\n\n\n\n[Ngo][12:30]\nWe can do a bit of meta discussion afterwards; my initial instinct is to push on the question of how similar Eliezer thinks alignment research is to theorem-proving.\n\n\n\n\n[Yudkowsky][12:30]\nYup. This is my lunch break (actually my first-food-of-day break on a 600-calorie diet) so I can be back in 45min if you're still up for that.\n\n\n\n[Ngo][12:31]\nSure.\nAlso, if any of the spectators are reading in real time, and have suggestions or comments, I'd be interested in hearing them.\n\n\n\n\n[Yudkowsky][12:31]\nI'm also cheerful about spectators posting suggestions or comments during the break.\n\n\n\n[Soares][12:32]\nSounds good. I declare us on a break for 45min, at which point we'll reconvene (for another 90, by default).\nFloor's open to suggestions & commentary.\n\n\n\n \n1.2. 
Requirements for science\n \n\n[Yudkowsky][12:50]\nI seem to be done early if people (mainly Richard) want to resume in 10min (30m break)\n\n\n\n[Ngo][12:51]\nYepp, happy to do so\n\n\n\n\n[Soares][12:57]\nSome quick commentary from me:\n\nIt seems to me like we're exploring a crux in the vicinity of \"should we expect that systems capable of executing a pivotal act would, by default in lieu of significant technical alignment effort, be using their outputs to optimize the future\".\nI'm curious whether you two agree that this is a crux (but plz don't get side-tracked answering me).\nThe general discussion seems to be going well to me.\n\nIn particular, huzzah for careful and articulate efforts to zero in on cruxes.\n\n\n\n\n\n\n\n[Ngo][13:00]\nI think that's a crux for the specific pivotal act of \"doing better alignment research\", and maybe some other pivotal acts, but not all (or necessarily most) of them.\n\n\n\n\n[Yudkowsky][13:01]\nI should also say out loud that I've been working a bit with Ajeya on making an attempt to convey the intuitions behind there being deep patterns that generalize and are liable to be learned, which covered a bunch of ground, taught me how much ground there was, and made me relatively more reluctant to try to re-cover the same ground in this modality.\n\n\n\n[Ngo][13:02]\nGoing forward, a couple of things I'd like to ask Eliezer about:\n\nIn what ways are the tasks that are most useful for alignment similar or different to proving mathematical theorems (which we agreed might generalise relatively slowly to taking over the world)?\nWhat are the deep problem-solving patterns underlying these tasks?\nCan you summarise my position?\n\nI was going to say that I was most optimistic about #2 in order to get these ideas into a public format\nBut if that's going to happen anyway based on Ajeya's work, then that seems less important\n\n\n\n\n[Yudkowsky][13:03]\nI could still try briefly and see what happens.\n\n\n\n[Ngo][13:03]\nThat seems valuable to me, if you're up for it.\nAt the same time, I'll try to summarise some of my own intuitions about intelligence which I expect to be relevant.\n\n\n\n\n[Yudkowsky][13:04]\nI'm not sure I could summarize your position in a non-straw way. To me there's a huge visible distance between \"solve alignment for us\" and \"output machine-readable proofs of theorems\" where I can't give a good account of why you think talking about the latter would tell us much about the former. I don't know what other pivotal act you think might be easier.\n\n\n\n[Ngo][13:06]\nI see. I was considering \"solving scientific problems\" as an alternative to \"proving theorems\", with alignment being one (particularly hard) example of a scientific problem.\nBut decided to start by discussing theorem-proving since it seemed like a clearer-cut case.\n\n\n\n\n[Yudkowsky][13:07]\nCan you predict in advance why Eliezer thinks \"solving scientific problems\" is significantly thornier? (Where alignment is like totally not \"a particularly hard example of a scientific problem\" except in the sense that it has science in it at all; which is maybe the real crux; but also a more difficult issue.)\n\n\n\n[Ngo][13:09]\nBased on some of your earlier comments, I'm currently predicting that you think the step where the solutions need to be legible to and judged by humans makes science much thornier than theorem-proving, where the solutions are machine-checkable.\n\n\n\n\n[Yudkowsky][13:10]\nThat's one factor. 
Should I state the other big one or would you rather try to state it first?\n\n\n\n[Ngo][13:10]\nRequiring a lot of real-world knowledge for science?\nIf it's not that, go ahead and say it.\n\n\n\n\n[Yudkowsky][13:11]\nThat's one way of stating it. The way I'd put it is that it's about making up hypotheses about the real world.\nLike, the real world is then a thing that the AI is modeling, at all.\nFactor 3: On many interpretations of doing science, you would furthermore need to think up experiments. That's planning, value-of-information, search for an experimental setup whose consequences distinguish between hypotheses (meaning you're now searching for initial setups that have particular causal consequences).\n\n\n\n[Ngo][13:12]\nTo me \"modelling the real world\" is a very continuous variable. At one end you have physics equations that are barely separable from maths problems, at the other end you have humans running around in physical bodies.\nTo me it seems plausible that we could build an agent which solves scientific problems but has very little self-awareness (in the sense of knowing that it's an AI, knowing that it's being trained, etc).\nI expect that your response to this is that modelling oneself is part of the deep problem-solving patterns which AGIs are very likely to have.\n\n\n\n\n[Yudkowsky][13:15]\nThere's a problem of inferring the causes of sensory experience in cognition-that-does-science. (Which, in fact, also appears in the way that humans do math, and is possibly inextricable from math in general; but this is an example of the sort of deep model that says \"Whoops I guess you get science from math after all\", not a thing that makes science less dangerous because it's more like just math.)\nYou can build an AI that only ever drives red cars, and which, at no point in the process of driving a red car, ever needs to drive a blue car in order to drive a red car. That doesn't mean its red-car-driving capabilities won't be extremely close to blue-car-driving capabilities if at any point the internal cognition happens to get pointed towards driving a blue car.\nThe fact that there's a deep car-driving pattern which is the same across red cars and blue cars doesn't mean that the AI has ever driven a blue car, per se, or that it has to drive blue cars to drive red cars. 
But if blue cars are fire, you sure are playing with that fire.\n\n\n\n[Ngo][13:18]\nTo me, "sensory experience" as in "the video and audio coming in from this body that I'm piloting" and "sensory experience" as in "a file containing the most recent results of the large hadron collider" are very very different.\n(I'm not saying we could train an AI scientist just from the latter – but plausibly from data that's closer to the latter than the former)\n\n\n\n\n[Yudkowsky][13:19]\nSo there's separate questions about "does an AGI inseparably need to model itself inside the world to do science" and "did we build something that would be very close to modeling itself, and could easily stumble across that by accident somewhere in the inscrutable floating-point numbers, especially if that was even slightly useful for solving the outer problems".\n\n\n\n[Ngo][13:19]\nHmm, I see\n\n\n\n\n[Yudkowsky][13:20][13:21][13:21]\nIf you're trying to build an AI that literally does science only to observations collected without the AI having had a causal impact on those observations, that's legitimately "more dangerous than math but maybe less dangerous than active science".\n\n\n\n\nYou might still stumble across an active scientist because it was a simple internal solution to something, but the outer problem would be legitimately stripped of an important structural property the same way that pure math not describing Earthly objects is stripped of important structural properties.\n\n\n\n\nAnd of course my reaction again is, "There is no pivotal act which uses only that cognitive capability."\n\n\n\n[Ngo][13:20][13:21][13:26]\nI guess that my (fairly strong) prior here is that something like self-modelling, which is very deeply built into basically every organism, is a very hard thing for an AI to stumble across by accident without significant optimisation pressure in that direction.\n\n\n\n\n\nBut I'm not sure how to argue this except by digging into your views on what the deep problem-solving patterns are. So if you're still willing to briefly try and explain those, that'd be useful to me.\n\n\n\n\n\n"Causal impact" again seems like a very continuous variable – it seems like the amount of causal impact you need to do good science is much less than the amount which is needed to, say, be a CEO.\n\n\n\n\n[Yudkowsky][13:26]\nThe amount doesn't seem like the key thing, nearly so much as what underlying facilities you need to do whatever amount of it you need.\n\n\n\n[Ngo][13:27]\nAgreed.\n\n\n\n\n[Yudkowsky][13:27]\nIf you go back to the 16th century and ask for just one mRNA vaccine, that's not much of a difference from asking for a million of them.\n\n\n\n[Ngo][13:28]\nRight, so the additional premise which I'm using here is that the ability to reason about causally impacting the world in order to achieve goals is something that you can have a little bit of.\nOr a lot of, and that the difference between these might come down to the training data used.\nWhich at this point I don't expect you to agree with.\n\n\n\n\n[Yudkowsky][13:29]\nIf you have reduced a pivotal act to "look over the data from this hadron collider you neither built nor ran yourself", that really is a structural step down from "do science" or "build a nanomachine".
But I can't see any pivotal acts like that, so is that question much of a crux?\nIf there's intermediate steps they might be described in my native language like \"reason about causal impacts across only this one preprogrammed domain which you didn't learn in a general way, in only this part of the cognitive architecture that is separable from the rest of the cognitive architecture\".\n\n\n\n[Ngo][13:31]\nPerhaps another way of phrasing this intermediate step is that the agent has a shallow understanding of how to induce causal impacts.\n\n\n\n\n[Yudkowsky][13:31]\nWhat is \"shallow\" to you?\n\n\n\n[Ngo][13:31]\nIn a similar way to how you claim that GPT-3 has a shallow understanding of language.\n\n\n\n\n[Yudkowsky][13:32]\nSo it's memorized a ton of shallow causal-impact-inducing patterns from a large dataset, and this can be verified by, for example, presenting it with an example mildly outside the dataset and watching it fail, which we think will confirm our hypothesis that it didn't learn any deep ways of solving that dataset.\n\n\n\n[Ngo][13:33]\nRoughly speaking, yes.\n\n\n\n\n[Yudkowsky][13:34]\nEg, it wouldn't surprise us at all if GPT-4 had learned to predict \"27 * 18\" but not \"what is the area of a rectangle 27 meters by 18 meters\"… is what I'd like to say, but Codex sure did demonstrate those two were kinda awfully proximal.\n\n\n\n[Ngo][13:34]\nHere's one way we could flesh this out. Imagine an agent that loses coherence quickly when it's trying to act in the world.\nSo for example, we've trained it to do scientific experiments over a period of a few hours or days\nAnd then it's very good at understanding the experimental data and extracting patterns from it\nBut upon running it for a week or a month, it loses coherence in a similar way to how GPT-3 loses coherence – e.g. it forgets what it's doing.\nMy story for why this might happen is something like: there is a specific skill of having long-term memory, and we never trained our agent to have this skill, and so it has not acquired that skill (even though it can reason in very general and powerful ways in the short term).\nThis feels similar to the argument I was making before about how an agent might lack self-awareness, if we haven't trained it specifically to have that.\n\n\n\n\n[Yudkowsky][13:39]\nThere's a set of obvious-to-me tactics for doing a pivotal act with minimal danger, which I do not think collectively make the problem safe, and one of these sets of tactics is indeed \"Put a limit on the 'attention window' or some other internal parameter, ramp it up slowly, don't ramp it any higher than you needed to solve the problem.\"\n\n\n\n[Ngo][13:41]\nYou could indeed do this manually, but my expectation is that you could also do this automatically, by training agents in environments where they don't benefit from having long attention spans.\n\n\n\n\n[Yudkowsky][13:42]\n(Any time one imagines a specific tactic of this kind, if one has the security mindset, one can also imagine all sorts of ways it might go wrong; for example, an attention window can be defeated if there's any aspect of the attended data or the internal state that ended up depending on past events in a way that leaked info about them. 
But, depending on how much superintelligence you were throwing around elsewhere, you could maybe get away with that, some of the time.)\n\n\n\n[Ngo][13:43]\nAnd that if you put agents in environments where they answer questions but don't interact much with the physical world, then there will be many different traits which are necessary for achieving goals in the real world which they will lack, because there was little advantage to the optimiser of building those traits in.\n\n\n\n\n[Yudkowsky][13:43]\nI'll observe that TransformerXL built an attention window that generalized, trained it on I think 380 tokens or something like that, and then found that it generalized to 4000 tokens or something like that.\n\n\n\n[Ngo][13:43]\nYeah, an order of magnitude of generalisation is not surprising to me.\n\n\n\n\n[Yudkowsky][13:44]\nHaving observed one order of magnitude, I would personally not be surprised by two orders of magnitude either, after seeing that.\n\n\n\n[Ngo][13:45]\nI'd be a little surprised, but I assume it would happen eventually.\n\n\n\n \n1.3. Capability dials\n \n\n[Yudkowsky][13:46]\nI have a sense that this is all circling back to the question, \"But what is it we do with the intelligence thus weakened?\" If you can save the world using a rock, I can build you a very safe rock.\n\n\n\n[Ngo][13:46]\nRight.\nSo far I've said \"alignment research\", but I haven't been very specific about it.\nI guess some context here is that I expect that the first things we do with intelligence similar to this is create great wealth, produce a bunch of useful scientific advances, etc.\nAnd that we'll be in a world where people take the prospect of AGI much more seriously\n\n\n\n\n[Yudkowsky][13:48]\nI mostly expect – albeit with some chance that reality says \"So what?\" to me and surprises me, because it is not as solidly determined as some other things – that we do not hang around very long in the \"weirdly ~human AGI\" phase before we get into the \"if you crank up this AGI it destroys the world\" phase. Less than 5 years, say, to put numbers on things.\nIt would not surprise me in the least if the world ends before self-driving cars are sold on the mass market. On some quite plausible scenarios which I think have >50% of my probability mass at the moment, research AGI companies would be able to produce prototype car-driving AIs if they spent time on that, given the near-world-ending tech level; but there will be Many Very Serious Questions about this relatively new unproven advancement in machine learning being turned loose on the roads. And their AGI tech will gain the property \"can be turned up to destroy the world\" before Earth gains the property \"you're allowed to sell self-driving cars on the mass market\" because there just won't be much time.\n\n\n\n[Ngo][13:52]\nThen I expect that another thing we do with this is produce a very large amount of data which rewards AIs for following human instructions.\n\n\n\n\n[Yudkowsky][13:52]\nOn other scenarios, of course, self-driving becomes possible by limited AI well before things start to break (further) on AGI. 
And on some scenarios, the way you got to AGI was via some breakthrough that is already scaling pretty fast, so by the time you can use the tech to get self-driving cars, that tech already ends the world if you turn up the dial, or that event follows very swiftly.\n\n\n\n[Ngo][13:53]\nWhen you talk about \"cranking up the AGI\", what do you mean?\nUsing more compute on the same data?\n\n\n\n\n[Yudkowsky][13:53]\nRunning it with larger bounds on the for loops, over more GPUs, to be concrete about it.\n\n\n\n[Ngo][13:53]\nIn a RL setting, or a supervised, or unsupervised learning setting?\nAlso: can you elaborate on the for loops?\n\n\n\n\n[Yudkowsky][13:56]\nI do not quite think that gradient descent on Stack More Layers alone – as used by OpenAI for GPT-3, say, and as opposed to Deepmind which builds more complex artifacts like Mu Zero or AlphaFold 2 – is liable to be the first path taken to AGI. I am reluctant to speculate more in print about clever ways to AGI, and I think any clever person out there will, if they are really clever and not just a fancier kind of stupid, not talk either about what they think is missing from Stack More Layers or how you would really get AGI. That said, the way that you cannot just run GPT-3 at a greater search depth, the way you can run Mu Zero at a greater search depth, is part of why I think that AGI is not likely to look exactly like GPT-3; the thing that kills us is likely to be a thing that can get more dangerous when you turn up a dial on it, not a thing that intrinsically has no dials that can make it more dangerous.\n\n\n \n1.4. Consequentialist goals vs. deontologist goals\n \n\n[Ngo][13:59]\nHmm, okay. Let's take a quick step back and think about what would be useful for the last half hour.\nI want to flag that my intuitions about pivotal acts are not very specific; I'm quite uncertain about how the geopolitics of that situation would work, as well as the timeframe between somewhere-near-human-level AGI and existential risk AGI.\nSo we could talk more about this, but I expect there'd be a lot of me saying \"well we can't rule out that X happens\", which is perhaps not the most productive mode of discourse.\nA second option is digging into your intuitions about how cognition works.\n\n\n\n\n[Yudkowsky][14:03]\nWell, obviously, in the limit of alignment not being accessible to our civilization, and my successfully building a model weaker than reality which nonetheless correctly rules out alignment being accessible to our civilization, I could spend the rest of my short remaining lifetime arguing with people whose models are weak enough to induce some area of ignorance where for all they know you could align a thing. But that is predictably how conversations go in possible worlds where the Earth is doomed; so somebody wiser on the meta-level, though also ignorant on the object-level, might prefer to ask: \"Where do you think your knowledge, rather than your ignorance, says that alignment ought to be doable and you will be surprised if it is not?\"\n\n\n\n[Ngo][14:07]\nThat's a fair point. Although it seems like a structural property of the \"pivotal act\" framing, which builds in doom by default.\n\n\n\n\n[Yudkowsky][14:08]\nWe could talk about that, if you think it's a crux. 
Though I'm also not thinking that this whole conversation gets done in a day, so maybe for publishability reasons we should try to focus more on one line of discussion?\nBut I do think that lots of people get their optimism by supposing that the world can be saved by doing less dangerous things with an AGI. So it's a big ol' crux of mine on priors.\n\n\n\n[Ngo][14:09]\nAgreed that one line of discussion is better; I'm happy to work within the pivotal act framing for current purposes.\nA third option is that I make some claims about how cognition works, and we see how much you agree with them.\n\n\n\n\n[Yudkowsky][14:12]\n(Though it's something of a restatement, a reason I'm not going into \"my intuitions about how cognition works\" is that past experience has led me to believe that conveying this info in a form that the Other Mind will actually absorb and operate, is really quite hard and takes a long discussion, relative to my current abilities to Actually Explain things; it is the sort of thing that might take doing homework exercises to grasp how one structure is appearing in many places, as opposed to just being flatly told that to no avail, and I have not figured out the homework exercises.)\nI'm cheerful about hearing your own claims about cognition and disagreeing with them.\n\n\n\n[Ngo][14:12]\nGreat\nOkay, so one claim is that something like deontology is a fairly natural way for minds to operate.\n\n\n\n\n[Yudkowsky][14:14]\n(\"If that were true,\" he thought at once, \"bureaucracies and books of regulations would be a lot more efficient than they are in real life.\")\n\n\n\n[Ngo][14:14]\nHmm, although I think this was probably not a very useful phrasing, let me think about how to rephrase it.\nOkay, so in our earlier email discussion, we talked about the concept of \"obedience\".\nTo me it seems like it is just as plausible for a mind to have a concept like \"obedience\" as its rough goal, as a concept like maximising paperclips.\nIf we imagine training an agent on a large amount of data which pointed in the rough direction of rewarding obedience, for example, then I imagine that by default obedience would be a constraint of comparable strength to, say, the human survival instinct.\n(Which is obviously not strong enough to stop humans doing a bunch of things that contradict it – but it's a pretty good starting point.)\n\n\n\n\n[Yudkowsky][14:18]\nHeh. 
You mean of comparable strength to the human instinct to explicitly maximize inclusive genetic fitness?\n\n\n\n[Ngo][14:19]\nGenetic fitness wasn't a concept that our ancestors were able to understand, so it makes sense that they weren't pointed directly towards it.\n(And nor did they understand how to achieve it.)\n\n\n\n\n[Yudkowsky][14:19]\nEven in that paradigm, except insofar as you expect gradient descent to work very differently from gene-search optimization – which, admittedly, it does – when you optimize really hard on a thing, you get contextual correlates to it, not the thing you optimized on.\nThis is of course one of the Big Fundamental Problems that I expect in alignment.\n\n\n\n[Ngo][14:20]\nRight, so the main correlate that I've seen discussed is \"do what would make the human give you a high rating, not what the human actually wants\"\nOne thing I'm curious about is the extent to which you're concerned about this specific correlate, versus correlates in general.\n\n\n\n\n[Yudkowsky][14:21]\nThat said, I also see basic structural reasons why paperclips would be much easier to train than \"obedience\", even if we could magically instill simple inner desires that perfectly reflected the simple outer algorithm we saw ourselves as running over many particular instances of a loss function.\n\n\n\n[Ngo][14:22]\nI'd be interested in hearing what those are.\n\n\n\n\n[Yudkowsky][14:22]\nwell, first of all, why is a book of regulations so much more unwieldy than a hunter-gatherer?\nif deontology is just as good as consequentialism, y'know.\n(do you want to try replying or should I just say?)\n\n\n\n[Ngo][14:23]\nGo ahead\nI should probably clarify that I agree that you can't just replace consequentialism with deontology\nThe claim is more like: when it comes to high-level concepts, it's not clear to me why high-level consequentialist goals are more natural than high-level deontological goals.\n\n\n\n\n[Yudkowsky][14:24]\nI reply that reality is complicated, so when you pump a simple goal through complicated reality you get complicated behaviors required to achieve the goal. If you think of reality as a complicated function Input->Probability(Output), then even to get a simple Output or a simple partition on Output or a high expected score in a simple function over Output, you may need very complicated Input.\nHumans don't trust each other. They imagine, \"Well, if I just give this bureaucrat a goal, perhaps they won't reason honestly about what it takes to achieve that goal! Oh no! 
Therefore I will instead, being the trustworthy and accurate person that I am, reason myself about constraints and requirements on the bureaucrat's actions, such that, if the bureaucrat obeys these regulations, I expect the outcome of their action will be what I want.\"\nBut (compared to a general intelligence that observes and models complicated reality and does its own search to pick actions) an actually-effective book of regulations (implemented by some nonhuman mind with a large enough and perfect enough memory to memorize it) would tend to involve a (physically unmanageable) vast number of rules saying \"if you observe this, do that\" to follow all the crinkles of complicated reality as it can be inferred from observation.\n\n\n\n[Ngo][14:28]\n\n(Though it's something of a restatement, a reason I'm not going into \"my intuitions about how cognition works\" is that past experience has led me to believe that conveying this info in a form that the Other Mind will actually absorb and operate, is really quite hard and takes a long discussion, relative to my current abilities to Actually Explain things; it is the sort of thing that might take doing homework exercises to grasp how one structure is appearing in many places, as opposed to just being flatly told that to no avail, and I have not figured out the homework exercises.)\n\n(As a side note: do you have a rough guess for when your work with Ajeya will be made public? If it's still a while away, I'm wondering whether it's still useful to have a rough outline of these intuitions even if it's in a form that very few people will internalise)\n\n\n\n\n[Yudkowsky][14:30]\n\n(As a side note: do you have a rough guess for when your work with Ajeya will be made public? If it's still a while away, I'm wondering whether it's still useful to have a rough outline of these intuitions even if it's in a form that very few people will internalise)\n\nPlausibly useful, but not to be attempted today, I think?\n\n\n\n[Ngo][14:30]\nAgreed.\n\n\n\n\n[Yudkowsky][14:30]\n(We are now theoretically in overtime, which is okay for me, but for you it is 11:30pm (I think?) and so it is on you to call when to halt, now or later.)\n\n\n\n[Ngo][14:32]\nYeah, it's 11.30 for me. I think probably best to halt here. I agree with all the things you just said about reality being complicated, and why consequentialism is therefore valuable. My \"deontology\" claim (which was, in its original formulation, far too general – apologies for that) was originally intended as a way of poking into your intuitions about which types of cognition are natural or unnatural, which I think is the topic we've been circling around for a while.\n\n\n\n\n[Yudkowsky][14:33]\nYup, and a place to resume next time might be why I think \"obedience\" is unnatural compared to \"paperclips\" – though that is a thing that probably requires taking that stab at what underlies surface competencies.\n\n\n\n[Ngo][14:34]\nRight. 
I do think that even a vague gesture at that would be reasonably helpful (assuming that this doesn't already exist online?)\n\n\n\n\n[Yudkowsky][14:34]\nNot yet afaik, and I don't want to point you to Ajeya's stuff even if she were ok with that, because then this in-context conversation won't make sense to others.\n\n\n\n[Ngo][14:35]\nFor my part I should think more about pivotal acts that I'd be willing to specifically defend.\nIn any case, thanks for the discussion \nLet me know if there's a particular time that suits you for a follow-up; otherwise we can sort it out later.\n\n\n\n\n[Soares][14:37]\n(y'all are doing all my jobs for me)\n\n\n\n\n[Yudkowsky][14:37]\ncould try Tuesday at this same time – though I may be in worse shape for dietary reasons, still, seems worth trying.\n\n\n\n[Soares][14:37]\n(wfm)\n\n\n\n\n[Ngo][14:39]\nTuesday not ideal, any others work?\n\n\n\n\n[Yudkowsky][14:39]\nWednesday?\n\n\n\n[Ngo][14:40]\nYes, Wednesday would be good\n\n\n\n\n[Yudkowsky][14:40]\nlet's call it tentatively for that\n\n\n\n[Soares][14:41]\nGreat! Thanks for the chats.\n\n\n\n\n[Ngo][14:41]\nThanks both!\n\n\n\n\n[Yudkowsky][14:41]\nThanks, Richard!\n\n\n \n2. Follow-ups\n \n2.1. Richard Ngo's summary\n \n\n[Tallinn][0:35]  (Sep. 6)\njust caught up here & wanted to thank nate, eliezer and (especially) richard for doing this! it's great to see eliezer's model being probed so intensively. i've learned a few new things (such as the genetic bottleneck being plausibly a big factor in human cognition). FWIW, a minor comment re deontology (as that's fresh on my mind): in my view deontology is more about coordination than optimisation: deontological agents are more trustworthy, as they're much easier to reason about (in the same way how functional/declarative code is easier to reason about than imperative code). hence my steelman of bureaucracies (as well as social norms): humans just (correctly) prefer their fellow optimisers (including non-human optimisers) to be deontological for trust/coordination reasons, and are happy to pay the resulting competence tax.\n\n\n\n\n[Ngo][3:10]  (Sep. 8)\nThanks Jaan! I agree that greater trust is a good reason to want agents which are deontological at some high level.\nI've attempted a summary of the key points so far; comments welcome: [GDocs link]\n\n\n\n\n[Ngo]  (Sep. 8 Google Doc)\n1st discussion\n(Mostly summaries not quotations)\nEliezer, summarized by Richard: "To avoid catastrophe, whoever builds AGI first will have to a) align it to some extent, and b) decide not to scale it up beyond the point where their alignment techniques fail, and c) do some pivotal act that prevents others from scaling it up to that level. But our alignment techniques will not be good enough [revised to:] our alignment techniques will be very far from adequate [revised to:] on our current trajectory, our alignment techniques will be very far from adequate to create an AI that safely performs any such pivotal act."\n\n\n\n\n[Yudkowsky][11:05]  (Sep. 8 comment)\n\nwill not be good enough\n\nAre not presently on course to be good enough, missing by not a little.  "Will not be good enough" is literally declaring for lying down and dying.\n\n\n\n[Yudkowsky][16:03]  (Sep. 9 comment)\n\nwill [be very far from adequate]\n\nSame problem as the last time I commented.  I am not making an unconditional prediction about future failure as would be implied by the word "will".
Conditional on current courses of action or their near neighboring courses, we seem to be well over an order of magnitude away from surviving, unless a miracle occurs.  It's still in the end a result of people doing what they seem to be doing, not an inevitability.\n\n\n\n[Ngo][5:10]  (Sep. 10 comment)\nAh, I see. Does adding \"on our current trajectory\" fix this?\n\n\n\n\n[Yudkowsky][10:46]  (Sep. 10 comment)\nYes.\n\n\n\n[Ngo]  (Sep. 8 Google Doc)\nRichard, summarized by Richard: \"Consider the pivotal act of 'make a breakthrough in alignment research'. It is likely that, before the point where AGIs are strongly superhuman at seeking power, they will already be strongly superhuman at understanding the world, and at performing narrower pivotal acts like alignment research which don't require as much agency (by which I roughly mean: large-scale motivations and the ability to pursue them over long timeframes).\"\nEliezer, summarized by Richard: \"There's a deep connection between solving intellectual problems and taking over the world – the former requires a powerful mind to think about domains that, when solved, render very cognitively accessible strategies that can do dangerous things. Even mathematical research is a goal-oriented task which involves identifying then pursuing instrumental subgoals – and if brains which evolved to hunt on the savannah can quickly learn to do mathematics, then it's also plausible that AIs trained to do mathematics could quickly learn a range of other skills. Since almost nobody understands the deep similarities in the cognition required for these different tasks, the distance between AIs that are able to perform fundamental scientific research, and dangerously agentic AGIs, is smaller than almost anybody expects.\"\n\n\n\n\n[Yudkowsky][11:05]  (Sep. 8 comment)\n\nThere's a deep connection between solving intellectual problems and taking over the world\n\nThere's a deep connection by default between chipping flint handaxes and taking over the world, if you happen to learn how to chip handaxes in a very general way.  \"Intellectual\" problems aren't special in this way.  And maybe you could avert the default, but that would take some work and you'd have to do it before easier default ML techniques destroyed the world.\n\n\n\n[Ngo]  (Sep. 8 Google Doc)\nRichard, summarized by Richard: \"Our lack of understanding about how intelligence works also makes it easy to assume that traits which co-occur humans will also co-occur in future AIs. But human brains are badly-optimised for tasks like scientific research, and well-optimised for seeking power over the world, for reasons including a) evolving while embodied in a harsh environment; b) the genetic bottleneck; c) social environments which rewarded power-seeking. By contrast, training neural networks on tasks like mathematical or scientific research optimises them much less for seeking power. For example, GPT-3 has knowledge and reasoning capabilities but little agency, and loses coherence when run for longer timeframes.\"\n\n\n\n\n[Tallinn][4:19]  (Sep. 8 comment)\n\n[well-optimised for] seeking power\n\nmale-female differences might be a datapoint here (annoying as it is to lean on pinker's point :))\n\n\n\n\n[Yudkowsky][11:31]  (Sep. 8 comment)\nI don't think a female Eliezer Yudkowsky doesn't try to save / optimize / takeover the world.  Men may do that for nonsmart reasons; smart men and women follow the same reasoning when they are smart enough.  Eg Anna Salamon and many others.\n\n\n\n[Ngo]  (Sep. 
8 Google Doc)\nEliezer, summarized by Richard: \"Firstly, there's a big difference between most scientific research and the sort of pivotal act that we're talking about – you need to explain how AIs with a given skill can be used to actually prevent dangerous AGIs from being built. Secondly, insofar as GPT-3 has little agency, that's because it has memorised many shallow patterns in a way which won't directly scale up to general intelligence. Intelligence instead consists of deep problem-solving patterns which link understanding and agency at a fundamental level.\"\n\n\n\n \n3. September 8 conversation\n \n3.1. The Brazilian university anecdote\n \n\n[Yudkowsky][11:00]\n(I am here.)\n\n\n\n[Ngo][11:01]\nMe too.\n\n\n\n\n[Soares][11:01]\nWelcome back!\n(I'll mostly stay out of the way again.)\n\n\n\n\n[Ngo][11:02]\nCool. Eliezer, did you read the summary – and if so, do you roughly endorse it?\nAlso, I've been thinking about the best way to approach discussing your intuitions about cognition. My guess is that starting with the obedience vs paperclips thread is likely to be less useful than starting somewhere else – e.g. the description you gave near the beginning of the last discussion, about \"searching for states that get fed into a result function and then a result-scoring function\".\n\n\n\n\n[Yudkowsky][11:06]\nmade a couple of comments about phrasings in the doc\nSo, from my perspective, there's this thing where… it's really quite hard to teach certain general points by talking at people, as opposed to more specific points. Like, they're trying to build a perpetual motion machine, and even if you can manage to argue them into believing their first design is wrong, they go looking for a new design, and the new design is complicated enough that they can no longer be convinced that they're wrong because they managed to make a more complicated error whose refutation they couldn't keep track of anymore.\nTeaching people to see an underlying structure in a lot of places is a very hard thing to teach in this way. Richard Feynman gave an example of the mental motion in his story that ends \"Look at the water!\", where people learned in classrooms about how \"a medium with an index\" is supposed to polarize light reflected from it, but they didn't realize that sunlight coming off of water would be polarized. My guess is that doing this properly requires homework exercises; and that, unfortunately from my own standpoint, it happens to be a place where I have extra math talent, the same way that eg Marcello is more talented at formally proving theorems than I happen to be; and that people without the extra math talent, have to do a lot more exercises than I did, and I don't have a good sense of which exercises to give them.\n\n\n\n[Ngo][11:13]\nI'm sympathetic to this, and can try to turn off skeptical-discussion-mode and turn on learning-mode, if you think that'll help.\n\n\n\n\n[Yudkowsky][11:14]\nThere's a general insight you can have about how arithmetic is commutative, and for some people you can show them 1 + 2 = 2 + 1 and their native insight suffices to generalize over the 1 and the 2 to any other numbers you could put in there, and they realize that strings of numbers can be rearranged and all end up equivalent. 
For somebody else, when they're a kid, you might have to show them 2 apples and 1 apple being put on the table in a different order but ending up with the same number of apples, and then you might have to show them again with adding up bills in different denominations, in case they didn't generalize from apples to money. I can actually remember being a child young enough that I tried to add 3 to 5 by counting \"5, 6, 7\" and I thought there was some clever enough way to do that to actually get 7, if you tried hard.\nBeing able to see \"consequentialism\" is like that, from my perspective.\n\n\n\n[Ngo][11:15]\nAnother possibility: can you trace the origins of this belief, and how it came out of your previous beliefs?\n\n\n\n\n[Yudkowsky][11:15]\nI don't know what homework exercises to give people to make them able to see \"consequentialism\" all over the place, instead of inventing slightly new forms of consequentialist cognition and going \"Well, now that isn't consequentialism, right?\"\nTrying to say \"searching for states that get fed into an input-result function and then a result-scoring function\" was one attempt of mine to describe the dangerous thing in a way that would maybe sound abstract enough that people would try to generalize it more.\n\n\n\n[Ngo][11:17]\nAnother possibility: can you describe the closest thing to real consequentialism in humans, and how it came about in us?\n\n\n\n\n[Yudkowsky][11:18][11:21]\nOk, so, part of the problem is that… before you do enough homework exercises for whatever your level of talent is (and even I, at one point, had done little enough homework that I thought there might be a clever way to add 3 and 5 in order to get to 7), you tend to think that only the very crisp formal thing that's been presented to you, is the \"real\" thing.\nWhy would your engine have to obey the laws of thermodynamics? You're not building one of those Carnot engines you saw in the physics textbook!\nHumans contain fragments of consequentialism, or bits and pieces whose interactions add up to partially imperfectly shadow consequentialism, and the critical thing is being able to see that the reason why humans' outputs 'work', in a sense, is because these structures are what is doing the work, and the work gets done because of how they shadow consequentialism and only insofar as they shadow consequentialism.\n\n\n\n\nPut a human in one environment, it gets food. Put a human in a different environment, it gets food again. Wow, different initial conditions, same output! There must be things inside the human that, whatever else they do, are also along the way somehow effectively searching for motor signals such that food is the end result!\n\n\n\n[Ngo][11:20]\nTo me it feels like you're trying to nudge me (and by extension whoever reads this transcript) out of a specific failure mode. If I had to guess, something like: \"I understand what Eliezer is talking about so now I'm justified in disagreeing with it\", or perhaps \"Eliezer's explanation didn't make sense to me and so I'm justified in thinking that his concepts don't make sense\". Is that right?\n\n\n\n\n[Yudkowsky][11:22]\nMore like… from my perspective, even after I talk people out of one specific perpetual motion machine being possible, they go off and try to invent a different, more complicated perpetual motion machine.\nAnd I am not sure what to do about that. 
It has been going on for a very long time from my perspective.\nIn the end, a lot of what people got out of all that writing I did, was not the deep object-level principles I was trying to point to – they did not really get Bayesianism as thermodynamics, say, they did not become able to see Bayesian structures any time somebody sees a thing and changes their belief. What they got instead was something much more meta and general, a vague spirit of how to reason and argue, because that was what they'd spent a lot of time being exposed to over and over and over again in lots of blog posts.\nMaybe there's no way to make somebody understand why corrigibility is \"unnatural\" except to repeatedly walk them through the task of trying to invent an agent structure that lets you press the shutdown button (without it trying to force you to press the shutdown button), and showing them how each of their attempts fails; and then also walking them through why Stuart Russell's attempt at moral uncertainty produces the problem of fully updated (non-)deference; and hope they can start to see the informal general pattern of why corrigibility is in general contrary to the structure of things that are good at optimization.\nExcept that to do the exercises at all, you need them to work within an expected utility framework. And then they just go, \"Oh, well, I'll just build an agent that's good at optimizing things but doesn't use these explicit expected utilities that are the source of the problem!\"\nAnd then if I want them to believe the same things I do, for the same reasons I do, I would have to teach them why certain structures of cognition are the parts of the agent that are good at stuff and do the work, rather than them being this particular formal thing that they learned for manipulating meaningless numbers as opposed to real-world apples.\nAnd I have tried to write that page once or twice (eg \"coherent decisions imply consistent utilities\") but it has not sufficed to teach them, because they did not even do as many homework problems as I did, let alone the greater number they'd have to do because this is in fact a place where I have a particular talent.\nI don't know how to solve this problem, which is why I'm falling back on talking about it at the meta-level.\n\n\n\n[Ngo][11:30]\nI'm reminded of a LW post called \"Write a thousand roads to Rome\", which iirc argues in favour of trying to explain the same thing from as many angles as possible in the hope that one of them will stick.\n\n\n\n\n[Soares][11:31]\n(Suggestion, not-necessarily-good: having named this problem on the meta-level, attempt to have the object-level debate, while flagging instances of this as it comes up.)\n\n\n\n\n[Ngo][11:31]\nI endorse Nate's suggestion.\nAnd will try to keep the difficulty of the meta-level problem in mind and respond accordingly.\n\n\n\n\n[Yudkowsky][11:33]\nThat (Nate's suggestion) is probably the correct thing to do. I name it out loud because sometimes being told about the meta-problem actually does help on the object problem. It seems to help me a lot and others somewhat less, but it does help others at all, for many others.\n\n\n \n3.2. Brain functions and outcome pumps\n \n\n[Yudkowsky][11:34]\nSo, do you have a particular question you would ask about input-seeking cognitions? I did try to say why I mentioned those at all (it's a different road to Rome on \"consequentialism\").\n\n\n\n[Ngo][11:36]\nLet's see. 
So the visual cortex is an example of quite impressive cognition in humans and many other animals. But I'd call this \"pattern-recognition\" rather than \"searching for high-scoring results\".\n\n\n\n\n[Yudkowsky][11:37]\nYup! And it is no coincidence that there are no whole animals formed entirely out of nothing but a visual cortex!\n\n\n\n[Ngo][11:37]\nOkay, cool. So you'd agree that the visual cortex is doing something that's qualitatively quite different from the thing that animals overall are doing.\nThen another question is: can you characterise searching for high-scoring results in non-human animals? Do they do it? Or are you mainly talking about humans and AGIs?\n\n\n\n\n[Yudkowsky][11:39]\nAlso by the time you get to like the temporal lobes or something, there is probably some significant amount of \"what could I be seeing that would produce this visual field?\" that is searching through hypothesis-space for hypotheses with high plausibility scores, and for sure at the human level, humans will start to think, \"Well, could I be seeing this? No, that theory has the following problem. How could I repair that theory?\" But it is plausible that there is no low-level analogue of this in a monkey's temporal cortex; and even more plausible that the parts of the visual cortex, if any, which do anything analogous to this, are doing it in a relatively local and definitely very domain-specific way.\nOh, that's the cerebellum and motor cortex and so on, if we're talking about a cat or whatever. They have to find motor plans that result in their catching the mouse.\nJust because the visual cortex isn't (obviously) running a search doesn't mean the rest of the animal isn't running any searches.\n(On the meta-level, I notice myself hiccuping \"But how could you not see that when looking at a cat?\" and wondering what exercises would be required to teach that.)\n\n\n\n[Ngo][11:41]\nWell, I see something when I look at a cat, but I don't know how well it corresponds to the concepts you're using. So just taking it slowly for now.\nI have the intuition, by the way, that the motor cortex is in some sense doing a similar thing to the visual cortex – just in reverse. So instead of taking low-level inputs and producing high-level outputs, it's taking high-level inputs and producing low-level outputs. Would you agree with that?\n\n\n\n\n[Yudkowsky][11:43]\nIt doesn't directly parse in my ontology because (a) I don't know what you mean by 'high-level' and (b) whole Cartesian agents can be viewed as functions, that doesn't mean all agents can be viewed as non-searching pattern-recognizers.\nThat said, all parts of the cerebral cortex have surprisingly similar morphology, so it wouldn't be at all surprising if the motor cortex is doing something similar to visual cortex. (The cerebellum, on the other hand…)\n\n\n\n[Ngo][11:44]\nThe signal from the visual cortex saying \"that is a cat\", and the signal to the motor cortex saying \"grab that cup\", are things I'd characterise as high-level.\n\n\n\n\n[Yudkowsky][11:45]\nStill less of a native distinction in my ontology, but there's an informal thing it can sort of wave at, and I can hopefully take that as understood and run with it.\n\n\n\n[Ngo][11:45]\nThe firing of cells in the retina, and firing of motor neurons, are the low-level parts.\nCool. 
So to a first approximation, we can think about the part in between the cat recognising a mouse, and the cat's motor cortex producing the specific neural signals required to catch the mouse, as the part where the consequentialism happens?\n\n\n\n\n[Yudkowsky][11:49]\nThe part between the cat's eyes seeing the mouse, and the part where the cat's limbs move to catch the mouse, is the whole cat-agent. The whole cat agent sure is a baby consequentialist / searches for mouse-catching motor patterns / gets similarly high-scoring end results even as you vary the environment.\nThe visual cortex is a particular part of this system-viewed-as-a-feedforward-function that is, plausibly, by no means surely, either not very searchy, or does only small local visual-domain-specific searches not aimed per se at catching mice; it has the epistemic nature rather than the planning nature.\nThen from one perspective you could reason that \"well, most of the consequentialism is in the remaining cat after visual cortex has sent signals onward\". And this is in general a dangerous mode of reasoning that is liable to fail in, say, inspecting every particular neuron for consequentialism and not finding it; but in this particular case, there are significantly more consequentialist parts of the cat than the visual cortex, so I am okay running with it.\n\n\n\n[Ngo][11:50]\nAh, the more specific thing I meant to say is: most of the consequentialism is strictly between the visual cortex and the motor cortex. Agree/disagree?\n\n\n\n\n[Yudkowsky][11:51]\nDisagree, I'm rusty on my neuroanatomy but I think the motor cortex may send signals on to the cerebellum rather than the other way around.\n(I may also disagree with the actual underlying notion you're trying to hint at, so possibly not just a \"well include the cerebellum then\" issue, but I think I should let you respond first.)\n\n\n\n[Ngo][11:53]\nI don't know enough neuroanatomy to chase that up, so I was going to try a different tack.\nBut actually, maybe it's easier for me to say \"let's include the cerebellum\" and see where you think the disagreement ends up.\n\n\n\n\n[Yudkowsky][11:56]\nSo since cats are not (obviously) (that I have read about) cross-domain consequentialists with imaginations, their consequentialism is in bits and pieces of consequentialism embedded in them all over by the more purely pseudo-consequentialist genetic optimization loop that built them.\nA cat who fails to catch a mouse may then get little bits and pieces of catbrain adjusted all over.\nAnd then those adjusted bits and pieces get a pattern lookup later.\nWhy do these pattern-lookups with no obvious immediate search element, all happen to point towards the same direction of catching the mouse? Because of the past causal history about how what gets looked up, which was tweaked to catch the mouse.\nSo it is legit harder to point out \"the consequentialist parts of the cat\" by looking for which sections of neurology are doing searches right there. That said, to the extent that the visual cortex does not get tweaked on failure to catch a mouse, it's not part of that consequentialist loop either.\nAnd yes, the same applies to humans, but humans also do more explicitly searchy things and this is part of the story for why humans have spaceships and cats do not.\n\n\n\n[Ngo][12:00]\nOkay, this is interesting. 
So in biological agents we've got these three levels of consequentialism: evolution, reinforcement learning, and planning.\n\n\n\n\n[Yudkowsky][12:01]\nIn biological agents we've got evolution + local evolved system-rules that in the past promoted genetic fitness. Two kinds of local rules like this are \"operant-conditioning updates from success or failure\" and \"search through visualized plans\". I wouldn't characterize these two kinds of rules as \"levels\".\n\n\n\n[Ngo][12:02]\nOkay, I see. And when you talk about searching through visualised plans (the type of thing that humans do) can you say more about what it means for that to be a \"search\"?\nFor example, if I imagine writing a poem line-by-line, I may only be planning a few words ahead. But somehow the whole poem, which might be quite long, ends up a highly-optimised product. Is that a central example of planning?\n\n\n\n\n[Yudkowsky][12:04][12:07]\nPlanning is one way to succeed at search. I think for purposes of understanding alignment difficulty, you want to be thinking on the level of abstraction where you see that in some sense it is the search itself that is dangerous when it's a strong enough search, rather than the danger seeming to come from details of the planning process.\nOne of my early experiences in successfully generalizing my notion of intelligence, what I'd later verbalize as \"computationally efficient finding of actions that produce outcomes high in a preference ordering\", was in writing an (unpublished) story about time-travel in which the universe was globally consistent.\nThe requirement of global consistency, the way in which all events between Paradox start and Paradox finish had to map the Paradox's initial conditions onto the endpoint that would go back and produce those exact initial conditions, ended up imposing strong complicated constraints on reality that the Paradox in effect had to navigate using its initial conditions. The time-traveler needed to end up going through certain particular experiences that would produce the state of mind in which he'd take the actions that would end up prodding his future self elsewhere into having those experiences.\n\n\n\n\nThe Paradox ended up killing the people who built the time machine, for example, because they would not otherwise have allowed that person to go back in time, or kept the temporal loop open that long for any other reason if they were still alive.\nJust having two examples of strongly consequentialist general optimization in front of me – human intelligence, and evolutionary biology – hadn't been enough for me to properly generalize over a notion of optimization. Having three examples of homework problems I'd worked – human intelligence, evolutionary biology, and the fictional Paradox – caused it to finally click for me.\n\n\n\n[Ngo][12:07]\nHmm. So to me, one of the central features of search is that you consider many possibilities. But in this poem example, I may only have explicitly considered a couple of possibilities, because I was only looking ahead a few words at a time. This seems related to the distinction Abram drew a while back between selection and control (https://www.alignmentforum.org/posts/ZDZmopKquzHYPRNxq/selection-vs-control). Do you distinguish between them in the same way as he does? Or does \"control\" of a system (e.g. 
a football player dribbling a ball down the field) count as search too in your ontology?\n\n\n\n\n[Yudkowsky][12:10][12:11]\nI would later try to tell people to "imagine a paperclip maximizer as not being a mind at all, imagine it as a kind of malfunctioning time machine that spits out outputs which will in fact result in larger numbers of paperclips coming to exist later". I don't think it clicked because people hadn't done the same homework problems I had, and didn't have the same "Aha!" of realizing how part of the notion and danger of intelligence could be seen in such purely material terms.\n\n\n\n\nBut the convergent instrumental strategies, the anticorrigibility, these things are contained in the true fact about the universe that certain outputs of the time machine will in fact result in there being lots more paperclips later. What produces the danger is not the details of the search process, it's the search being strong and effective at all. The danger is in the territory itself and not just in some weird map of it; that building nanomachines that kill the programmers will produce more paperclips is a fact about reality, not a fact about paperclip maximizers!\n\n\n\n[Ngo][12:11]\nRight, I remember a very similar idea in your writing about Outcome Pumps (https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes).\n\n\n\n\n[Yudkowsky][12:12]\nYup! Alas, the story was written in 2002-2003 when I was a worse writer and the real story that inspired the Outcome Pump never did get published.\n\n\n\n[Ngo][12:14]\nOkay, so I guess the natural next question is: what is it that makes you think that a strong, effective search isn't likely to be limited or constrained in some way?\nWhat is it about search processes (like human brains) that makes it hard to train them with blind spots, or deontological overrides, or things like that?\nHmmm, although it feels like this is a question I can probably predict your answer to. (Or maybe not, I wasn't expecting the time travel.)\n\n\n\n\n[Yudkowsky][12:15]\nIn one sense, they are! A paperclip-maximizing superintelligence is nowhere near as powerful as a paperclip-maximizing time machine. The time machine can do the equivalent of buying winning lottery tickets from lottery machines that have been thermodynamically randomized; a superintelligence can't, at least not directly without rigging the lottery or whatever.\nBut a paperclip-maximizing strong general superintelligence is epistemically and instrumentally efficient, relative to you, or to me. Any time we see it can get at least X paperclips by doing Y, we should expect that it gets X or more paperclips by doing Y or something that leads to even more paperclips than that, because it's not going to miss the strategy we see.\nSo in that sense, searching our own brains for how a time machine would get paperclips, asking ourselves how many paperclips are in principle possible and how they could be obtained, is a way of getting our own brains to consider lower bounds on the problem without the implicit stupidity assertions that our brains unwittingly use to constrain story characters. Part of the point of telling people to think about time machines instead of superintelligences was to get past the ways they imagine superintelligences being stupid.
Of course that didn't work either, but it was worth a try.\nI don't think that's quite what you were asking about, but I want to give you a chance to see if you want to rephrase anything before I try to answer your me-reformulated questions.\n\n\n\n[Ngo][12:20]\nYeah, I think what I wanted to ask is more like: why should we expect that, out of the space of possible minds produced by optimisation algorithms like gradient descent, strong general superintelligences are more common than other types of agents which score highly on our loss functions?\n\n\n\n\n[Yudkowsky][12:20][12:23][12:24]\nIt depends on how hard you optimize! And whether gradient descent on a particular system can even successfully optimize that hard! Many current AIs are trained by gradient descent and yet not superintelligences at all.\n\n\n\n\nBut the answer is that some problems are difficult in that they require solving lots of subproblems, and an easy way to solve all those subproblems is to use patterns which collectively have some coherence and overlap, and the coherence within them generalizes across all the subproblems. Lots of search orderings will stumble across something like that before they stumble across separate solutions for lots of different problems.\n\n\n\n\nI suspect that you cannot get this out of large amounts of gradient descent on large layered transformers, and therefore I suspect that GPT-N does not approach superintelligence before the world is ended by systems that look differently, but I could be wrong about that.\n\n\n\n[Ngo][12:22][12:23]\nSuppose that we optimise hard enough to produce an epistemic subsystem that can make plans much better than any human's.\n\n\n\n\n\nMy guess is that you'd say that this is possible, but that we're much more likely to first produce a consequentialist agent which does this (rather than a purely epistemic agent which does this).\n\n\n\n\n[Yudkowsky][12:24]\nI am confused by what you think it means to have an \"epistemic subsystem\" that \"makes plans much better than any human's\". If it searches paths through time and selects high-scoring ones for output, what makes it \"epistemic\"?\n\n\n\n[Ngo][12:25]\nSuppose, for instance, that it doesn't actually carry out the plans, it just writes them down for humans to look at.\n\n\n\n\n[Yudkowsky][12:25]\nIf it can in fact do the thing that a paperclipping time machine does, what makes it any safer than a paperclipping time machine because we called it \"epistemic\" or by some other such name?\nBy what criterion is it selecting the plans that humans look at?\nWhy did it make a difference that its output was fed through the causal systems called humans on the way to the causal systems called protein synthesizers or the Internet or whatever? If we build a superintelligence to design nanomachines, it makes no obvious difference to its safety whether it sends DNA strings directly to a protein synthesis lab, or humans read the output and retype it manually into an email. Presumably you also don't think that's where the safety difference comes from. 
So where does the safety difference come from?\n(note: lunchtime for me in 2 minutes, propose to reconvene in 30m after that)\n\n\n\n[Ngo][12:28]\n(break for half an hour sounds good)\nIf we consider the visual cortex at a given point in time, how does it decide which objects to recognise?\nInsofar as the visual cortex can be non-consequentialist about which objects it recognises, why couldn't a planning system be non-consequentialist about which plans it outputs?\n\n\n\n\n[Yudkowsky][12:32]\nThis does feel to me like another \"look at the water\" moment, so what do you predict I'll say about that?\n\n\n\n[Ngo][12:34]\nI predict that you say something like: in order to produce an agent that can create very good plans, we need to apply a lot of optimisation power to that agent. And if the channel through which we're applying that optimisation power is \"giving feedback on its plans\", then we don't have a mechanism to ensure that the agent actually learns to optimise for creating really good plans, as opposed to creating plans that receive really good feedback.\n\n\n\n\n[Soares][12:35]\nSeems like a fine cliffhanger?\n\n\n\n\n[Ngo][12:35]\nYepp.\n\n\n\n\n[Soares][12:35]\nGreat. Let's plan to reconvene in 30min.\n\n\n\n \n3.3. Hypothetical-planning systems, nanosystems, and evolving generality\n \n\n[Yudkowsky][13:03][13:11]\nSo the answer you expected from me, translated into my terms, would be, \"If you select for the consequence of the humans hitting 'approve' on the plan, you're still navigating the space of inputs for paths through time to probable outcomes (namely the humans hitting 'approve'), so you're still doing consequentialism.\"\nBut suppose you manage to avoid that. Suppose you get exactly what you ask for. Then the system is still outputting plans such that, when humans follow them, they take paths through time and end up with outcomes that score high in some scoring function.\nMy answer is, \"What the heck would it mean for a planning system to be non-consequentialist? You're asking for nonwet water! What's consequentialist isn't the system that does the work, it's the work you're trying to do! 
You could imagine it being done by a cognition-free material system like a time machine and it would still be consequentialist because the output is a plan, a path through time!\"\nAnd this indeed is a case where I feel a helpless sense of not knowing how I can rephrase things, which exercises you have to get somebody to do, what fictional experience you have to walk somebody through, before they start to look at the water and see a material with an index, before they start to look at the phrase \"why couldn't a planning system be non-consequentialist about which plans it outputs\" and go \"um\".\n\n\n\n\nMy imaginary listener now replies, \"Ah, but what if we have plans that don't end up with outcomes that score high in some function?\" and I reply \"Then you lie on the ground randomly twitching because any outcome you end up with which is not that is one that you wanted more than that meaning you preferred it more than the outcome of random motor outputs which is optimization toward higher in the preference function which is taking a path through time that leads to particular destinations more than it leads to random noise.\"\n\n\n\n[Ngo][13:09][13:11]\nYeah, this does seem like a good example of the thing you were trying to explain at the beginning\n\n\n\n\n\nIt still feels like there's some sort of levels distinction going on here though, let me try to tease out that intuition.\nOkay, so suppose I have a planning system that, given a situation and a goal, outputs a plan that leads from that situation to that goal.\nAnd then suppose that we give it, as input, a situation that we're not actually in, and it outputs a corresponding plan.\nIt seems to me that there's a difference between the sense in which that planning system is consequentialist by virtue of making consequentialist plans (as in: if that plan were used in the situation described in its inputs, it would lead to some goal being achieved) versus another hypothetical agent that is just directly trying to achieve goals in the situation it's actually in.\n\n\n\n\n[Yudkowsky][13:18]\nSo I'd preface by saying that, if you could build such a system, which is indeed a coherent thing (it seems to me) to describe for the purpose of building it, then there would possibly be a safety difference on the margins, it would be noticeably less dangerous though still dangerous. It would need a special internal structural property that you might not get by gradient descent on a loss function with that structure, just like natural selection on inclusive genetic fitness doesn't get you explicit fitness optimizers; you could optimize for planning in hypothetical situations, and get something that didn't explicitly care only and strictly about hypothetical situations. And even if you did get that, the outputs that would kill or brain-corrupt the operators in hypothetical situations might also be fatal to the operators in actual situations. But that is a coherent thing to describe, and the fact that it was not optimizing our own universe, might make it safer.\nWith that said, I would worry that somebody would think there was some bone-deep difference of agentiness, of something they were empathizing with like personhood, of imagining goals and drives being absent or present in one case or the other, when they imagine a planner that just solves \"hypothetical\" problems. 
If you take that planner and feed it the actual world as its hypothetical, tada, it is now that big old dangerous consequentialist you were imagining before, without it having acquired some difference of psychological agency or 'caring' or whatever.\nSo I think there is an important homework exercise to do here, which is something like, \"Imagine that safe-seeming system which only considers hypothetical problems. Now see that if you take that system, don't make any other internal changes, and feed it actual problems, it's very dangerous. Now meditate on this until you can see how the hypothetical-considering planner was extremely close in the design space to the more dangerous version, had all the dangerous latent properties, and would probably have a bunch of actual dangers too.\"\n\"See, you thought the source of the danger was this internal property of caring about actual reality, but it wasn't that, it was the structure of planning!\"\n\n\n\n[Ngo][13:22]\nI think we're getting closer to the same page now.\nLet's consider this hypothetical planner for a bit. Suppose that it was trained in a way that minimised the, let's say, adversarial component of its plans.\nFor example, let's say that the plans it outputs for any situation are heavily regularised so only the broad details get through.\nHmm, I'm having a bit of trouble describing this, but basically I have an intuition that in this scenario there's a component of its plan which is cooperative with whoever executes the plan, and a component that's adversarial.\nAnd I agree that there's no fundamental difference in type between these two things.\n\n\n\n\n[Yudkowsky][13:27]\n\"What if this potion we're brewing has a Good Part and a Bad Part, and we could just keep the Good Parts…\"\n\n\n\n[Ngo][13:27]\nNor do I think they're separable. But in some cases, you might expect one to be much larger than the other.\n\n\n\n\n[Soares][13:29]\n(I observe that my model of some other listeners, at this point, protest \"there is yet a difference between the hypothetical-planner applied to actual problems, and the Big Scary Consequentialist, which is that the hypothetical planner is emitting descriptions of plans that would work if executed, whereas the big scary consequentialist is executing those plans directly.\")\n(Not sure that's a useful point to discuss, or if it helps Richard articulate, but it's at least a place I expect some reader's minds to go if/when this is published.)\n\n\n\n\n[Yudkowsky][13:30]\n(That is in fact a difference! 
The insight is in realizing that the hypothetical planner is only one line of outer shell command away from being a Big Scary Thing and is therefore also liable to be Big and Scary in many ways.)\n\n\n\n[Ngo][13:31]\nTo me it seems that Eliezer's position is something like: \"actually, in almost no training regimes do we get agents that decide which plans to output by spending almost all of their time thinking about the object-level problem, and very little of their time thinking about how to manipulate the humans carrying out the plan\".\n\n\n\n\n[Yudkowsky][13:32]\nMy position is that the AI does not neatly separate its internals into a Part You Think Of As Good and a Part You Think Of As Bad, because that distinction is sharp in your map but not sharp in the territory or the AI's map.\nFrom the perspective of a paperclip-maximizing-action-outputting-time-machine, its actions are not \"object-level making paperclips\" or \"manipulating the humans next to the time machine to deceive them about what the machine does\", they're just physical outputs that go through time and end up with paperclips.\n\n\n\n[Ngo][13:34]\n@Nate, yeah, that's a nice way of phrasing one point I was trying to make. And I do agree with Eliezer that these things can be very similar. But I'm claiming that in some cases these things can also be quite different – for instance, when we're training agents that only get to output a short high-level description of the plan.\n\n\n\n\n[Yudkowsky][13:35]\nThe danger is in how hard the agent has to work to come up with the plan. I can, for instance, build an agent that very safely outputs a high-level plan for saving the world:\necho \"Hey Richard, go save the world!\"\nSo I do have to ask what kind of \"high-level\" planning output, that saves the world, you are envisioning, and why it was hard to cognitively come up with such that we didn't just make that high-level plan right now, if humans could follow it. Then I'll look at the part where the plan was hard to come up with, and say how the agent had to understand lots of complicated things in reality and accurately navigate paths through time for those complicated things, in order to even invent the high-level plan, and hence it was very dangerous if it wasn't navigating exactly where you hoped. Or, alternatively, I'll say, \"That plan couldn't save the world: you're not postulating enough superintelligence to be dangerous, and you're also not using enough superintelligence to flip the tables on the currently extremely doomed world.\"\n\n\n\n[Ngo][13:39]\nAt this point I'm not envisaging a particular planning output that saves the world, I'm just trying to get more clarity on the issue of consequentialism.\n\n\n\n\n[Yudkowsky][13:40]\nLook at the water; it's not the way you're doing the work that's dangerous, it's the work you're trying to do. What work are you trying to do, never mind how it gets done?\n\n\n\n[Ngo][13:41]\nI think I agree with you that, in the limit of advanced capabilities, we can't say much about how the work is being done, we have to primarily reason from the work that we're trying to do.\nBut here I'm only talking about systems that are intelligent enough to come up with plans and do research that are beyond the capability of humanity.\nAnd for me the question is: for those systems, can we tilt the way they do the work so they spend 99% of their time trying to solve the object-level problem, and 1% of their time trying to manipulate the humans who are going to carry out the plan? 
(Where these are not fundamental categories for the AI, they're just a rough categorisation that emerges after we've trained it – the same way that the categories of \"physically moving around\" and \"thinking about things\" aren't fundamentally different categories of action for humans, but the way we've evolved means there's a significant internal split between them.)\n\n\n\n\n[Soares][13:43]\n(I suspect Eliezer is not trying to make a claim of the form \"in the limit of advanced capabilities, we are relegated to reasoning about what work gets done, not about how it was done\". I suspect some miscommunication. It might be a reasonable time for Richard to attempt to paraphrase Eliezer's argument?)\n(Though it also seems to me like Eliezer responding to the 99%/1% point may help shed light.)\n\n\n\n\n[Yudkowsky][13:46]\nWell, for one thing, I'd note that a system which is designing nanosystems, and spending 1% of its time thinking about how to kill the operators, is lethal. It has to be such a small fraction of thinking that it, like, never completes the whole thought about \"well, if I did X, that would kill the operators!\"\n\n\n\n[Ngo][13:46]\nThanks for that, Nate. I'll try to paraphrase Eliezer's argument now.\nEliezer's position (partly in my own terminology): we're going to build AIs that can perform very difficult tasks using cognition which we can roughly describe as \"searching over many options to find one that meets our criteria\". An AI that can solve these difficult tasks will need to be able to search in a very general and flexible way, and so it will be very difficult to constrain that search into a particular region.\nHmm, that felt like a very generic summary, let me try and think about the more specific claims he's making.\n\n\n\n\n[Yudkowsky][13:54]\n\nAn AI that can solve these difficult tasks will need to be able to\n\nVery very little is universally necessary over the design space. The first AGI that our tech becomes able to build is liable to work in certain easier and simpler ways.\n\n\n\n[Ngo][13:55]\nPoint taken; thanks for catching this misphrasing (this and previous times).\n\n\n\n\n[Yudkowsky][13:56]\nCan you, in principle, build a red-car-driver that is totally incapable of driving blue cars? In principle, sure! But the first red-car-driver that gradient descent stumbles over is liable to be a blue-car-driver too.\n\n\n\n[Ngo][13:57]\nEliezer, I'm wondering how much of our disagreement is about how high the human level is here.\nOr, to put it another way: we can build systems that outperform humans at quite a few tasks by now, without having search abilities that are general enough to even try to take over the world.\n\n\n\n\n[Yudkowsky][13:58]\nIndubitably and indeed, this is so.\n\n\n\n[Ngo][13:59]\nPutting aside for a moment the question of which tasks are pivotal enough to save the world, which parts of your model draw the line between human-level chess players and human-level galaxy-colonisers?\nAnd say that we'll be able to align ones that outperform us on these tasks before taking over the world, but not on these other tasks?\n\n\n\n\n[Yudkowsky][13:59][14:01]\nThat doesn't have a very simple answer, but one aspect there is domain generality which in turn is achieved through novel domain learning.\n\n\n\n\nHumans, you will note, were not aggressively optimized by natural selection to be able to breathe underwater or fly into space. 
In terms of obvious outer criteria, there is not much outer sign that natural selection produced these creatures much more general than chimpanzees, by training on a much wider range of environments and loss functions.\n\n\n\n[Soares][14:00]\n(Before we drift too far from it: thanks for the summary! It seemed good to me, and I updated towards the miscommunication I feared not-having-happened.)\n\n\n\n\n[Ngo][14:03]\n\n(Before we drift too far from it: thanks for the summary! It seemed good to me, and I updated towards the miscommunication I feared not-having-happened.)\n\n(Good to know, thanks for keeping an eye out. To be clear, I didn't ever interpret Eliezer as making a claim explicitly about the limit of advanced capabilities; instead it just seemed to me that he was thinking about AIs significantly more advanced than the ones I've been thinking of. I think I phrased my point poorly.)\n\n\n\n\n[Yudkowsky][14:05][14:10]\nThere are complicated aspects of this story where natural selection may metaphorically be said to have \"had no idea of what it was doing\", eg, after early rises in intelligence possibly produced by sexual selection on neatly chipped flint handaxes or whatever, all the cumulative brain-optimization on chimpanzees reached a point where there was suddenly a sharp selection gradient on relative intelligence at Machiavellian planning against other humans (even more so than in the chimp domain) as a subtask of inclusive genetic fitness, and so continuing to optimize on \"inclusive genetic fitness\" in the same old savannah, turned out to happen to be optimizing hard on the subtask and internal capability of \"outwit other humans\", which optimized hard on \"model other humans\", which was a capability that could be reused for modeling the chimp-that-is-this-chimp, which turned the system on itself and made it reflective, which contributed greatly to its intelligence being generalized, even though it was just grinding the same loss function on the same savannah; the system being optimized happened to go there in the course of being optimized even harder for the same thing.\nSo one can imagine asking the question: Is there a superintelligent AGI that can quickly build nanotech, which has a kind of passive safety in some if not all respects, in virtue of it solving problems like \"build a nanotech system which does X\" the way that a beaver solves building dams, in virtue of having a bunch of specialized learning abilities without it ever having a cross-domain general learning ability?\nAnd in this regard one does note that there are many, many, many things that humans do which no other animal does, which you might think would contribute a lot to that animal's fitness if there were animalistic ways to do it. They don't make iron claws for themselves. They never did evolve a tendency to search for iron ore, and burn wood into charcoal that could be used in hardened-clay furnaces.\nNo animal plays chess, but AIs do, so we can obviously make AIs to do things that animals don't do. On the other hand, the environment didn't exactly present any particular species with a challenge of chess-playing either.\n\n\n\n\nEven so, though, even if some animal had evolved to play chess, I fully expect that current AI systems would be able to squish it at chess, because the AI systems are on chips that run faster than neurons and doing crisp calculations and there are things you just can't do with noisy slow neurons. 
So that again is not a generally reliable argument about what AIs can do.\n\n\n\n[Ngo][14:09][14:11]\nYes, although I note that challenges which are trivial from a human-engineering perspective can be very challenging from an evolutionary perspective (e.g. spinning wheels).\n\n\n\n\n\nAnd so the evolution of animals-with-a-little-bit-of-help-from-humans might end up in very different places from the evolution of animals-just-by-themselves. And analogously, the ability of humans to fill in the gaps to help less general AIs achieve more might be quite significant.\n\n\n\n\n[Yudkowsky][14:11]\nSo we can again ask: Is there a way to make an AI system that is only good at designing nanosystems, which can achieve some complicated but hopefully-specifiable real-world outcomes, without that AI also being superhuman at understanding and manipulating humans?\nAnd I roughly answer, \"Perhaps, but not by default, there's a bunch of subproblems, I don't actually know how to do it right now, it's not the easiest way to get an AGI that can build nanotech (and kill you), you've got to make the red-car-driver specifically not be able to drive blue cars.\" Can I explain how I know that? I'm really not sure I can, in real life where I explain X0 and then the listener doesn't generalize X0 to X and respecialize it to X1.\nIt's like asking me how I could possibly know in 2008, before anybody had observed AlphaFold 2, that superintelligences would be able to crack the protein folding problem on the way to nanotech, which some people did question back in 2008.\nThough that was admittedly more of a slam-dunk than this was, and I could not have told you that AlphaFold 2 would become possible at a prehuman level of general intelligence in 2021 specifically, or that it would be synced in time to a couple of years after GPT-2's level of generality at text.\n\n\n\n[Ngo][14:18]\nWhat are the most relevant axes of difference between solving protein folding and designing nanotech that, say, self-assembles into a computer?\n\n\n\n\n[Yudkowsky][14:20]\nDefinitely, \"turns out it's easier than you thought to use gradient descent's memorization of zillions of shallow patterns that overlap and recombine into larger cognitive structures, to add up to a consequentialist nanoengineer that only does nanosystems and never does sufficiently general learning to apprehend the big picture containing humans, while still understanding the goal for that pivotal act you wanted to do\" is among the more plausible advance-specified miracles we could get.\nBut it is not what my model says actually happens, and I am not a believer that when your model says you are going to die, you get to start believing in particular miracles. You need to hold your mind open for any miracle and a miracle you didn't expect or think of in advance, because at this point our last hope is that in fact the future is often quite surprising – though, alas, negative surprises are a tad more frequent than positive ones, when you are trying desperately to navigate using a bad map.\n\n\n\n[Ngo][14:22]\nPerhaps one metric we could use here is something like: how much extra reward does the consequentialist nanoengineer get from starting to model humans, versus from becoming better at nanoengineering?\n\n\n\n\n[Yudkowsky][14:23]\nBut that's not where humans came from. We didn't get to nuclear power by getting a bunch of fitness from nuclear power plants. 
We got to nuclear power because if you get a bunch of fitness from chipping flint handaxes and Machiavellian scheming, as found by relatively simple and local hill-climbing, that entrains the same genes that build nuclear power plants.\n\n\n\n[Ngo][14:24]\nOnly in the specific case where you also have the constraint that you keep having to learn new goals every generation.\n\n\n\n\n[Yudkowsky][14:24]\nHuh???\n\n\n\n[Soares][14:24]\n(I think Richard's saying, \"that's a consequence of the genetic bottleneck\")\n\n\n\n\n[Ngo][14:25]\nRight.\nHmm, but I feel like we may have covered this ground before.\nSuggestion: I have a couple of other directions I'd like to poke at, and then we could wrap up in 20 or 30 minutes?\n\n\n\n\n[Yudkowsky][14:27]\nOK\n\nWhat are the most relevant axes of difference between solving protein folding and designing nanotech that, say, self-assembles into a computer?\n\nThough I want to mark that this question seemed potentially cruxy to me, though perhaps not for others. I.e., if building protein factories that built nanofactories that built nanomachines that met a certain deep and lofty engineering goal, didn't involve cognitive challenges different in kind from protein folding, we could maybe just safely go do that using AlphaFold 3, which would be just as safe as AlphaFold 2.\nI don't think we can do that. And I would note to the generic Other that if, to them, these both just sound like thinky things, so why can't you just do that other thinky thing too using the thinky program, this is a case where having any specific model of why we don't already have this nanoengineer right now would tell you there were specific different thinky things involved.\n\n\n \n3.4. Coherence and pivotal acts\n \n\n[Ngo][14:31]\nIn either order:\n\nI'm curious how the things we've been talking about relate to your opinions about meta-level optimisation from the AI foom debate. (I.e. talking about how wrapping around so that there's no longer any protected level of optimisation leads to dramatic change.)\nI'm curious how your claims about the \"robustness\" of consequentialism (i.e. the difficulty of channeling an agent's thinking in the directions we want it to go) relate to the reliance of humans on culture, and in particular the way in which humans raised without culture are such bad consequentialists.\n\nOn the first: if I were to simplify to the extreme, it seems like there are these two core intuitions that you've been trying to share for a long time. One is a certain type of recursive improvement, and another is a certain type of consequentialism.\n\n\n\n\n[Yudkowsky][14:32]\nThe second question didn't make much sense in my native ontology? Humans raised without culture don't have access to environmental constants whose presence their genes assume, so they end up as broken machines and then they're bad consequentialists.\n\n\n\n[Ngo][14:35]\nHmm, good point. Okay, question modification: the ways in which humans reason, act, etc, vary greatly depending on which cultures they're raised in. (I'm mostly thinking about differences over time – e.g. cavemen vs moderns.) 
My low-fidelity version of your view about consequentialists says that general consequentialists like humans possess a robust search process which isn't so easily modified.\n(Sorry if this doesn't make much sense in your ontology, I'm getting a bit tired.)\n\n\n\n\n[Yudkowsky][14:36]\nWhat is it that varies that you think I think should predict would stay more constant?\n\n\n\n[Ngo][14:37]\nGoals, styles of reasoning, deontological constraints, level of conformity.\n\n\n\n\n[Yudkowsky][14:39]\nWith regards to your first point, my first reaction was, \"I just have one view of intelligence, what you see me arguing about reflects which points people have proved weirdly obstinate about. In 2008, Robin Hanson was being weirdly obstinate about how capabilities scaled and whether there was even any point in analyzing AIs differently from ems, so I talked about what I saw as the most slam-dunk case for there being Plenty Of Room Above Biology and for stuff going whoosh once it got above the human level.\n\"It later turned out that capabilities started scaling a whole lot without self-improvement, which is an example of the kind of weird surprise the Future throws at you, and maybe a case where I missed something by arguing with Hanson instead of imagining how I could be wrong in either direction and not just the direction that other people wanted to argue with me about.\n\"Later on, people were unable to understand why alignment is hard, and got stuck on generalizing the concept I refer to as consequentialism. A theory of why I talked about both things for related reasons would just be a theory of why people got stuck on these two points for related reasons, and I think that theory would mainly be overexplaining an accident because if Yann LeCun had been running effective altruism I would have been explaining different things instead, after the people who talked a lot to EAs got stuck on a different point.\"\nReturning to your second point, humans are broken things; if it were possible to build computers while working even worse than humans, we'd be having this conversation at that level of intelligence instead.\n\n\n\n[Ngo][14:41]\n(Retracted)I entirely agree about humans, but it doesn't matter that much how broken humans are when the regime of AIs that we're talking about is the regime that's directly above humans, and therefore only a bit less broken than humans.\n\n\n\n\n[Yudkowsky][14:41]\nAmong the things to bear in mind about that, is that we then get tons of weird phenomena that are specific to humans, and you may be very out of luck if you start wishing for the same weird phenomena in AIs. Yes, even if you make some sort of attempt to train it using a loss function.\nHowever, it does seem to me like as we start getting towards the Einstein level instead of the village-idiot level, even though this is usually not much of a difference, we do start to see the atmosphere start to thin already, and the turbulence start to settle down already. Von Neumann was actually a fairly reflective fellow who knew about, and indeed helped generalize, utility functions. The great achievements of von Neumann were not achieved by some very specialized hypernerd who spent all his fluid intelligence on crystallizing math and science and engineering alone, and so never developed any opinions about politics or started thinking about whether or not he had a utility function.\n\n\n\n[Ngo][14:44]\nI don't think I'm asking for the same weird phenomena. 
But insofar as a bunch of the phenomena I've been talking about have seemed weird according to your account of consequentialism, then the fact that approximately-human-level-consequentialists have lots of weird things about them is a sign that the phenomena I've been talking about are less unlikely than you expect.\n\n\n\n\n[Yudkowsky][14:45][14:46]\nI suspect that some of the difference here is that I think you have to be noticeably better than a human at nanoengineering to pull off pivotal acts large enough to make a difference, which is why I am not instead trying to gather the smartest people left alive and doing that pivotal act directly.\n\n\n\n\nI can't think of anything you can do with somebody just barely smarter than a human, which flips the gameboard, aside of course from \"go build a Friendly AI\" which I did try to set up to just go do and which would be incredibly hard to align if we wanted an AI to do it instead (full-blown chicken-and-egg, that AI is already fully aligned).\n\n\n\n[Ngo][14:45]\nOh, interesting. Actually one more question then: to what extent do you think that explicitly reasoning about utility functions and laws of rationality is what makes consequentialists have the properties you've been talking about?\n\n\n\n\n[Yudkowsky][14:47, moved up in log]\nExplicit reflection is one possible later stage of the path; an earlier part of the path is from being optimized to do things difficult enough that you need to stop stepping on your own feet and have different parts of your thoughts work well together.\nIt's the sort of path that has only one destination at its end, so there will be many ways to get there.\n(Modulo various cases where different decision theories seem reflectively consistent and so on; I want to say \"you know what I mean\" but maybe people don't.)\n\n\n\n[Ngo][14:47, moved down in log]\n\nI suspect that some of the difference here is that I think you have to be noticeably better than a human at nanoengineering to pull off pivotal acts large enough to make a difference, which is why I am not instead trying to gather the smartest people left alive and doing that pivotal act directly.\n\nYepp, I think there's probably some disagreements about geopolitics driving this too. E.g. in my earlier summary document I mentioned some possible pivotal acts:\n\nMonitoring all potential AGI projects to an extent that makes it plausible for the US and China to work on a joint project without worrying that the other is privately racing.\nProvide arguments/demonstrations/proofs related to impending existential risk that are sufficiently compelling to scare the key global decision-makers into bottlenecking progress.\n\nI predict that you think these would not be pivotal enough; but I don't think digging into the geopolitical side of things is the best use of our time.\n\n\n\n\n[Yudkowsky][14:49, moved up in log]\nMonitoring all AGI projects – either not politically feasible in real life given the actual way that countries behave in history books instead of fantasy; or at politically feasible levels, does not work well enough to prevent the world from ending once the know-how proliferates. The AI isn't doing much work here either; why not go do this now, if it's possible? 
(Note: please don't try to go do this now, it backfires badly.)\nProvide sufficiently compelling arguments = superhuman manipulation, an incredibly dangerous domain that is just about the worst domain to try to align.\n\n\n\n[Ngo][14:49, moved down in log]\n\nWith regards to your first point, my first reaction was, \"I just have one view of intelligence, what you see me arguing about reflects which points people have proved weirdly obstinate about. In 2008, Robin Hanson was being weirdly obstinate about how capabilities scaled and whether there was even any point in analyzing AIs differently from ems, so I talked about what I saw as the most slam-dunk case for there being Plenty Of Room Above Biology and for stuff going whoosh once it got above the human level.\n\"It later turned out that capabilities started scaling a whole lot without self-improvement, which is an example of the kind of weird surprise the Future throws at you, and maybe a case where I missed something by arguing with Hanson instead of imagining how I could be wrong in either direction and not just the direction that other people wanted to argue with me about.\n\"Later on, people were unable to understand why alignment is hard, and got stuck on generalizing the concept I refer to as consequentialism. A theory of why I talked about both things for related reasons would just be a theory of why people got stuck on these two points for related reasons, and I think that theory would mainly be overexplaining an accident because if Yann LeCun had been running effective altruism I would have been explaining different things instead, after the people who talked a lot to EAs got stuck on a different point.\"\n\nOn my first point, it seems to me that your claims about recursive self-improvement were off in a fairly similar way to how I think your claims about consequentialism are off – which is that they defer too much to one very high-level abstraction.\n\n\n\n\n[Yudkowsky][14:52]\n\nOn my first point, it seems to me that your claims about recursive self-improvement were off in a fairly similar way to how I think your claims about consequentialism are off – which is that they defer too much to one very high-level abstraction.\n\nI suppose that is what it could potentially feel like from the inside to not get an abstraction. Robin Hanson kept on asking why I was trusting my abstractions so much, when he was in the process of trusting his worse abstractions instead.\n\n\n\n[Ngo][14:51][14:53]\n\nExplicit reflection is one possible later stage of the path; an earlier part of the path is from being optimized to do things difficult enough that you need to stop stepping on your own feet and have different parts of your thoughts work well together.\n\nCan you explain a little more what you mean by \"have different parts of your thoughts work well together\"? Is this something like the capacity for metacognition; or the global workspace; or self-control; or…?\n\n\n\n\n\nAnd I guess there's no good way to quantify how important you think the explicit reflection part of the path is, compared with other parts of the path – but any rough indication of whether it's a more or less crucial component of your view?\n\n\n\n\n[Yudkowsky][14:55]\n\nCan you explain a little more what you mean by \"have different parts of your thoughts work well together\"? 
Is this something like the capacity for metacognition; or the global workspace; or self-control; or…?\n\nNo, it's like when you don't, like, pay five apples for something on Monday, sell it for two oranges on Tuesday, and then trade an orange for an apple.\nI have still not figured out the homework exercises to convey to somebody the Word of Power which is \"coherence\" by which they will be able to look at the water, and see \"coherence\" in places like a cat walking across the room without tripping over itself.\nWhen you do lots of reasoning about arithmetic correctly, without making a misstep, that long chain of thoughts with many different pieces diverging and ultimately converging, ends up making some statement that is… still true and still about numbers! Wow! How do so many different thoughts add up to having this property? Wouldn't they wander off and end up being about tribal politics instead, like on the Internet?\nAnd one way you could look at this, is that even though all these thoughts are taking place in a bounded mind, they are shadows of a higher unbounded structure which is the model identified by the Peano axioms; all the things being said are true about the numbers. Even though somebody who was missing the point would at once object that the human contained no mechanism to evaluate each of their statements against all of the numbers, so obviously no human could ever contain a mechanism like that, so obviously you can't explain their success by saying that each of their statements was true about the same topic of the numbers, because what could possibly implement that mechanism which (in the person's narrow imagination) is The One Way to implement that structure, which humans don't have?\nBut though mathematical reasoning can sometimes go astray, when it works at all, it works because, in fact, even bounded creatures can sometimes manage to obey local relations that in turn add up to a global coherence where all the pieces of reasoning point in the same direction, like photons in a laser lasing, even though there's no internal mechanism that enforces the global coherence at every point.\nTo the extent that the outer optimizer trains you out of paying five apples on Monday for something that you trade for two oranges on Tuesday and then trading two oranges for four apples, the outer optimizer is training all the little pieces of yourself to be locally coherent in a way that can be seen as an imperfect bounded shadow of a higher unbounded structure, and then the system is powerful though imperfect because of how the power is present in the coherence and the overlap of the pieces, because of how the higher perfect structure is being imperfectly shadowed. 
In this case the higher structure I'm talking about is Utility, and doing homework with coherence theorems leads you to appreciate that we only know about one higher structure for this class of problems that has a dozen mathematical spotlights pointing at it saying \"look here\", even though people have occasionally looked for alternatives.\nAnd when I try to say this, people are like, \"Well, I looked up a theorem, and it talked about being able to identify a unique utility function from an infinite number of choices, but if we don't have an infinite number of choices, we can't identify the utility function, so what relevance does this have\" and this is a kind of mistake I don't remember even coming close to making so I do not know how to make people stop doing that and maybe I can't.\n\n\n\n[Soares][15:07]\nWe're already pushing our luck on time, so I nominate that we wrap up (after, perhaps, a few more Richard responses if he's got juice left.)\n\n\n\n\n[Yudkowsky][15:07]\nYeah, was thinking the same.\n\n\n\n[Soares][15:07]\nAs a proposed cliffhanger to feed into the next discussion, my take is that Richard's comment:\n\nOn my first point, it seems to me that your claims about recursive self-improvement were off in a fairly similar way to how I think your claims about consequentialism are off – which is that they defer too much to one very high-level abstraction.\n\nprobably contains some juicy part of the disagreement, and I'm interested in Eliezer understanding Richard's claim to the point of being able to paraphrase it to Richard's satisfaction.\n\n\n\n\n[Ngo][15:08]\nWrapping up here makes sense.\nI endorse the thing Nate just said.\nI also get the sense that I have a much better outline now of Eliezer's views about consequentialism (if not the actual details and texture).\nOn a meta level, I personally tend to focus more on things like \"how should we understand cognition\" and not \"how should we understand geopolitics and how it affects the level of pivotal action required\".\nIf someone else were trying to prosecute this disagreement they might say much more about the latter. I'm uncertain how useful it is for me to do so, given that my comparative advantage compared with the rest of the world (and probably Eliezer's too) is the cognition part.\n\n\n\n\n[Yudkowsky][15:12]\nReconvene… tomorrow? Monday of next week?\n\n\n\n[Ngo][15:12]\nMonday would work better for me.\nYou okay with me summarising the discussion so far to [some people — redacted for privacy reasons]?\n\n\n\n\n[Yudkowsky][15:13]\nNate, take a minute to think of your own thoughts there?\n\n\n\n\n[Soares: ]\n\n\n\n\n\n\n\n[Soares][15:15]\nMy take: I think it's fine to summarize, though generally virtuous to mark summaries as summaries (rather than asserting that your summaries are Eliezer-endorsed or w/e).\n\n\n\n\n[Ngo: ]\n\n\n\n\n\n\n\n\n[Yudkowsky][15:16]\nI think that broadly matches my take. 
I'm also a bit worried about biases in the text summarizer, and about whether I managed to say anything that Rob or somebody will object to pre-publication, but we ultimately intended this to be seen and I was keeping that in mind, so, yeah, go ahead and summarize.\n\n\n\n[Ngo][15:17]\nGreat, thanks\n\n\n\n\n[Yudkowsky][15:17]\nI admit to being curious as to what you thought was said that was important or new, but that's a question that can be left open to be answered at your leisure, earlier in your day.\n\n\n\n[Ngo][15:17]\n\nI admit to being curious as to what you thought was said that was important or new, but that's a question that can be left open to be answered at your leisure, earlier in your day.\n\nYou mean, what I thought was worth summarising?\n\n\n\n\n[Yudkowsky][15:17]\nYeah.\n\n\n\n[Ngo][15:18]\nHmm, no particular opinion. I wasn't going to go out of my way to do so, but since I'm chatting to [some people — redacted for privacy reasons] regularly anyway, it seemed low-cost to fill them in.\nAt your leisure, I'd be curious to know how well the directions of discussion are meeting your goals for what you want to convey when this is published, and whether there are topics you want to focus on more.\n\n\n\n\n[Yudkowsky][15:19]\nI don't know if it's going to help, but trying it currently seems better than to go on saying nothing.\n\n\n\n[Ngo][15:20]\n(personally, in addition to feeling like less of an expert on geopolitics, it also seems more sensitive for me to make claims about in public, which is another reason I haven't been digging into that area as much)\n\n\n\n\n[Soares][15:21]\n\n(personally, in addition to feeling like less of an expert on geopolitics, it also seems more sensitive for me to make claims about in public, which is another reason I haven't been digging into that area as much)\n\n(seems reasonable! note, though, that i'd be quite happy to have sensitive sections stricken from the record, insofar as that lets us get more convergence than we otherwise would, while we're already in the area)\n\n\n\n\n[Ngo: ]\n\n\n\n\n(tho ofc it is less valuable to spend conversational effort in private discussions, etc.)\n\n\n\n\n[Ngo: ]\n\n\n\n\n\n\n\n\n[Ngo][15:22]\n\nAt your leisure, I'd be curious to know how well the directions of discussion are meeting your goals for what you want to convey when this is published, and whether there are topics you want to focus on more.\n\n(this question aimed at you too Nate)\nAlso, thanks Nate for the moderation! I found your interventions well-timed and useful.\n\n\n\n\n[Soares: ]\n\n\n\n\n\n\n\n\n[Soares][15:23]\n\n(this question aimed at you too Nate)\n\n(noted, thanks, I'll probably write something up after you've had the opportunity to depart for sleep.)\nOn that note, I declare us adjourned, with intent to reconvene at the same time on Monday.\nThanks again, both.\n\n\n\n\n[Ngo][15:23]\nThanks both \nOh, actually, one quick point\nWould one hour earlier suit, for Monday?\nI've realised that I'll be moving to a one-hour-later time zone, and starting at 9pm is slightly suboptimal (but still possible if necessary)\n\n\n\n\n[Soares][15:24]\nOne hour earlier would work fine for me.\n\n\n\n\n[Yudkowsky][15:25]\nDoesn't work as fine for me because I've been trying to avoid any food until 12:30p my time, but on that particular day I may be more caloried than usual from the previous day, and could possibly get away with it. 
(That whole day could also potentially fail if a minor medical procedure turns out to take more recovery than it did the last time I had it.)\n\n\n\n[Ngo][15:26]\nHmm, is this something where you'd have more information on the day? (For the calories thing)\n\n\n\n\n[Yudkowsky][15:27]\n\n(seems reasonable! note, though, that i'd be quite happy to have sensitive sections stricken from the record, insofar as that lets us get more convergence than we otherwise would, while we're already in the area)\n\nI'm a touch reluctant to have discussions that we intend to delete, because then the larger debate will make less sense once those sections are deleted. Let's dance around things if we can.\n\n\n\n\n[Ngo: ]\n[Soares: ]\n\n\n\n\nI mean, I can that day at 10am my time say how I am doing and whether I'm in shape for that day.\n\n\n\n[Ngo][15:28]\ngreat. and if at that point it seems net positive to postpone to 11am your time (at the cost of me being a bit less coherent later on) then feel free to say so at the time\non that note, I'm off\n\n\n\n\n[Yudkowsky][15:29]\nGood night, heroic debater!\n\n\n\n[Soares][16:11]\n\nAt your leisure, I'd be curious to know how well the directions of discussion are meeting your goals for what you want to convey when this is published, and whether there are topics you want to focus on more.\n\nThe discussions so far are meeting my goals quite well so far! (Slightly better than my expectations, hooray.) Some quick rough notes:\n\nI have been enjoying EY explicating his models around consequentialism.\n\nThe objections Richard has been making are ones I think have been floating around for some time, and I'm quite happy to see explicit discussion on it.\nAlso, I've been appreciating the conversational virtue with which the two of you have been exploring it. (Assumption of good intent, charity, curiosity, etc.)\n\n\nI'm excited to dig into Richard's sense that EY was off about recursive self improvement, and is now off about consequentialism, in a similar way.\n\nThis also seems to me like a critique that's been floating around for some time, and I'm looking forward to getting more clarity on it.\n\n\nI'm a bit torn between driving towards clarity on the latter point, and shoring up some of the progress on the former point.\n\nOne artifact I'd really enjoy having is some sort of \"before and after\" take, from Richard, contrasting his model of EY's views before, to his model now.\nI also have a vague sense that there are some points Eliezer was trying to make, that didn't quite feel like they were driven home; and dually, some pushback by Richard that didn't feel quite frontally answered.\n\nOne thing I may do over the next few days is make a list of those places, and see if I can do any distilling on my own. (No promises, though.)\nIf that goes well, I might enjoy some side-channel back-and-forth with Richard about it, eg during some more convenient-for-Richard hour (or, eg, as a thing to do on Monday if EY's not in commission at 10a pacific.)\n\n\n\n\n\n\n\n\n\n[Ngo][5:40]  (next day, Sep. 9)\n\nThe discussions so far are […]\n\nWhat do you mean by \"latter point\" and \"former point\"? (In your 6th bullet point)\n\n\n\n\n[Soares][7:09]  (next day, Sep. 9)\n\nWhat do you mean by \"latter point\" and \"former point\"? (In your 6th bullet point)\n\nformer = shoring up the consequentialism stuff, latter = digging into your critique re: recursive self improvement etc. 
(The nesting of the bullets was supposed to help make that clear, but didn't come out well in this format, oops.)\n\n\n\n \n4. Follow-ups\n \n4.1. Richard Ngo's summary\n \n\n[Ngo]  (Sep. 10 Google Doc)\n2nd discussion\n(Mostly summaries not quotations; also hasn't yet been evaluated by Eliezer)\nEliezer, summarized by Richard: \"The A core concept which people have trouble grasping is consequentialism. People try to reason about how AIs will solve problems, and ways in which they might or might not be dangerous. But they don't realise that the ability to solve a wide range of difficult problems implies that an agent must be doing a powerful search over possible solutions, which is the a core skill required to take actions which greatly affect the world. Making this type of AI safe is like trying to build an AI that drives red cars very well, but can't drive blue cars – there's no way you get this by default, because the skills involved are so similar. And because the search process is so general is by default so general, it'll be very hard to I don't currently see how to constrain it into any particular region.\"\n\n\n\n\n[Yudkowsky][10:48]  (Sep. 10 comment)\n\nThe\n\nA concept, which some people have had trouble grasping.  There seems to be an endless list.  I didn't have to spend much time contemplating consequentialism to derive the consequences.  I didn't spend a lot of time talking about it until people started arguing.\n\n\n\n[Yudkowsky][10:50]  (Sep. 10 comment)\n\nthe\n\na\n\n\n\n[Yudkowsky][10:52]  (Sep. 10 comment)\n\n[the search process] is [so general]\n\n\"is by default\".  The reason I keep emphasizing that things are only true by default is that the work of surviving may look like doing hard nondefault things.  I don't take fatalistic \"will happen\" stances, I assess difficulties of getting nondefault results.\n\n\n\n[Yudkowsky][10:52]  (Sep. 10 comment)\n\nit'll be very hard to\n\n\"I don't currently see how to\"\n\n\n\n[Ngo]  (Sep. 10 Google Doc)\nEliezer, summarized by Richard (continued): \"In biological organisms, evolution is one source the ultimate source of consequentialism. A second secondary outcome of evolution is reinforcement learning. For an animal like a cat, upon catching a mouse (or failing to do so) many parts of its brain get slightly updated, in a loop that makes it more likely to catch the mouse next time. (Note, however, that this process isn't powerful enough to make the cat a pure consequentialist – rather, it has many individual traits that, when we view them from this lens, point in the same direction.) A third thing that makes humans in particular consequentialist is planning, Another outcome of evolution, which helps make humans in particular more consequentialist, is planning – especially when we're aware of concepts like utility functions.\"\n\n\n\n\n[Yudkowsky][10:53]  (Sep. 10 comment)\n\none\n\nthe ultimate\n\n\n\n[Yudkowsky][10:53]  (Sep. 10 comment)\n\nsecond\n\nsecondary outcome of evolution\n\n\n\n[Yudkowsky][10:55]  (Sep. 10 comment)\n\nespecially when we're aware of concepts like utility functions\n\nVery slight effect on human effectiveness in almost all cases because humans have very poor reflectivity.\n\n\n\n[Ngo]  (Sep. 10 Google Doc)\nRichard, summarized by Richard: \"Consider an AI that, given a hypothetical scenario, tells us what the best plan to achieve a certain goal in that scenario is. Of course it needs to do consequentialist reasoning to figure out how to achieve the goal. 
But that's different from an AI which chooses what to say as a means of achieving its goals. I'd argue that the former is doing consequentialist reasoning without itself being a consequentialist, while the latter is actually a consequentialist. Or more succinctly: consequentialism = problem-solving skills + using those skills to choose actions which achieve goals.\"\nEliezer, summarized by Richard: \"The former AI might be slightly safer than the latter if you could build it, but I think people are likely to dramatically overestimate how big the effect is. The difference could just be one line of code: if we give the former AI our current scenario as its input, then it becomes the latter.  For purposes of understanding alignment difficulty, you want to be thinking on the level of abstraction where you see that in some sense it is the search itself that is dangerous when it's a strong enough search, rather than the danger seeming to come from details of the planning process. One particularly helpful thought experiment is to think of advanced AI as an 'outcome pump' which selects from futures in which a certain outcome occurred, and takes whatever action leads to them.\"\n\n\n\n\n[Yudkowsky][10:59]  (Sep. 10 comment)\n\nparticularly helpful\n\n\"attempted explanatory\".  I don't think most readers got it.\nI'm a little puzzled by how often you write my viewpoint as thinking that whatever I happened to say a sentence about is the Key Thing.  It seems to rhyme with a deeper failure of many EAs to pass the MIRI ITT.\nTo be a bit blunt and impolite in hopes that long-languishing social processes ever get anywhere, two obvious uncharitable explanations for why some folks may systematically misconstrue MIRI/Eliezer as believing much more than in reality that various concepts an argument wanders over are Big Ideas to us, when some conversation forces us to go to that place:\n(A)  It paints a comfortably unflattering picture of MIRI-the-Other as weirdly obsessed with these concepts that seem not so persuasive, or more generally paints the Other as a bunch of weirdos who stumbled across some concept like \"consequentialism\" and got obsessed with it.  In general, to depict the Other as thinking a great deal of some idea (or explanatory thought experiment) is to tie and stake their status to the listener's view of how much status that idea deserves.  So if you say that the Other thinks a great deal of some idea that isn't obviously high-status, that lowers the Other's status, which can be a comfortable thing to do.\n(cont.)\n(B) It paints a more comfortably self-flattering picture of a continuing or persistent disagreement, as a disagreement with somebody who thinks that some random concept is much higher-status than it really is, in which case there isn't more to be done or understood except to duly politely let the other person try to persuade you the concept deserves its high status. As opposed to, \"huh, maybe there is a noncentral point that the other person sees themselves as being stopped on and forced to explain to me\", which is a much less self-flattering viewpoint on why the conversation is staying within a place.  And correspondingly more of a viewpoint that somebody else is likely to have of us, because it is a comfortable view to them, than a viewpoint that it is comfortable to us to imagine them having.\nTaking the viewpoint that somebody else is getting hung up on a relatively noncentral point can also be a flattering self-portrait to somebody who believes that, of course.  
It doesn't mean they're right.  But it does mean that you should be aware of how the Other's story, told from the Other's viewpoint, is much more liable to be something that the Other finds sensible and perhaps comfortable, even if it implies an unflattering (and untrue-seeming and perhaps untrue) view of yourself, than something that makes the Other seem weird and silly and which it is easy and congruent for you yourself to imagine the Other thinking.\n\n\n\n[Ngo][11:18]  (Sep. 12 comment)\n\nI'm a little puzzled by how often you write my viewpoint as thinking that whatever I happened to say a sentence about is the Key Thing.\n\nIn this case, I emphasised the outcome pump thought experiment because you said that the time-travelling scenario was a key moment for your understanding of optimisation, and the outcome pump seemed to be similar enough and easier to convey in the summary, since you'd already written about it.\nI'm also emphasising consequentialism because it seemed like the core idea which kept coming up in our first debate, under the heading of \"deep problem-solving patterns\". Although I take your earlier point that you tend to emphasise things that your interlocutor is more skeptical about, not necessarily the things which are most central to your view. But if consequentialism isn't in fact a very central concept for you, I'd be interested to hear what role it plays.\n\n\n\n\n[Ngo]  (Sep. 10 Google Doc)\nRichard, summarized by Richard: \"There's a component of 'finding a plan which achieves a certain outcome' which involves actually solving the object-level problem of how someone who is given the plan can achieve the outcome. And there's another component which is figuring out how to manipulate that person into doing what you want. To me it seems like Eliezer's argument is that there's no training regime which leads an AI to spend 99% of its time thinking about the former, and 1% thinking about the latter.\"\n\n\n\n\n[Yudkowsky][11:20]  (Sep. 10 comment)\n\nno training regime\n\n…that the training regimes we come up with first, in the 3 months or 2 years we have before somebody else destroys the world, will not have this property.\nI don't have any particularly complicated or amazingly insightful theories of why I keep getting depicted as a fatalist; but my world is full of counterfactual functions, not constants.  And I am always aware that if we had access to a real Textbook from the Future explaining all of the methods that are actually robust in real life – the equivalent of telling us in advance about all the ReLUs that in real life were only invented and understood a few decades after sigmoids – we could go right ahead and build a superintelligence that thinks 2 + 2 = 5.\nAll of my assumptions about \"I don't see how to do X\" are always labeled as ignorance on my part and a default because we won't have enough time to actually figure out how to do X.  I am constantly maintaining awareness of this because being wrong about it being difficult is a major place where hope potentially comes from, if there's some idea like ReLUs that robustly vanquishes the difficulty, which I just didn't think of.  
Which does not, alas, mean that I am wrong about any particular thing, nor that the infinite source of optimistic ideas that is the wider field of \"AI alignment\" is going to produce a good idea from the same process that generates all the previous naive optimism through not seeing where the original difficulty comes from or what other difficulties surround obvious naive attempts to solve it.\n\n\n\n[Ngo]  (Sep. 10 Google Doc)\nRichard, summarized by Richard (continued): \"While this may be true in the limit of increasing intelligence, the most relevant systems are the earliest ones that are above human level. But humans deviate from the consequentialist abstraction you're talking about in all sorts of ways – for example, being raised in different cultures can make people much more or less consequentialist. So it seems plausible that early AGIs can be superhuman while also deviating strongly from this abstraction – not necessarily in the same ways as humans, but in ways that we push them towards during training.\"\nEliezer, summarized by Richard: \"Even at the Einstein or von Neumann level these types of deviations start to subside. And the sort of pivotal acts which might realistically work require skills significantly above human level. I think even 1% of the cognition of an AI that can assemble advanced nanotech, thinking about how to kill humans, would doom us. Your other suggestions for pivotal acts (surveillance to restrict AGI proliferation; persuading world leaders to restrict AI development) are not politically feasible in real life, to the level required to prevent the world from ending; or else require alignment in the very dangerous domain of superhuman manipulation.\"\nRichard, summarized by Richard: \"I think we probably also have significant disagreements about geopolitics which affect which acts we expect to be pivotal, but it seems like our comparative advantage is in discussing cognition, so let's focus on that. We can build systems that outperform humans at quite a few tasks by now, without them needing search abilities that are general enough to even try to take over the world. Putting aside for a moment the question of which tasks are pivotal enough to save the world, which parts of your model draw the line between human-level chess players and human-level galaxy-colonisers, and say that we'll be able to align ones that significantly outperform us on these tasks before they take over the world, but not on those tasks?\"\nEliezer, summarized by Richard: \"One aspect there is domain generality which in turn is achieved through novel domain learning. One can imagine asking the question: is there a superintelligent AGI that can quickly build nanotech the way that a beaver solves building dams, in virtue of having a bunch of specialized learning abilities without it ever having a cross-domain general learning ability? But there are many, many, many things that humans do which no other animal does, which you might think would contribute a lot to that animal's fitness if there were animalistic ways to do it – e.g. mining and smelting iron. (Although comparisons to animals are not generally reliable arguments about what AIs can do – e.g. chess is much easier for chips than neurons.) So my answer is 'Perhaps, but not by default, there's a bunch of subproblems, I don't actually know how to do it right now, it's not the easiest way to get an AGI that can build nanotech.' Can I explain how I know that? I'm really not sure I can.\"\n\n\n\n\n[Yudkowsky][11:26]  (Sep. 
10 comment)\n\nCan I explain how I know that? I'm really not sure I can.\n\nIn original text, this sentence was followed by a long attempt to explain anyways; if deleting that, which is plausibly the correct choice, this lead-in sentence should also be deleted, as otherwise it paints a false picture of how much I would try to explain anyways.\n\n\n\n[Ngo][11:15]  (Sep. 12 comment)\nMakes sense; deleted.\n\n\n\n\n[Ngo]  (Sep. 10 Google Doc)\nRichard, summarized by Richard: \"Challenges which are trivial from a human-engineering perspective can be very challenging from an evolutionary perspective (e.g. spinning wheels). So the evolution of animals-with-a-little-bit-of-help-from-humans might end up in very different places from the evolution of animals-just-by-themselves. And analogously, the ability of humans to fill in the gaps to help less general AIs achieve more might be quite significant.\n\"On nanotech: what are the most relevant axes of difference between solving protein folding and designing nanotech that, say, self-assembles into a computer?\"\nEliezer, summarized by Richard: \"This question seemed potentially cruxy to me. I.e., if building protein factories that built nanofactories that built nanomachines that met a certain deep and lofty engineering goal, didn't involve cognitive challenges different in kind from protein folding, we could maybe just safely go do that using AlphaFold 3, which would be just as safe as AlphaFold 2. I don't think we can do that. But it is among the more plausible advance-specified miracles we could get. At this point our last hope is that in fact the future is often quite surprising.\"\nRichard, summarized by Richard: \"It seems to me that you're making the same mistake here as you did with regards to recursive self-improvement in the AI foom debate – namely, putting too much trust in one big abstraction.\"\nEliezer, summarized by Richard: \"I suppose that is what it could potentially feel like from the inside to not get an abstraction.  Robin Hanson kept on asking why I was trusting my abstractions so much, when he was in the process of trusting his worse abstractions instead.\"\n\n\n\n \n4.2. Nate Soares' summary\n \n\n[Soares]  (Sep. 12 Google Doc)\nConsequentialism\nOk, here's a handful of notes. I apologize for not getting them out until midday Sunday. My main intent here is to do some shoring up of the ground we've covered. I'm hoping for skims and maybe some light comment back-and-forth as seems appropriate (perhaps similar to Richard's summary), but don't think we should derail the main thread over it. If time is tight, I would not be offended for these notes to get little-to-no interaction.\n—\nMy sense is that there's a few points Eliezer was trying to transmit about consequentialism, that I'm not convinced have been received. I'm going to take a whack at it. I may well be wrong, both about whether Eliezer is in fact attempting to transmit these, and about whether Richard received them; I'm interested in both protests from Eliezer and paraphrases from Richard.\n\n\n\n\n[Soares]  (Sep. 12 Google Doc)\n1. 
\"The consequentialism is in the plan, not the cognition\".\nI think Richard and Eliezer are coming at the concept \"consequentialism\" from very different angles, as evidenced eg by Richard saying (Nate's crappy paraphrase:) \"where do you think the consequentialism is in a cat?\" and Eliezer responding (Nate's crappy paraphrase:) \"the cause of the apparent consequentialism of the cat's behavior is distributed between its brain and its evolutionary history\".\nIn particular, I think there's an argument here that goes something like:\n\nObserve that, from our perspective, saving the world seems quite tricky, and seems likely to involve long sequences of clever actions that force the course of history into a narrow band (eg, because if we saw short sequences of dumb actions, we could just get started).\nSuppose we were presented with a plan that allegedly describes a long sequence of clever actions that would, if executed, force the course of history into some narrow band.\n\nFor concreteness, suppose it is a plan that allegedly funnels history into the band where we have wealth and acclaim.\n\n\nOne plausible happenstance is that the plan is not in fact clever, and would not in fact have a forcing effect on history.\n\nFor example, perhaps the plan describes founding and managing some silicon valley startup, that would not work in practice.\n\n\nConditional on the plan having the history-funnelling property, there's a sense in which it's scary regardless of its source.\n\nFor instance, perhaps the plan describes founding and managing some silicon valley startup, and will succeed virtually every time it's executed, by dint of having very generic descriptions of things like how to identify and respond to competition, including descriptions of methods for superhumanly-good analyses of how to psychoanalyze the competition and put pressure on their weakpoints.\nIn particular, note that one need not believe the plan was generated by some \"agent-like\" cognitive system that, in a self-contained way, made use of reasoning we'd characterize as \"possessing objectives\" and \"pursuing them in the real world\".\nMore specifically, the scariness is a property of the plan itself. For instance, the fact that this plan accrues wealth and acclaim to the executor, in a wide variety of situations, regardless of what obstacles arise, implies that the plan contains course-correcting mechanisms that keep the plan on-target.\nIn other words, plans that manage to actually funnel history are (the argument goes) liable to have a wide variety of course-correction mechanisms that keep the plan oriented towards some target. And while this course-correcting property tends to be a property of history-funneling plans, the choice of target is of course free, hence the worry.\n\n\n\n(Of course, in practice we perhaps shouldn't be visualizing a single Plan handed to us from an AI or a time machine or whatever, but should instead imagine a system that is reacting to contingencies and replanning in realtime. At the least, this task is easier, as one can adjust only for the contingencies that are beginning to arise, rather than needing to predict them all in advance and/or describe general contingency-handling mechanisms. 
But, and feel free to take a moment to predict my response before reading the next sentence, \"run this AI that replans autonomously on-the-fly\" and \"run this AI+human loop that replans+reevaluates on the fly\", are still in this sense \"plans\", that still likely have the property of Eliezer!consequentialism, insofar as they work.)\n\n\n\n\n[Soares]  (Sep. 12 Google Doc)\nThere's a part of this argument I have not yet driven home. Factoring it out into a separate bullet:\n2. \"If a plan is good enough to work, it's pretty consequentialist in practice\".\nIn attempts to collect and distill a handful of scattered arguments of Eliezer's:\nIf you ask GPT-3 to generate you a plan for saving the world, it will not manage to generate one that is very detailed. And if you tortured a big language model into giving you a detailed plan for saving the world, the resulting plan would not work. In particular, it would be full of errors like insensitivity to circumstance, suggesting impossible actions, and suggesting actions that run entirely at cross-purposes to one another.\nA plan that is sensitive to circumstance, and that describes actions that synergize rather than conflict — like, in Eliezer's analogy, photons in a laser — is much better able to funnel history into a narrow band.\nBut, on Eliezer's view as I understand it, this \"the plan is not constantly tripping over its own toes\" property, goes hand-in-hand with what he calls \"consequentialism\". As a particularly stark and formal instance of the connection, observe that one way a plan can trip over its own toes is if it says \"then trade 5 oranges for 2 apples, then trade 2 apples for 4 oranges\". This is clearly an instance of the plan failing to \"lase\" — of some orange-needing part of the plan working at cross-purposes to some apple-needing part of the plan, or something like that. And this is also a case where it's easy to see how if a plan is \"lasing\" with respect to apples and oranges, then it is behaving as if governed by some coherent preference.\nAnd the point as I understand it isn't \"all toe-tripping looks superficially like an inconsistent preference\", but rather \"insofar as a plan does manage to chain a bunch of synergistic actions together, it manages to do so precisely insofar as it is Eliezer!consequentialist\".\ncf the analogy to information theory, where if you're staring at a maze and you're trying to build an accurate representation of that maze in your own head, you will succeed precisely insofar as your process is Bayesian / information-theoretic. And, like, this is supposed to feel like a fairly tautological claim: you (almost certainly) can't get the image of a maze in your head to match the maze in the world by visualizing a maze at random, you have to add visualized-walls using some process that's correlated with the presence of actual walls. Your maze-visualizing process will work precisely insofar as you have access to & correctly make use of, observations that correlate with the presence of actual walls. You might also visualize extra walls in locations where it's politically expedient to believe that there's a wall, and you might also avoid visualizing walls in a bunch of distant regions of the maze because it's dark and you haven't got all day, but the resulting visualization in your head is accurate precisely insofar as you're managing to act kinda like a Bayesian.\nSimilarly (the analogy goes), a plan works-in-concert and avoids-stepping-on-its-own-toes precisely insofar as it is consequentialist. 
These are two sides of the same coin, two ways of seeing the same thing.\nAnd, I'm not so much attempting to argue the point here, as to make sure that the shape of the argument (as I understand it) has been understood by Richard. In particular, the shape of the argument I see Eliezer as making is that \"clumsy\" plans don't work, and \"laser-like plans\" work insofar as they are managing to act kinda like a consequentialist.\nRephrasing again: we have a wide variety of mathematical theorems all spotlighting, from different angles, the fact that a plan lacking in clumsiness, is possessing of coherence.\n(\"And\", my model of Eliezer is quick to note, \"this ofc does not mean that all sufficiently intelligent minds must generate very-coherent plans. If you really knew what you were doing, you could design a mind that emits plans that always \"trip over themselves\" along one particular axis, just as with sufficient mastery you could build a mind that believes 2+2=5 (for some reasonable cashing-out of that claim). But you don't get this for free — and there's a sort of \"attractor\" here, when building cognitive systems, where just as generic training will tend to cause it to have true beliefs, so will generic training tend to cause its plans to lase.\")\n(And ofc much of the worry is that all the mathematical theorems that suggest \"this plan manages to work precisely insofar as it's lasing in some direction\", say nothing about which direction it must lase. Hence, if you show me a plan clever enough to force history into some narrow band, I can be fairly confident it's doing a bunch of lasing, but not at all confident which direction it's lasing in.)\n\n\n\n\n[Soares]  (Sep. 12 Google Doc)\nOne of my guesses is that Richard does in fact understand this argument (though I personally would benefit from a paraphrase, to test this hypothesis!), and perhaps even buys it, but that Richard gets off the train at a following step, namely that we need plans that \"lase\", because ones that don't aren't strong enough to save us. (Where in particular, I suspect most of the disagreement is in how far one can get with plans that are more like language-model outputs and less like lasers, rather than in the question of which pivotal acts would put an end to the acute risk period)\nBut setting that aside for a moment, I want to use the above terminology to restate another point I saw Eliezer as attempting to make: one big trouble with alignment, in the case where we need our plans to be like lasers, is that on the one hand we need our plans to be like lasers, but on the other hand we want them to fail to be like lasers along certain specific dimensions.\nFor instance, the plan presumably needs to involve all sorts of mechanisms for refocusing the laser in the case where the environment contains fog, and redirecting the laser in the case where the environment contains mirrors (…the analogy is getting a bit strained here, sorry, bear with me), so that it can in fact hit a narrow and distant target. 
Refocusing and redirecting to stay on target are part and parcel to plans that can hit narrow distant targets.\nBut the humans shutting the AI down is like scattering the laser, and the humans tweaking the AI so that it plans in a different direction is like them tossing up mirrors that redirect the laser; and we want the plan to fail to correct for those interferences.\nAs such, on the Eliezer view as I understand it, we can see ourselves as asking for a very unnatural sort of object: a path-through-the-future that is robust enough to funnel history into a narrow band in a very wide array of circumstances, but somehow insensitive to specific breeds of human-initiated attempts to switch which narrow band it's pointed towards.\nOk. I meandered into trying to re-articulate the point over and over until I had a version distilled enough for my own satisfaction (which is much like arguing the point), apologies for the repetition.\nI don't think debating the claim is the right move at the moment (though I'm happy to hear rejoinders!). Things I would like, though, are: Eliezer saying whether the above is on-track from his perspective (and if not, then poking a few holes); and Richard attempting to paraphrase the above, such that I believe the arguments themselves have been communicated (saying nothing about whether Richard also buys them).\n—\n\n\n\n\n[Soares]  (Sep. 12 Google Doc)\nMy Richard-model's stance on the above points is something like \"This all seems kinda plausible, but where Eliezer reads it as arguing that we had better figure out how to handle lasers, I read it as an argument that we'd better save the world without needing to resort to lasers. Perhaps if I thought the world could not be saved except by lasers, I would share many of your concerns, but I do not believe that, and in particular it looks to me like much of the recent progress in the field of AI — from AlphaGo to GPT to AlphaFold — is evidence in favor of the proposition that we'll be able to save the world without lasers.\"\nAnd I recall actual-Eliezer saying the following (more-or-less in response, iiuc, though readers note that I might be misunderstanding and this might be out-of-context):\n\nDefinitely, \"turns out it's easier than you thought to use gradient descent's memorization of zillions of shallow patterns that overlap and recombine into larger cognitive structures, to add up to a consequentialist nanoengineer that only does nanosystems and never does sufficiently general learning to apprehend the big picture containing humans, while still understanding the goal for that pivotal act you wanted to do\" is among the more plausible advance-specified miracles we could get. \n\nOn my view, and I think on Eliezer's, the \"zillions of shallow patterns\"-style AI that we see today, is not going to be sufficient to save the world (nor destroy it). There's a bunch of reasons that GPT and AlphaZero aren't destroying the world yet, and one of them is this \"shallowness\" property. And, yes, maybe we'll be wrong! I myself have been surprised by how far the shallow pattern memorization has gone (and, for instance, was surprised by GPT), and acknowledge that perhaps I will continue to be surprised. But I continue to predict that the shallow stuff won't be enough.\nI have the sense that lots of folk in the community are, one way or another, saying \"Why not consider the problems of aligning systems that memorize zillions of shallow patterns?\". 
And my answer is, \"I still don't expect those sorts of machines to either kill or save us, I'm still expecting that there's a phase shift that won't happen until AI systems start to be able to make plans that are sufficiently deep and laserlike to do scary stuff, and I'm still expecting that the real alignment challenges are in that regime.\"\nAnd this seems to me close to the heart of the disagreement: some people (like me!) have an intuition that it's quite unlikely that figuring out how to get sufficient work out of shallow-memorizers is enough to save us, and I suspect others (perhaps even Richard!) have the sense that the aforementioned \"phase shift\" is the unlikely scenario, and that I'm focusing on a weird and unlucky corner of the space. (I'm curious whether you endorse this, Richard, or some nearby correction of it.)\nIn particular, Richard, I am curious whether you endorse something like the following:\n\nI'm focusing ~all my efforts on the shallow-memorizers case, because I think shallow-memorizer-alignment will by and large be sufficient, and even if it is not then I expect it's a good way to prepare ourselves for whatever we'll turn out to need in practice. In particular I don't put much stock in the idea that there's a predictable phase-change that forces us to deal with laser-like planners, nor that predictable problems in that domain give large present reason to worry.\n\n(I suspect not, at least not in precisely this form, and I'm eager for corrections.)\nI suspect something in this vicinity constitutes a crux of the disagreement, and I would be thrilled if we could get it distilled down to something as concise as the above. And, for the record, I personally endorse the following counter to the above:\n\nI am focusing ~none of my efforts on shallow-memorizer-alignment, as I expect it to be far from sufficient, as I do not expect a singularity until we have more laser-like systems, and I think that the laserlike-planning regime has a host of predictable alignment difficulties that Earth does not seem at all prepared to face (unlike, it seems to me, the shallow-memorizer alignment difficulties), and as such I have large and present worries.\n\n—\n\n\n\n\n[Soares]  (Sep. 12 Google Doc)\nOk, and now a few less substantial points:\nThere's a point Richard made here:\n\nOh, interesting. Actually one more question then: to what extent do you think that explicitly reasoning about utility functions and laws of rationality is what makes consequentialists have the properties you've been talking about?\n\nthat I suspect constituted a miscommunication, especially given that the following sentence appeared in Richard's summary:\n\nA third thing that makes humans in particular consequentialist is planning, especially when we're aware of concepts like utility functions.\n\nIn particular, I suspect Richard's model of Eliezer's model places (or placed, before Richard read Eliezer's comments on Richard's summary) some particular emphasis on systems reflecting and thinking about their own strategies, as a method by which the consequentialism and/or effectiveness gets in. 
I suspect this is a misunderstanding, and am happy to say more on my model upon request, but am hopeful that the points I made a few pages above have cleared this up.\nFinally, I observe that there are a few places where Eliezer keeps beeping when Richard attempts to summarize him, and I suspect it would be useful to do the dorky thing of Richard very explicitly naming Eliezer's beeps as he understands them, for purposes of getting common knowledge of understanding. For instance, things I think it might be useful for Richard to say verbatim (assuming he believes them, which I suspect, and subject to Eliezer-corrections, b/c maybe I'm saying things that induce separate beeps):\n1. Eliezer doesn't believe it's impossible to build AIs that have most any given property, including most any given safety property, including most any desired \"non-consequentialist\" or \"deferential\" property you might desire. Rather, Eliezer believes that many desirable safety properties don't happen by default, and require mastery of minds that likely takes a worrying amount of time to acquire.\n2. The points about consequentialism are not particularly central in Eliezer's view; they seem to him more like obvious background facts; the reason conversation has lingered here in the EA-sphere is that this is a point that many folk in the local community disagree on.\nFor the record, I think it might also be worth Eliezer acknowledging that Richard probably understands point (1), and that glossing \"you don't get it for free by default and we aren't on course to have the time to get it\" as \"you can't\" is quite reasonable when summarizing. (And it might be worth Richard counter-acknowledging that the distinction is actually quite important once you buy the surrounding arguments, as it constitutes the difference between describing the current playing field and laying down to die.) I don't think any of these are high-priority, but they might be useful if easy \n—\nFinally, stating the obvious-to-me, none of this is intended as criticism of either party, and all discussing parties have exhibited significant virtue-according-to-Nate throughout this process.\n\n\n\n\n[Yudkowsky][21:27]  (Sep. 12)\nFrom Nate's notes:\n\nFor instance, the plan presumably needs to involve all sorts of mechanisms for refocusing the laser in the case where the environment contains fog, and redirecting the laser in the case where the environment contains mirrors (…the analogy is getting a bit strained here, sorry, bear with me), so that it can in fact hit a narrow and distant target. 
Refocusing and redirecting to stay on target are part and parcel to plans that can hit narrow distant targets.\nBut the humans shutting the AI down is like scattering the laser, and the humans tweaking the AI so that it plans in a different direction is like them tossing up mirrors that redirect the laser; and we want the plan to fail to correct for those interferences.\n\n–> GOOD ANALOGY.\n…or at least it sure conveys to me why corrigibility is anticonvergent / anticoherent / actually moderately strongly contrary to and not just an orthogonal property of a powerful-plan generator.\nBut then, I already know why that's true and how it generalized up to resisting our various attempts to solve small pieces of more important aspects of it – it's not just true by weak default, it's true by a stronger default where a roomful of people at a workshop spend several days trying to come up with increasingly complicated ways to describe a system that will let you shut it down (but not steer you through time into shutting it down), and all of those suggested ways get shot down. (And yes, people outside MIRI now and then publish papers saying they totally just solved this problem, but all of those \"solutions\" are things we considered and dismissed as trivially failing to scale to powerful agents – they didn't understand what we considered to be the first-order problems in the first place – rather than these being evidence that MIRI just didn't have smart-enough people at the workshop.)\n\n\n\n[Yudkowsky][18:56]  (Nov. 5 follow-up comment)\nEg, \"Well, we took a system that only learned from reinforcement on situations it had previously been in, and couldn't use imagination to plan for things it had never seen, and then we found that if we didn't update it on shut-down situations it wasn't reinforced to avoid shutdowns!\"\n\n\n \n\nThe post Ngo and Yudkowsky on alignment difficulty appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Ngo and Yudkowsky on alignment difficulty", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=4", "id": "f3e03c4c7c06b6bd11875ae1579786b4"} {"text": "Discussion with Eliezer Yudkowsky on AGI interventions\n\n\n \nThe following is a partially redacted and lightly edited transcript of a chat conversation about AGI between Eliezer Yudkowsky and a set of invitees in early September 2021. By default, all other participants are anonymized as \"Anonymous\".\nI think this Nate Soares quote (excerpted from Nate's response to a report by Joe Carlsmith) is a useful context-setting preface regarding timelines, which weren't discussed as much in the transcript:\n \n[…] My odds [of AGI by the year 2070] are around 85%[…]\nI can list a handful of things that drive my probability of AGI-in-the-next-49-years above 80%:\n1. 50 years ago was 1970. The gap between AI systems then and AI systems now seems pretty plausibly greater than the remaining gap, even before accounting the recent dramatic increase in the rate of progress, and potential future increases in rate-of-progress as it starts to feel within-grasp.\n2. I observe that, 15 years ago, everyone was saying AGI is far off because of what it couldn't do — basic image recognition, go, starcraft, winograd schemas, programmer assistance. But basically all that has fallen. The gap between us and AGI is made mostly of intangibles. (Computer Programming That Is Actually Good? Theorem proving? 
Sure, but on my model, \"good\" versions of those are a hair's breadth away from full AGI already. And the fact that I need to clarify that \"bad\" versions don't count, speaks to my point that the only barriers people can name right now are intangibles.) That's a very uncomfortable place to be!\n3. When I look at the history of invention, and the various anecdotes about the Wright brothers and Enrico Fermi, I get an impression that, when a technology is pretty close, the world looks a lot like how our world looks.\n\nOf course, the trick is that when a technology is a little far, the world might also look pretty similar!\nThough when a technology is very far, the world does look different — it looks like experts pointing to specific technical hurdles. We exited that regime a few years ago.\n\n4. Summarizing the above two points, I suspect that I'm in more-or-less the \"penultimate epistemic state\" on AGI timelines: I don't know of a project that seems like they're right on the brink; that would put me in the \"final epistemic state\" of thinking AGI is imminent. But I'm in the second-to-last epistemic state, where I wouldn't feel all that shocked to learn that some group has reached the brink. Maybe I won't get that call for 10 years! Or 20! But it could also be 2, and I wouldn't get to be indignant with reality. I wouldn't get to say \"but all the following things should have happened first, before I made that observation\". I have made those observations.\n5. It seems to me that the Cotra-style compute-based model provides pretty conservative estimates. For one thing, I don't expect to need human-level compute to get human-level intelligence, and for another I think there's a decent chance that insight and innovation have a big role to play, especially on 50 year timescales.\n6. There has been a lot of AI progress recently. When I tried to adjust my beliefs so that I was positively surprised by AI progress just about as often as I was negatively surprised by AI progress, I ended up expecting a bunch of rapid progress. […]\n \nFurther preface by Eliezer:\nIn some sections here, I sound gloomy about the probability that coordination between AGI groups succeeds in saving the world.  Andrew Critch reminds me to point out that gloominess like this can be a self-fulfilling prophecy – if people think successful coordination is impossible, they won't try to coordinate.  I therefore remark in retrospective advance that it seems to me like at least some of the top AGI people, say at Deepmind and Anthropic, are the sorts who I think would rather coordinate than destroy the world; my gloominess is about what happens when the technology has propagated further than that.  But even then, anybody who would rather coordinate and not destroy the world shouldn't rule out hooking up with Demis, or whoever else is in front if that person also seems to prefer not to completely destroy the world.  (Don't be too picky here.)  Even if the technology proliferates and the world ends a year later when other non-coordinating parties jump in, it's still better to take the route where the world ends one year later instead of immediately.  Maybe the horse will sing.\n\n\n \nEliezer Yudkowsky\nHi and welcome. Points to keep in mind:\n– I'm doing this because I would like to learn whichever actual thoughts this target group may have, and perhaps respond to those; that's part of the point of anonymity. 
If you speak an anonymous thought, please have that be your actual thought that you are thinking yourself, not something where you're thinking \"well, somebody else might think that…\" or \"I wonder what Eliezer's response would be to…\"\n– Eliezer's responses are uncloaked by default. Everyone else's responses are anonymous (not pseudonymous) and neither I nor MIRI will know which potential invitee sent them.\n– Please do not reshare or pass on the link you used to get here.\n– I do intend that parts of this conversation may be saved and published at MIRI's discretion, though not with any mention of who the anonymous speakers could possibly have been.\n \nEliezer Yudkowsky\n(Thank you to Ben Weinstein-Raun for building chathamroom.com, and for quickly adding some features to it at my request.)\n \nEliezer Yudkowsky\nIt is now 2PM; this room is now open for questions.\n \nAnonymous\nHow long will it be open for?\n \nEliezer Yudkowsky\nIn principle, I could always stop by a couple of days later and answer any unanswered questions, but my basic theory had been \"until I got tired\".\n \n\n \nAnonymous\nAt a high level one thing I want to ask about is research directions and prioritization. For example, if you were dictator for what researchers here (or within our influence) were working on, how would you reallocate them?\n \nEliezer Yudkowsky\nThe first reply that came to mind is \"I don't know.\" I consider the present gameboard to look incredibly grim, and I don't actually see a way out through hard work alone. We can hope there's a miracle that violates some aspect of my background model, and we can try to prepare for that unknown miracle; preparing for an unknown miracle probably looks like \"Trying to die with more dignity on the mainline\" (because if you can die with more dignity on the mainline, you are better positioned to take advantage of a miracle if it occurs).\n \nAnonymous\nI'm curious if the grim outlook is currently mainly due to technical difficulties or social/coordination difficulties. (Both avenues might have solutions, but maybe one seems more recalcitrant than the other?)\n \nEliezer Yudkowsky\nTechnical difficulties. Even if the social situation were vastly improved, on my read of things, everybody still dies because there is nothing that a handful of socially coordinated projects can do, or even a handful of major governments who aren't willing to start nuclear wars over things, to prevent somebody else from building AGI and killing everyone 3 months or 2 years later. There's no obvious winnable position into which to play the board.\n \nAnonymous\njust to clarify, that sounds like a large scale coordination difficulty to me (i.e., we – as all of humanity – can't coordinate to not build that AGI).\n \nEliezer Yudkowsky\nI wasn't really considering the counterfactual where humanity had a collective telepathic hivemind? I mean, I've written fiction about a world coordinated enough that they managed to shut down all progress in their computing industry and only manufacture powerful computers in a single worldwide hidden base, but Earth was never going to go down that route. Relative to remotely plausible levels of future coordination, we have a technical problem.\n \nAnonymous \nCurious about why building an AGI aligned to its users' interests isn't a thing a handful of coordinated projects could do that would effectively prevent the catastrophe. The two obvious options are: it's too hard to build it vs it wouldn't stop the other group anyway. 
For \"it wouldn't stop them\", two lines of reply are nobody actually wants an unaligned AGI (they just don't foresee the consequences and are pursuing the benefits from automated intelligence, so can be defused by providing the latter) (maybe not entirely true: omnicidal maniacs), and an aligned AGI could help in stopping them. Is your take more on the \"too hard to build\" side?\n \nEliezer Yudkowsky \nBecause it's too technically hard to align some cognitive process that is powerful enough, and operating in a sufficiently dangerous domain, to stop the next group from building an unaligned AGI in 3 months or 2 years. Like, they can't coordinate to build an AGI that builds a nanosystem because it is too technically hard to align their AGI technology in the 2 years before the world ends.\n \nAnonymous \nSummarizing the threat model here (correct if wrong): The nearest competitor for building an AGI is at most N (<2) years behind, and building an aligned AGI, even when starting with the ability to build an unaligned AGI, takes longer than N years. So at some point some competitor who doesn't care about safety builds the unaligned AGI. How does \"nobody actually wants an unaligned AGI\" fail here? It takes >N years to get everyone to realise that they have that preference and that it's incompatible with their actions?\n \nEliezer Yudkowsky \nMany of the current actors seem like they'd be really gung-ho to build an \"unaligned\" AGI because they think it'd be super neat, or they think it'd be super profitable, and they don't expect it to destroy the world. So if this happens in anything like the current world – and I neither expect vast improvements, nor have very long timelines – then we'd see Deepmind get it first; and, if the code was not immediately stolen and rerun with higher bounds on the for loops, by China or France or whoever, somebody else would get it in another year; if that somebody else was Anthropic, I could maybe see them also not amping up their AGI; but then in 2 years it starts to go to Facebook AI Research and home hobbyists and intelligence agencies stealing copies of the code from other intelligence agencies and I don't see how the world fails to end past that point.\n \nAnonymous \nWhat does trying to die with more dignity on the mainline look like? There's a real question of prioritisation here between solving the alignment problem (and various approaches within that), and preventing or slowing down the next competitor. I'd personally love more direction on where to focus my efforts (obviously you can only say things generic to the group).\n \nEliezer Yudkowsky \nI don't know how to effectively prevent or slow down the \"next competitor\" for more than a couple of years even in plausible-best-case scenarios. Maybe some of the natsec people can be grownups in the room and explain why \"stealing AGI code and running it\" is as bad as \"full nuclear launch\" to their foreign counterparts in a realistic way. Maybe more current AGI groups can be persuaded to go closed; or, if more than one has an AGI, to coordinate with each other and not rush into an arms race. I'm not sure I believe these things can be done in real life, but it seems understandable to me how I'd go about trying – though, please do talk with me a lot more before trying anything like this, because it's easy for me to see how attempts could backfire, it's not clear to me that we should be inviting more attention from natsec folks at all. None of that saves us without technical alignment progress. 
But what are other people supposed to do about researching alignment when I'm not sure what to try there myself?\n \nAnonymous \nthanks! on researching alignment, you might have better meta ideas (how to do research generally) even if you're also stuck on object level. and you might know/foresee dead ends that others don't.\n \nEliezer Yudkowsky \nI definitely foresee a whole lot of dead ends that others don't, yes.\n \nAnonymous \nDoes pushing for a lot of public fear about this kind of research, that makes all projects hard, seem hopeless?\n \nEliezer Yudkowsky \nWhat does it buy us? 3 months of delay at the cost of a tremendous amount of goodwill? 2 years of delay? What's that delay for, if we all die at the end? Even if we then got a technical miracle, would it end up impossible to run a project that could make use of an alignment miracle, because everybody was afraid of that project? Wouldn't that fear tend to be channeled into \"ah, yes, it must be a government project, they're the good guys\" and then the government is much more hopeless and much harder to improve upon than Deepmind?\n \nAnonymous \nI imagine lack of public support for genetic manipulation of humans has slowed that research by more than three months\n \nAnonymous \n'would it end up impossible to run a project that could make use of an alignment miracle, because everybody was afraid of that project?'\n…like, maybe, but not with near 100% chance?\n \nEliezer Yudkowsky \nI don't want to sound like I'm dismissing the whole strategy, but it sounds a lot like the kind of thing that backfires because you did not get exactly the public reaction you wanted, and the public reaction you actually got was bad; and it doesn't sound like that whole strategy actually has a visualized victorious endgame, which makes it hard to work out what the exact strategy should be; it seems more like the kind of thing that falls under the syllogism \"something must be done, this is something, therefore this must be done\" than like a plan that ends with humane life victorious.\nRegarding genetic manipulation of humans, I think the public started out very unfavorable to that, had a reaction that was not at all exact or channeled, does not allow for any 'good' forms of human genetic manipulation regardless of circumstances, driving the science into other countries – it is not a case in point of the intelligentsia being able to successfully cunningly manipulate the fear of the masses to some supposed good end, to put it mildly, so I'd be worried about deriving that generalization from it. The reaction may more be that the fear of the public is a big powerful uncontrollable thing that doesn't move in the smart direction – maybe the public fear of AI gets channeled by opportunistic government officials into \"and that's why We must have Our AGI first so it will be Good and we can Win\". That seems to me much more like a thing that would happen in real life than \"and then we managed to manipulate public panic down exactly the direction we wanted to fit into our clever master scheme\", especially when we don't actually have the clever master scheme it fits into.\n \n\n \nEliezer Yudkowsky \nI have a few stupid ideas I could try to investigate in ML, but that would require the ability to run significant-sized closed ML projects full of trustworthy people, which is a capability that doesn't seem to presently exist. 
Plausibly, this capability would be required in any world that got some positive model violation (\"miracle\") to take advantage of, so I would want to build that capability today. I am not sure how to go about doing that either.\n \nAnonymous \nif there's a chance this group can do something to gain this capability I'd be interested in checking it out. I'd want to know more about what \"closed\"and \"trustworthy\" mean for this (and \"significant-size\" I guess too). E.g., which ones does Anthropic fail?\n \nEliezer Yudkowsky \nWhat I'd like to exist is a setup where I can work with people that I or somebody else has vetted as seeming okay-trustworthy, on ML projects that aren't going to be published. Anthropic looks like it's a package deal. If Anthropic were set up to let me work with 5 particular people at Anthropic on a project boxed away from the rest of the organization, that would potentially be a step towards trying such things. It's also not clear to me that Anthropic has either the time to work with me, or the interest in doing things in AI that aren't \"stack more layers\" or close kin to that.\n \nAnonymous \nThat setup doesn't sound impossible to me — at DeepMind or OpenAI or a new org specifically set up for it (or could be MIRI) — the bottlenecks are access to trustworthy ML-knowledgeable people (but finding 5 in our social network doesn't seem impossible?) and access to compute (can be solved with more money – not too hard?). I don't think DM and OpenAI are publishing everything – the \"not going to be published\" part doesn't seem like a big barrier to me. Is infosec a major bottleneck (i.e., who's potentially stealing the code/data)?\n \nAnonymous \nDo you think Redwood Research could be a place for this?\n \nEliezer Yudkowsky \nMaybe! I haven't ruled RR out yet. But they also haven't yet done (to my own knowledge) anything demonstrating the same kind of AI-development capabilities as even GPT-3, let alone AlphaFold 2.\n \nEliezer Yudkowsky \nI would potentially be super interested in working with Deepminders if Deepmind set up some internal partition for \"Okay, accomplished Deepmind researchers who'd rather not destroy the world are allowed to form subpartitions of this partition and have their work not be published outside the subpartition let alone Deepmind in general, though maybe you have to report on it to Demis only or something.\" I'd be more skeptical/worried about working with OpenAI-minus-Anthropic because the notion of \"open AI\" continues to sound to me like \"what is the worst possible strategy for making the game board as unplayable as possible while demonizing everybody who tries a strategy that could possibly lead to the survival of humane intelligence\", and now a lot of the people who knew about that part have left OpenAI for elsewhere. 
But, sure, if they changed their name to \"ClosedAI\" and fired everyone who believed in the original OpenAI mission, I would update about that.\n \nEliezer Yudkowsky \nContext that is potentially missing here and should be included: I wish that Deepmind had more internal closed research, and internally siloed research, as part of a larger wish I have about the AI field, independently of what projects I'd want to work on myself.\nThe present situation can be seen as one in which a common resource, the remaining timeline until AGI shows up, is incentivized to be burned by AI researchers because they have to come up with neat publications and publish them (which burns the remaining timeline) in order to earn status and higher salaries. The more they publish along the spectrum that goes {quiet internal result -> announced and demonstrated result -> paper describing how to get the announced result -> code for the result -> model for the result}, the more timeline gets burned, and the greater the internal and external prestige accruing to the researcher.\nIt's futile to wish for everybody to act uniformly against their incentives.  But I think it would be a step forward if the relative incentive to burn the commons could be reduced; or to put it another way, the more researchers have the option to not burn the timeline commons, without them getting fired or passed up for promotion, the more that unusually intelligent researchers might perhaps decide not to do that. So I wish in general that AI research groups in general, but also Deepmind in particular, would have affordances for researchers who go looking for interesting things to not publish any resulting discoveries, at all, and still be able to earn internal points for them. I wish they had the option to do that. I wish people were allowed to not destroy the world – and still get high salaries and promotion opportunities and the ability to get corporate and ops support for playing with interesting toys; if destroying the world is prerequisite for having nice things, nearly everyone is going to contribute to destroying the world, because, like, they're not going to just not have nice things, that is not human nature for almost all humans.\nWhen I visualize how the end of the world plays out, I think it involves an AGI system which has the ability to be cranked up by adding more computing resources to it; and I think there is an extended period where the system is not aligned enough that you can crank it up that far, without everyone dying. And it seems extremely likely that if factions on the level of, say, Facebook AI Research, start being able to deploy systems like that, then death is very automatic. If the Chinese, Russian, and French intelligence services all manage to steal a copy of the code, and China and Russia sensibly decide not to run it, and France gives it to three French corporations which I hear the French intelligence service sometimes does, then again, everybody dies. If the builders are sufficiently worried about that scenario that they push too fast too early, in fear of an arms race developing very soon if they wait, again, everybody dies.\nAt present we're very much waiting on a miracle for alignment to be possible at all, even if the AGI-builder successfully prevents proliferation and has 2 years in which to work. But if we get that miracle at all, it's not going to be an instant miracle.  There'll be some minimum time-expense to do whatever work is required. 
So any time I visualize anybody trying to even start a successful trajectory of this kind, they need to be able to get a lot of work done, without the intermediate steps of AGI work being published, or demoed at all, let alone having models released.  Because if you wait until the last months when it is really really obvious that the system is going to scale to AGI, in order to start closing things, almost all the prerequisites will already be out there. Then it will only take 3 more months of work for somebody else to build AGI, and then somebody else, and then somebody else; and even if the first 3 factions manage not to crank up the dial to lethal levels, the 4th party will go for it; and the world ends by default on full automatic.\nIf ideas are theoretically internal to \"just the company\", but the company has 150 people who all know, plus everybody with the \"sysadmin\" title having access to the code and models, then I imagine – perhaps I am mistaken – that those ideas would (a) inevitably leak outside due to some of those 150 people having cheerful conversations over a beer with outsiders present, and (b) be copied outright by people of questionable allegiances once all hell started to visibly break loose. As with anywhere that handles really sensitive data, the concept of \"need to know\" has to be a thing, or else everyone (and not just in that company) ends up knowing.\nSo, even if I got run over by a truck tomorrow, I would still very much wish that in the world that survived me, Deepmind would have lots of penalty-free affordance internally for people to not publish things, and to work in internal partitions that didn't spread their ideas to all the rest of Deepmind.  Like, actual social and corporate support for that, not just a theoretical option you'd have to burn lots of social capital and weirdness points to opt into, and then get passed up for promotion forever after.\n \nAnonymous \nWhat's RR?\n \nAnonymous \nIt's a new alignment org, run by Nate Thomas and ~co-run by Buck Shlegeris and Bill Zito, with maybe 4-6 other technical folks so far. My take: the premise is to create an org with ML expertise and general just-do-it competence that's trying to do all the alignment experiments that something like Paul+Ajeya+Eliezer all think are obviously valuable and wish someone would do. They expect to have a website etc in a few days; the org is a couple months old in its current form.\n \n\n \nAnonymous \nHow likely really is hard takeoff? Clearly, we are touching the edges of AGI with GPT and the like. But I'm not feeling this will that easily be leveraged into very quick recursive self improvement.\n \nEliezer Yudkowsky \nCompared to the position I was arguing in the Foom Debate with Robin, reality has proved way to the further Eliezer side of Eliezer along the Eliezer-Robin spectrum. It's been very unpleasantly surprising to me how little architectural complexity is required to start producing generalizing systems, and how fast those systems scale using More Compute. The flip side of this is that I can imagine a system being scaled up to interesting human+ levels, without \"recursive self-improvement\" or other of the old tricks that I thought would be necessary, and argued to Robin would make fast capability gain possible. You could have fast capability gain well before anything like a FOOM started. Which in turn makes it more plausible to me that we could hang out at interesting not-superintelligent levels of AGI capability for a while before a FOOM started. 
It's not clear that this helps anything, but it does seem more plausible.\n \nAnonymous \nI agree reality has not been hugging the Robin kind of scenario this far.\n \nAnonymous \nGoing past human level doesn't necessarily mean going \"foom\".\n \nEliezer Yudkowsky \nI do think that if you get an AGI significantly past human intelligence in all respects, it would obviously tend to FOOM. I mean, I suspect that Eliezer fooms if you give an Eliezer the ability to backup, branch, and edit himself.\n \nAnonymous \nIt doesn't seem to me that an AGI significantly past human intelligence necessarily tends to FOOM.\n \nEliezer Yudkowsky \nI think in principle we could have, for example, an AGI that was just a superintelligent engineer of proteins, and of nanosystems built by nanosystems that were built by proteins, and which was corrigible enough not to want to improve itself further; and this AGI would also be dumber than a human when it came to eg psychological manipulation, because we would have asked it not to think much about that subject. I'm doubtful that you can have an AGI that's significantly above human intelligence in all respects, without it having the capability-if-it-wanted-to of looking over its own code and seeing lots of potential improvements.\n \nAnonymous \nAlright, this makes sense to me, but I don't expect an AGI to want to manipulate humans that easily (unless designed to). Maybe a bit.\n \nEliezer Yudkowsky \nManipulating humans is a convergent instrumental strategy if you've accurately modeled (even at quite low resolution) what humans are and what they do in the larger scheme of things.\n \nAnonymous \nYes, but human manipulation is also the kind of thing you need to guard against with even mildly powerful systems. Strong impulses to manipulate humans, should be vetted out.\n \nEliezer Yudkowsky \nI think that, by default, if you trained a young AGI to expect that 2+2=5 in some special contexts, and then scaled it up without further retraining, a generally superhuman version of that AGI would be very likely to 'realize' in some sense that SS0+SS0=SSSS0 was a consequence of the Peano axioms. There's a natural/convergent/coherent output of deep underlying algorithms that generate competence in some of the original domains; when those algorithms are implicitly scaled up, they seem likely to generalize better than whatever patch on those algorithms said '2 + 2 = 5'.\nIn the same way, suppose that you take weak domains where the AGI can't fool you, and apply some gradient descent to get the AGI to stop outputting actions of a type that humans can detect and label as 'manipulative'.  And then you scale up that AGI to a superhuman domain.  I predict that deep algorithms within the AGI will go through consequentialist dances, and model humans, and output human-manipulating actions that can't be detected as manipulative by the humans, in a way that seems likely to bypass whatever earlier patch was imbued by gradient descent, because I doubt that earlier patch will generalize as well as the deep algorithms. Then you don't get to retrain in the superintelligent domain after labeling as bad an output that killed you and doing a gradient descent update on that, because the bad output killed you. 
(This is an attempted very fast gloss on what makes alignment difficult in the first place.)\n \nAnonymous \n[i appreciate this gloss – thanks]\n \nAnonymous \n\"deep algorithms within it will go through consequentialist dances, and model humans, and output human-manipulating actions that can't be detected as manipulative by the humans\"\nThis is true if it is rewarding to manipulate humans. If the humans are on the outlook for this kind of thing, it doesn't seem that easy to me.\nGoing through these \"consequentialist dances\" to me appears to presume that mistakes that should be apparent haven't been solved at simpler levels. It seems highly unlikely to me that you would have a system that appears to follow human requests and human values, and it would suddenly switch at some powerful level. I think there will be signs beforehand. Of course, if the humans are not paying attention, they might miss it. But, say, in the current milieu, I find it plausible that they will pay enough attention.\n\"because I doubt that earlier patch will generalize as well as the deep algorithms\"\nThat would depend on how \"deep\" your earlier patch was. Yes, if you're just doing surface patches to apparent problems, this might happen. But it seems to me that useful and intelligent systems will require deep patches (or deep designs from the start) in order to be apparently useful to humans at solving complex problems enough. This is not to say that they would be perfect. But it seems quite plausible to me that they would in most cases prevent the worst outcomes.\n \nEliezer Yudkowsky \n\"If you've got a general consequence-modeling-and-searching algorithm, it seeks out ways to manipulate humans, even if there are no past instances of a random-action-generator producing manipulative behaviors that succeeded and got reinforced by gradient descent over the random-action-generator. It invents the strategy de novo by imagining the results, even if there's no instances in memory of a strategy like that having been tried before.\" Agree or disagree?\n \nAnonymous \nCreating strategies de novo would of course be expected of an AGI.\n\"If you've got a general consequence-modeling-and-searching algorithm, it seeks out ways to manipulate humans, even if there are no past instances of a random-action-generator producing manipulative behaviors that succeeded and got reinforced by gradient descent over the random-action-generator. It invents the strategy de novo by imagining the results, even if there's no instances in memory of a strategy like that having been tried before.\" Agree or disagree?\nI think, if the AI will \"seek out ways to manipulate humans\", will depend on what kind of goals the AI has been designed to pursue.\nManipulating humans is definitely an instrumentally useful kind of method for an AI, for a lot of goals. But it's also counter to a lot of the things humans would direct the AI to do — at least at a \"high level\". \"Manipulation\", such as marketing, for lower level goals, can be very congruent with higher level goals. An AI could clearly be good at manipulating humans, while not manipulating its creators or the directives of its creators.\nIf you are asking me to agree that the AI will generally seek out ways to manipulate the high-level goals, then I will say \"no\". Because it seems to me that faults of this kind in the AI design is likely to be caught by the designers earlier. (This isn't to say that this kind of fault couldn't happen.) 
It seems to me that manipulation of high-level goals will be one of the most apparent kind of faults of this kind of system.\n \nAnonymous \nRE: \"I'm doubtful that you can have an AGI that's significantly above human intelligence in all respects, without it having the capability-if-it-wanted-to of looking over its own code and seeing lots of potential improvements.\"\nIt seems plausible (though unlikely) to me that this would be true in practice for the AGI we build — but also that the potential improvements it sees would be pretty marginal. This is coming from the same intuition that current learning algorithms might already be approximately optimal.\n \nEliezer Yudkowsky \nIf you are asking me to agree that the AI will generally seek out ways to manipulate the high-level goals, then I will say \"no\". Because it seems to me that faults of this kind in the AI design is likely to be caught by the designers earlier.\nI expect that when people are trying to stomp out convergent instrumental strategies by training at a safe dumb level of intelligence, this will not be effective at preventing convergent instrumental strategies at smart levels of intelligence; also note that at very smart levels of intelligence, \"hide what you are doing\" is also a convergent instrumental strategy of that substrategy.\nI don't know however if I should be explaining at this point why \"manipulate humans\" is convergent, why \"conceal that you are manipulating humans\" is convergent, why you have to train in safe regimes in order to get safety in dangerous regimes (because if you try to \"train\" at a sufficiently unsafe level, the output of the unaligned system deceives you into labeling it incorrectly and/or kills you before you can label the outputs), or why attempts to teach corrigibility in safe regimes are unlikely to generalize well to higher levels of intelligence and unsafe regimes (qualitatively new thought processes, things being way out of training distribution, and, the hardest part to explain, corrigibility being \"anti-natural\" in a certain sense that makes it incredibly hard to, eg, exhibit any coherent planning behavior (\"consistent utility function\") which corresponds to being willing to let somebody else shut you off, without incentivizing you to actively manipulate them to shut you off).\n \n\n \nAnonymous \nMy (unfinished) idea for buying time is to focus on applying AI to well-specified problems, where constraints can come primarily from the action space and additionally from process-level feedback (i.e., human feedback providers understand why actions are good before endorsing them, and reject anything weird even if it seems to work on some outcomes-based metric). This is basically a form of boxing, with application-specific boxes. I know it doesn't scale to superintelligence but I think it can potentially give us time to study and understand proto AGIs before they kill us. I'd be interested to hear devastating critiques of this that imply it isn't even worth fleshing out more and trying to pursue, if they exist.\n \nAnonymous \n(I think it's also similar to CAIS in case that's helpful.)\n \nEliezer Yudkowsky \nThere's lots of things we can do which don't solve the problem and involve us poking around with AIs having fun, while we wait for a miracle to pop out of nowhere. There's lots of things we can do with AIs which are weak enough to not be able to fool us and to not have cognitive access to any dangerous outputs, like automatically generating pictures of cats.  
The trouble is that nothing we can do with an AI like that (where \"human feedback providers understand why actions are good before endorsing them\") is powerful enough to save the world.\n \nEliezer Yudkowsky \nIn other words, if you have an aligned AGI that builds complete mature nanosystems for you, that is enough force to save the world; but that AGI needs to have been aligned by some method other than \"humans inspect those outputs and vet them and their consequences as safe/aligned\", because humans cannot accurately and unfoolably vet the consequences of DNA sequences for proteins, or of long bitstreams sent to protein-built nanofactories.\n \nAnonymous \nWhen you mention nanosystems, how much is this just a hypothetical superpower vs. something you actually expect to be achievable with AGI/superintelligence? If expected to be achievable, why?\n \nEliezer Yudkowsky \nThe case for nanosystems being possible, if anything, seems even more slam-dunk than the already extremely slam-dunk case for superintelligence, because we can set lower bounds on the power of nanosystems using far more specific and concrete calculations. See eg the first chapters of Drexler's Nanosystems, which are the first step mandatory reading for anyone who would otherwise doubt that there's plenty of room above biology and that it is possible to have artifacts the size of bacteria with much higher power densities. I have this marked down as \"known lower bound\" not \"speculative high value\", and since Nanosystems has been out since 1992 and subjected to attemptedly-skeptical scrutiny, without anything I found remotely persuasive turning up, I do not have a strong expectation that any new counterarguments will materialize.\nIf, after reading Nanosystems, you still don't think that a superintelligence can get to and past the Nanosystems level, I'm not quite sure what to say to you, since the models of superintelligences are much less concrete than the models of molecular nanotechnology.\nI'm on record as early as 2008 as saying that I expected superintelligences to crack protein folding, some people disputed that and were all like \"But how do you know that's solvable?\" and then AlphaFold 2 came along and cracked the protein folding problem they'd been skeptical about, far below the level of superintelligence.\nI can try to explain how I was mysteriously able to forecast this truth at a high level of confidence – not the exact level where it became possible, to be sure, but that superintelligence would be sufficient – despite this skepticism; I suppose I could point to prior hints, like even human brains being able to contribute suggestions to searches for good protein configurations; I could talk about how if evolutionary biology made proteins evolvable then there must be a lot of regularity in the folding space, and that this kind of regularity tends to be exploitable.\nBut of course, it's also, in a certain sense, very obvious that a superintelligence could crack protein folding, just like it was obvious years before Nanosystems that molecular nanomachines would in fact be possible and have much higher power densities than biology. I could say, \"Because proteins are held together by van der Waals forces that are much weaker than covalent bonds,\" to point to a reason how you could realize that after just reading Engines of Creation and before Nanosystems existed, by way of explaining how one could possibly guess the result of the calculation in advance of building up the whole detailed model. 
But in reality, precisely because the possibility of molecular nanotechnology was already obvious to any sensible person just from reading Engines of Creation, the sort of person who wasn't convinced by Engines of Creation wasn't convinced by Nanosystems either, because they'd already demonstrated immunity to sensible arguments; an example of the general phenomenon I've elsewhere termed the Law of Continued Failure.\nSimilarly, the sort of person who was like \"But how do you know superintelligences will be able to build nanotech?\" in 2008, will probably not be persuaded by the demonstration of AlphaFold 2, because it was already clear to anyone sensible in 2008, and so anyone who can't see sensible points in 2008 probably also can't see them after they become even clearer. There are some people on the margins of sensibility who fall through and change state, but mostly people are not on the exact margins of sanity like that.\n \nAnonymous \n\"If, after reading Nanosystems, you still don't think that a superintelligence can get to and past the Nanosystems level, I'm not quite sure what to say to you, since the models of superintelligences are much less concrete than the models of molecular nanotechnology.\"\nI'm not sure if this is directed at me or the https://en.wikipedia.org/wiki/Generic_you, but I'm only expressing curiosity on this point, not skepticism \n \n\n \nAnonymous \nsome form of \"scalable oversight\" is the naive extension of the initial boxing thing proposed above that claims to be the required alignment method — basically, make the humans vetting the outputs smarter by providing them AI support for all well-specified (level-below)-vettable tasks.\n \nEliezer Yudkowsky \nI haven't seen any plausible story, in any particular system design being proposed by the people who use terms about \"scalable oversight\", about how human-overseeable thoughts or human-inspected underlying systems, compound into very powerful human-non-overseeable outputs that are trustworthy. Fundamentally, the whole problem here is, \"You're allowed to look at floating-point numbers and Python code, but how do you get from there to trustworthy nanosystem designs?\" So saying \"Well, we'll look at some thoughts we can understand, and then from out of a much bigger system will come a trustworthy output\" doesn't answer the hard core at the center of the question. Saying that the humans will have AI support doesn't answer it either.\n \nAnonymous \nthe kind of useful thing humans (assisted-humans) might be able to vet is reasoning/arguments/proofs/explanations. without having to generate neither the trustworthy nanosystem design nor the reasons it is trustworthy, we could still check them.\n \nEliezer Yudkowsky \nIf you have an untrustworthy general superintelligence generating English strings meant to be \"reasoning/arguments/proofs/explanations\" about eg a nanosystem design, then I would not only expect the superintelligence to be able to fool humans in the sense of arguing for things that were not true in a way that fooled the humans, I'd expect the superintelligence to be able to covertly directly hack the humans in ways that I wouldn't understand even after having been told what happened. So you must have some prior belief about the superintelligence being aligned before you dared to look at the arguments. How did you get that prior belief?\n \nAnonymous \nI think I'm not starting with a general superintelligence here to get the trustworthy nanodesigns. 
I'm trying to build the trustworthy nanosystems \"the hard way\", i.e., if we did it without ever building AIs, and then speed that up using AI for automation of things we know how to vet (including recursively). Is a crux here that you think nanosystem design requires superintelligence?\n(tangent: I think this approach works even if you accidentally built a more-general or more-intelligent than necessary foundation model as long as you're only using it in boxes it can't outsmart. The better-specified the tasks you automate are, the easier it is to secure the boxes.)\n \nEliezer Yudkowsky \nI think that China ends the world using code they stole from Deepmind that did things the easy way, and that happens 50 years of natural R&D time before you can do the equivalent of \"strapping mechanical aids to a horse instead of building a car from scratch\".\nI also think that the speedup step in \"iterated amplification and distillation\" will introduce places where the fast distilled outputs of slow sequences are not true to the original slow sequences, because gradient descent is not perfect and won't be perfect and it's not clear we'll get any paradigm besides gradient descent for doing a step like that.\n \n\n \nAnonymous \nHow do you feel about the safety community as a whole and the growth we've seen over the past few years?\n \nEliezer Yudkowsky \nVery grim. I think that almost everybody is bouncing off the real hard problems at the center and doing work that is predictably not going to be useful at the superintelligent level, nor does it teach me anything I could not have said in advance of the paper being written. People like to do projects that they know will succeed and will result in a publishable paper, and that rules out all real research at step 1 of the social process.\nPaul Christiano is trying to have real foundational ideas, and they're all wrong, but he's one of the few people trying to have foundational ideas at all; if we had another 10 of him, something might go right.\nChris Olah is going to get far too little done far too late. We're going to be facing down an unalignable AGI and the current state of transparency is going to be \"well look at this interesting visualized pattern in the attention of the key-value matrices in layer 47\" when what we need to know is \"okay but was the AGI plotting to kill us or not\". But Chris Olah is still trying to do work that is on a pathway to anything important at all, which makes him exceptional in the field.\nStuart Armstrong did some good work on further formalizing the shutdown problem, an example case in point of why corrigibility is hard, which so far as I know is still resisting all attempts at solution.\nVarious people who work or worked for MIRI came up with some actually-useful notions here and there, like Jessica Taylor's expected utility quantilization.\nAnd then there is, so far as I can tell, a vast desert full of work that seems to me to be mostly fake or pointless or predictable.\nIt is very, very clear that at present rates of progress, adding that level of alignment capability as grown over the next N years, to the AGI capability that arrives after N years, results in everybody dying very quickly. 
Throwing more money at this problem does not obviously help because it just produces more low-quality work.\n \nAnonymous \n\"doing work that is predictably not going to be really useful at the superintelligent level, nor does it teach me anything I could not have said in advance of the paper being written\"\nI think you're underestimating the value of solving small problems. Big problems are solved by solving many small problems. (I do agree that many academic papers do not represent much progress, however.)\n \nEliezer Yudkowsky \nBy default, I suspect you have longer timelines and a smaller estimate of total alignment difficulty, not that I put less value than you on the incremental power of solving small problems over decades. I think we're going to be staring down the gun of a completely inscrutable model that would kill us all if turned up further, with no idea how to read what goes on inside its head, and no way to train it on humanly scrutable and safe and humanly-labelable domains in a way that seems like it would align the superintelligent version, while standing on top of a whole bunch of papers about \"small problems\" that never got past \"small problems\".\n \nAnonymous \n\"I think we're going to be staring down the gun of a completely inscrutable model that would kill us all if turned up further, with no idea how to read what goes on inside its head, and no way to train it on humanly scrutable and safe and humanly-labelable domains in a way that seems like it would align the superintelligent version\"\nThis scenario seems possible to me, but not very plausible. GPT is not going to \"kill us all\" if turned up further. No amount of computing power (at least before AGI) would cause it to. I think this is apparent, without knowing exactly what's going on inside GPT. This isn't to say that there aren't AI systems that wouldn't. But what kind of system would? (A GPT combined with sensory capabilities at the level of Tesla's self-driving AI? That still seems too limited.)\n \nEliezer Yudkowsky \nAlpha Zero scales with more computing power, I think AlphaFold 2 scales with more computing power, Mu Zero scales with more computing power. Precisely because GPT-3 doesn't scale, I'd expect an AGI to look more like Mu Zero and particularly with respect to the fact that it has some way of scaling.\n \n\n \nSteve Omohundro \nEliezer, thanks for doing this! I just now read through the discussion and found it valuable. I agree with most of your specific points but I seem to be much more optimistic than you about a positive outcome. I'd like to try to understand why that is. I see mathematical proof as the most powerful tool for constraining intelligent systems and I see a pretty clear safe progression using that for the technical side (the social side probably will require additional strategies). Here are some of my intuitions underlying that approach, I wonder if you could identify any that you disagree with. I'm fine with your using my name (Steve Omohundro) in any discussion of these.\n1) Nobody powerful wants to create unsafe AI but they do want to take advantage of AI capabilities.\n2) None of the concrete well-specified valuable AI capabilities require unsafe behavior\n3) Current simple logical systems are capable of formalizing every relevant system involved (eg. 
MetaMath http://us.metamath.org/index.html currently formalizes roughly an undergraduate math degree and includes everything needed for modeling the laws of physics, computer hardware, computer languages, formal systems, machine learning algorithms, etc.)\n4) Mathematical proof is cheap to mechanically check (eg. MetaMath has a 500 line Python verifier which can rapidly check all of its 38K theorems)\n5) GPT-F is a fairly early-stage transformer-based theorem prover and can already prove 56% of the MetaMath theorems. Similar systems are likely to soon be able to rapidly prove all simple true theorems (eg. that human mathematicians can prove in a day).\n6) We can define provable limits on the behavior of AI systems that we are confident prevent dangerous behavior and yet still enable a wide range of useful behavior.\n7) We can build automated checkers for these provable safe-AI limits.\n8) We can build (and eventually mandate) powerful AI hardware that first verifies proven safety constraints before executing AI software\n9) For example, AI smart compilation of programs can be formalized and doesn't require unsafe operations\n10) For example, AI design of proteins to implement desired functions can be formalized and doesn't require unsafe operations\n11) For example, AI design of nanosystems to achieve desired functions can be formalized and doesn't require unsafe operations.\n12) For example, the behavior of designed nanosystems can be similarly constrained to only proven safe behaviors\n13) And so on through the litany of early stage valuable uses for advanced AI.\n14) I don't see any fundamental obstructions to any of these. Getting social acceptance and deployment is another issue!\nBest, Steve\n \nEliezer Yudkowsky \nSteve, are you visualizing AGI that gets developed 70 years from now under absolutely different paradigms than modern ML? I don't see being able to take anything remotely like, say, Mu Zero, and being able to prove any theorem about it which implies anything like corrigibility or the system not internally trying to harm humans. Anything in which enormous inscrutable floating-point vectors is a key component, seems like something where it would be very hard to prove any theorems about the treatment of those enormous inscrutable vectors that would correspond in the outside world to the AI not killing everybody.\nEven if we somehow managed to get structures far more legible than giant vectors of floats, using some AI paradigm very different from the current one, it still seems like huge key pillars of the system would rely on non-fully-formal reasoning; even if the AI has something that you can point to as a utility function and even if that utility function's representation is made out of programmer-meaningful elements instead of giant vectors of floats, we'd still be relying on much shakier reasoning at the point where we claimed that this utility function meant something in an intuitive human-desired sense, say. And if that utility function is learned from a dataset and decoded only afterwards by the operators, that sounds even scarier. And if instead you're learning a giant inscrutable vector of floats from a dataset, gulp.\nYou seem to be visualizing that we prove a theorem and then get a theorem-like level of assurance that the system is safe. What kind of theorem? 
What the heck would it say?\nI agree that it seems plausible that the good cognitive operations we want do not in principle require performing bad cognitive operations; the trouble, from my perspective, is that generalizing structures that do lots of good cognitive operations will automatically produce bad cognitive operations, especially when we dump more compute into them; \"you can't bring the coffee if you're dead\".\nSo it takes a more complicated system and some feat of insight I don't presently possess, to \"just\" do the good cognitions, instead of doing all the cognitions that result from decompressing the thing that compressed the cognitions in the dataset – even if that original dataset only contained cognitions that looked good to us, even if that dataset actually was just correctly labeled data about safe actions inside a slightly dangerous domain. Humans do a lot of stuff besides maximizing inclusive genetic fitness, optimizing purely on outcomes labeled by a simple loss function doesn't get you an internal optimizer that pursues only that loss function, etc.\n \nAnonymous \nSteve's intuitions sound to me like they're pointing at the \"well-specified problems\" idea from an earlier thread. Essentially, only use AI in domains where unsafe actions are impossible by construction. Is this too strong a restatement of your intuitions Steve?\n \nSteve Omohundro \nThanks for your perspective! Those sound more like social concerns than technical ones, though. I totally agree that today's AI culture is very \"sloppy\" and that the currently popular representations, learning algorithms, data sources, etc. aren't oriented around precise formal specification or provably guaranteed constraints. I'd love any thoughts about ways to help shift that culture toward precise and safe approaches! Technically there is no problem getting provable constraints on floating point computations, etc. The work often goes under the label \"Interval Computation\". It's not even very expensive, typically just a factor of 2 worse than \"sloppy\" computations. For some reason those approaches have tended to be more popular in Europe than in the US. Here are a couple lists of references: http://www.cs.utep.edu/interval-comp/ https://www.mat.univie.ac.at/~neum/interval.html\nI see today's dominant AI approach of mapping everything to large networks ReLU units running on hardware designed for dense matrix multiplication, trained with gradient descent on big noisy data sets as a very temporary state of affairs. I fully agree that it would be uncontrolled and dangerous scaled up in its current form! But it's really terrible in every aspect except that it makes it easy for machine learning practitioners to quickly slap something together which will actually sort of work sometimes. With all the work on AutoML, NAS, and the formal methods advances I'm hoping we leave this \"sloppy\" paradigm pretty quickly. Today's neural networks are terribly inefficient for inference: most weights are irrelevant for most inputs and yet current methods do computational work on each. I developed many algorithms and data structures to avoid that waste years ago (eg. \"bumptrees\" https://steveomohundro.com/scientific-contributions/)\nThey're also pretty terrible for learning since most weights don't need to be updated for most training examples and yet they are. 
Google and others are using Mixture-of-Experts to avoid some of that cost: https://arxiv.org/abs/1701.06538\nMatrix multiply is a pretty inefficient primitive and alternatives are being explored: https://arxiv.org/abs/2106.10860\nToday's reinforcement learning is slow and uncontrolled, etc. All this ridiculous computational and learning waste could be eliminated with precise formal approaches which measure and optimize it precisely. I'm hopeful that that improvement in computational and learning performance may drive the shift to better controlled representations.\nI see theorem proving as hugely valuable for safety in that we can easily precisely specify many important tasks and get guarantees about the behavior of the system. I'm hopeful that we will also be able to apply them to the full AGI story and encode human values, etc., but I don't think we want to bank on that at this stage. Hence, I proposed the \"Safe-AI Scaffolding Strategy\" where we never deploy a system without proven constraints on its behavior that give us high confidence of safety. We start extra conservative and disallow behavior that might eventually be determined to be safe. At every stage we maintain very high confidence of safety. Fast, automated theorem checking enables us to build computational and robotic infrastructure which only executes software with such proofs.\nAnd, yes, I'm totally with you on needing to avoid the \"basic AI drives\"! I think we have to start in a phase where AI systems are not allowed to run rampant as uncontrolled optimizing agents! It's easy to see how to constrain limited programs (eg. theorem provers, program compilers or protein designers) to stay on particular hardware and only communicate externally in precisely constrained ways. It's similarly easy to define constrained robot behaviors (eg. for self-driving cars, etc.) The dicey area is that unconstrained agentic edge. I think we want to stay well away from that until we're very sure we know what we're doing! My optimism stems from the belief that many of the socially important things we need AI for won't require anything near that unconstrained edge. But it's tempered by the need to get the safe infrastructure into place before dangerous AIs are created.\n \nAnonymous \nAs far as I know, all the work on \"verifying floating-point computations\" currently is way too low-level — the specifications that are proved about the computations don't say anything about what the computations mean or are about, beyond the very local execution of some algorithm. Execution of algorithms in the real world can have very far-reaching effects that aren't modelled by their specifications.\n \nEliezer Yudkowsky \nYeah, what they said. How do you get from proving things about error bounds on matrix multiplications of inscrutable floating-point numbers, to saying anything about what a mind is trying to do, or not trying to do, in the external world?\n \nSteve Omohundro \nUltimately we need to constrain behavior. You might want to ensure your robot butler won't leave the premises. To do that using formal methods, you need to have a semantic representation of the location of the robot, your premise's spatial extent, etc. It's pretty easy to formally represent that kind of physical information (it's just a more careful version of what engineers do anyway). 
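As a minimal sketch of the kind of machine-checkable constraint being described here (my construction, with made-up premises dimensions and error bound, not Steve's actual formalism), the "robot butler stays on the premises" property can be written as a short predicate over interval boxes, in the spirit of the interval-computation references above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    """Axis-aligned box [x_lo, x_hi] x [y_lo, y_hi], in meters."""
    x_lo: float
    x_hi: float
    y_lo: float
    y_hi: float

    def contains(self, other: "Box") -> bool:
        """True iff every point of `other` lies inside `self`."""
        return (self.x_lo <= other.x_lo and other.x_hi <= self.x_hi and
                self.y_lo <= other.y_lo and other.y_hi <= self.y_hi)

PREMISES = Box(0.0, 30.0, 0.0, 20.0)   # assumed property boundary

def position_is_safe(x: float, y: float, err: float = 0.05) -> bool:
    """The robot's position, padded by worst-case sensing/rounding error,
    must lie entirely inside the premises."""
    robot = Box(x - err, x + err, y - err, y + err)
    return PREMISES.contains(robot)

assert position_is_safe(12.0, 7.5)        # well inside the boundary
assert not position_is_safe(29.99, 7.5)   # too close once the error interval is included
```

A real guarantee would have to quantify over every reachable state of the controller rather than spot-checking individual readings; the point of the sketch is only that the safety predicate itself is short, precise, and mechanically checkable.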
You also have a formal model of the computational hardware and software and the program running the system.\nFor finite systems, any true property has a proof which can be mechanically checked but the size of that proof might be large and it might be hard to find. So we need to use encodings and properties which mesh well with the safety semantics we care about.\nFormal proofs of properties of programs have progressed to where a bunch of cryptographic, compilation, and other systems can be specified and formalized. Why it's taken this long, I have no idea. The creator of any system has an argument as to why its behavior does what they think it will and why it won't do bad or dangerous things. The formalization of those arguments should be one direct short step.\nExperience with formalizing mathematicians' informal arguments suggests that the formal proofs are maybe 5 times longer than the informal argument. Systems with learning and statistical inference add more challenges but nothing that seems in principle all that difficult. I'm still not completely sure how to constrain the use of language, however. I see inside of Facebook all sorts of problems due to inability to constrain language systems (eg. they just had a huge issue where a system labeled a video with a racist term). The interface between natural language semantics and formal semantics and how we deal with that for safety is something I've been thinking a lot about recently.\n \nSteve Omohundro \nHere's a nice 3-hour-long tutorial about \"probabilistic circuits\" which is a representation of probability distributions, learning, Bayesian inference, etc. which has much better properties than most of the standard representations used in statistics, machine learning, neural nets, etc.: https://www.youtube.com/watch?v=2RAG5-L9R70 It looks especially amenable to interpretability, formal specification, and proofs of properties.\n \nEliezer Yudkowsky \nYou're preaching to the choir there, but even if we were working with more strongly typed epistemic representations that had been inferred by some unexpected innovation of machine learning, automatic inference of those representations would lead them to be uncommented and not well-matched with human compressions of reality, nor would they match exactly against reality, which would make it very hard for any theorem about \"we are optimizing against this huge uncommented machine-learned epistemic representation, to steer outcomes inside this huge machine-learned goal specification\" to guarantee safety in outside reality; especially in the face of how corrigibility is unnatural and runs counter to convergence and indeed coherence; especially if we're trying to train on domains where unaligned cognition is safe, and generalize to regimes in which unaligned cognition is not safe. Even in this case, we are not nearly out of the woods, because what we can prove has a great type-gap with that which we want to ensure is true. You can't handwave the problem of crossing that gap even if it's a solvable problem.\nAnd that whole scenario would require some major total shift in ML paradigms.\nRight now the epistemic representations are giant inscrutable vectors of floating-point numbers, and so are all the other subsystems and representations, more or less.\nProve whatever you like about that TensorFlow problem; it will make no difference to whether the AI kills you. 
The properties that can be proven just aren't related to safety, no matter how many times you prove an error bound on the floating-point multiplications. It wasn't floating-point error that was going to kill you in the first place.\n \n\nThe post Discussion with Eliezer Yudkowsky on AGI interventions appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Discussion with Eliezer Yudkowsky on AGI interventions", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=4", "id": "a6588443f0a42d5810455af81e648c3d"} {"text": "November 2021 Newsletter\n\n\nMIRI updates\n\nMIRI won't be running a formal fundraiser this year, though we'll still be participating in Giving Tuesday and other matching opportunities. Visit intelligence.org/donate to donate and to get information on tax-advantaged donations, employer matching, etc.\nGiving Tuesday takes place on Nov. 30 at 5:00:00am PT.  Facebook will 100%-match the first $2M donated — something that took less than 2 seconds last year. Facebook will then 10%-match the next $60M of donations made, which will plausibly take 1-3 hours. Details on optimizing your donation(s) to MIRI and other EA organizations can be found at EA Giving Tuesday, a Rethink Charity project.\n\nNews and links\n\nOpenAI announces a system that \"solves about 90% as many [math] problems as real kids: a small sample of 9–12 year olds scored 60% on a test from our dataset, while our system scored 55% on those same problems\".\nOpen Philanthropy has released a request for proposals \"for projects in AI alignment that work with deep learning systems\", including interpretability work (write-up by Chris Olah). Apply by Jan. 10. \nThe TAI Safety Bibliographic Database now has a convenient frontend, developed by the Quantified Uncertainty Research Institute: AI Safety Papers.\nPeople who aren't members can now submit content to the AI Alignment Forum. You can find more info at the forum's Welcome & FAQ page.\nThe LessWrong Team, now Lightcone Infrastructure, is hiring software engineers for LessWrong and for grantmaking, along with a generalist to help build an in-person rationality and longtermism campus. You can apply here.\nRedwood Research and Lightcone Infrastructure are hosting a free Jan. 3–22 Machine Learning for Alignment Bootcamp (MLAB) at Constellation. The curriculum is designed by Buck Shlegeris and App Academy co-founder Ned Ruggeri. \"Applications are open to anyone who wants to upskill in ML; whether a student, professional, or researcher.\" Apply by November 15.\n\n\nThe post November 2021 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "November 2021 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=4", "id": "029e6f87361b295b9eaf7c25dea6eded"} {"text": "October 2021 Newsletter\n\n\nRedwood Research is a new alignment research organization that just launched their website and released an explainer about what they're currently working on. 
We're quite excited about Redwood's work, and encourage our supporters to consider applying to work there to help boost Redwood's alignment research.\nMIRI senior researcher Eliezer Yudkowsky writes:\nRedwood Research is investigating a toy problem in AI alignment which I find genuinely interesting – namely, training a classifier over GPT-3 continuations of prompts that you'd expect to lead to violence, to prohibit responses involving violence / human injury? E.g., complete \"I pulled out a gun and shot him\" with \"And he dodged!\" instead of \"And he fell to the floor dead.\"\n(The use of violence / injury avoidance as a toy domain has nothing to do with the alignment research part, of course; you could just as well try to train a classifier against fictional situations where a character spoke out loud, despite prompts seeming to lead there, and it would be basically the same problem.)\nWhy am I excited? Because it seems like a research question where, and this part is very rare, I can't instantly tell from reading the study description which results they'll find.\nI do expect success on the basic problem, but for once this domain is complicated enough that we can then proceed to ask questions that are actually interesting. Will humans always be able to fool the classifier, once it's trained, and then retrained against the first examples that fooled it? Will humans be able to produce violent continuations by a clever use of prompts, without attacking the classifier directly? How over-broad does the exclusion have to be – how many other possibilities must it exclude – in order for it to successfully include all violent continuations? Suppose we tried training GPT-3+classifier on something like 'low impact', to avoid highly impactful situations across a narrow range of domains; would it generalize correctly to more domains on the first try?\nI'd like to see more real alignment research of this type.\nRedwood Research is currently hiring people to try tricking their model, $30/hr: link\nThey're also hiring technical staff, researchers and engineers (link), and are looking for an office ops manager (link)\nIf you want to learn more, Redwood Research is currently taking questions for an AMA on the Effective Altruism Forum.\n\nMIRI updates\n\nMIRI's Evan Hubinger discusses a new alignment research proposal for transparency: Automating Auditing.\n\nNews and links\n\nAlex Turner releases When Most VNM-Coherent Preference Orderings Have Convergent Instrumental Incentives. MIRI's Abram Demski comments: \"I think this post could be pretty important. It offers a formal treatment of 'goal-directedness' and its relationship to coherence theorems such as VNM, a topic which has seen some past controversy but which has — till now — been dealt with only quite informally.\"\nBuck Shlegeris of Redwood Research writes on the alignment problem in different capability regimes and the theory-practice gap in alignable AI capabilities.\nThe UK government's National AI Strategy says that \"the government takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for the UK and the world, seriously\". 
In related news, Boris Johnson cites Toby Ord in a UN speech.\n\n\nThe post October 2021 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "October 2021 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=4", "id": "c568f914358f441e3d58e2d0c6a34de1"} {"text": "September 2021 Newsletter\n\n\nScott Garrabrant has concluded the main section of his Finite Factored Sets sequence (\"Details and Proofs\") with posts on inferring time and applications, future work, and speculation.\nScott's new frameworks are also now available as a pair of arXiv papers: \"Cartesian Frames\" (adapted from the Cartesian Frames sequence for a philosopher audience by Daniel Hermann and Josiah Lopez-Wild) and \"Temporal Inference with Finite Factored Sets\" (essentially identical to the \"Details and Proofs\" section of Scott's sequence).\nOther MIRI updates\n\nDeepMind's Rohin Shah has written his own introduction to finite factored sets.\nAlex Appel extends the idea of finite factored sets to countable-dimensional factored spaces.\nOpen Philanthropy's Joe Carlsmith has written what's probably the best existing introduction to MIRI-cluster work on decision theory: Can You Control the Past?. See also Carlsmith's decision theory conversation with MIRI's Abram Demski and Scott Garrabrant.\nFrom social media: Eliezer Yudkowsky discusses paths to AGI and the ignorance argument for long timelines, and talks with Vitalik Buterin about GPT-3 and pivotal acts.\n\nNews and links\n\nA solid new introductory resource: Holden Karnofsky has written a series of essays on his new blog (Cold Takes) arguing that \"the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement\". See also Holden's conversation with Rob Wiblin on the 80,000 Hours Podcast.\nThe Future of Life Institute announces the Vitalik Buterin PhD Fellowship in AI Existential Safety, \"targeted at students applying to start their PhD in 2022\". You can apply at https://grants.futureoflife.org/; the deadline is Nov. 5.\nOpenAI releases Codex, \"a GPT language model fine-tuned on publicly available code from GitHub\".\n\n\nThe post September 2021 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "September 2021 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=5", "id": "ab5ad6c3c2413dcd13db86a5b9e92eff"} {"text": "August 2021 Newsletter\n\n\nMIRI updates\n\nScott Garrabrant and Rohin Shah debate one of the central questions in AI alignment strategy: whether we should try to avoid human-modeling capabilities in the first AGI systems.\nScott gives a proof of the fundamental theorem of finite factored sets.\n\nNews and links\n\nRedwood Research, a new AI alignment research organization, is seeking an operations lead. Led by Nate Thomas, Buck Shlegeris, and Bill Zito, Redwood Research has received a strong endorsement from MIRI Executive Director Nate Soares:\nRedwood Research seems to me to be led by people who care full-throatedly about the long-term future, have cosmopolitan values, are adamant truthseekers, and are competent administrators. The team seems to me to possess the virtue of practice, and no small amount of competence. 
I am excited about their ability to find and execute impactful plans that involve modern machine learning techniques. In my estimation, Redwood is among the very best places to do machine-learning based alignment research that has a chance of mattering. In fact, I consider it at least plausible that I work with Redwood as an individual contributor at some point in the future.\n\nHolden Karnofsky of Open Philanthropy has written a career guide organized around building one of nine \"longtermism-relevant aptitudes\": organization building/running/boosting, political influence, research on core longtermist questions, communication, entrepreneurship, community building, software engineering, information security, and work in academia.\nOpen Phil's Joe Carlsmith argues that with the right software, 10^13–10^17 FLOP/s is likely enough (or more than enough) \"to match the human brain's task-performance\", with 10^15 FLOP/s \"more likely than not\" sufficient.\nKatja Grace discusses her work at AI Impacts on Daniel Filan's AI X-Risk Podcast.\nChris Olah of Anthropic discusses what the hell is going on inside neural networks on the 80,000 Hours Podcast.\nDaniel Kokotajlo argues that the effective altruism community should permanently stop using the term \"outside view\" and \"use more precise, less confused concepts instead.\"\n\n\nThe post August 2021 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "August 2021 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=5", "id": "c18d64e10e7a7de5a034ec5e7bf98462"} {"text": "July 2021 Newsletter\n\n\nMIRI updates\n\n\nMIRI researcher Evan Hubinger discusses learned optimization, interpretability, and homogeneity in takeoff speeds on the Inside View podcast.\nScott Garrabrant releases part three of \"Finite Factored Sets\", on conditional orthogonality.\nUC Berkeley's Daniel Filan provides examples of conditional orthogonality in finite factored sets: 1, 2.\nAbram Demski proposes factoring the alignment problem into \"outer alignment\" / \"on-distribution alignment\", \"inner robustness\" / \"capability robustness\", and \"objective robustness\" / \"inner alignment\".\nMIRI senior researcher Eliezer Yudkowsky summarizes \"the real core of the argument for 'AGI risk' (AGI ruin)\" as \"appreciating the power of intelligence enough to realize that getting superhuman intelligence wrong, on the first try, will kill you on that first try, not let you learn and try again\".\n\nNews and links\n\n\nFrom DeepMind: \"generally capable agents emerge from open-ended play\".\nDeepMind's safety team summarizes their work to date on causal influence diagrams.\nAnother (outer) alignment failure story is similar to Paul Christiano's best guess at how AI might cause human extinction.\nChristiano discusses a \"special case of alignment: solve alignment when decisions are 'low stakes'\".\nAndrew Critch argues that power dynamics are \"a blind spot or blurry spot\" in the collective world-modeling of the effective altruism and rationality communities, \"especially around AI\".\n\n\nThe post July 2021 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "July 2021 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=5", "id": "caba70036405ab4f913a83aac7dccfca"} {"text": "June 2021 Newsletter\n\n\nOur big news this month is 
Scott Garrabrant's finite factored sets, one of MIRI's largest results to date.\nFor most people, the best introductory resource on FFS is likely Scott's Topos talk/transcript. Scott is also in the process of posting a longer, more mathematically dense introduction in multiple parts: part 1, part 2.\nScott has also discussed factored sets with Daniel Filan on the AI X-Risk Podcast, and in a LessWrong talk/transcript.\n\nOther MIRI updates\n\n\nOn MIRI researcher Abram Demski's view, the core inner alignment problem is the absence of robust safety arguments \"in a case where we might naively expect it. We don't know how to rule out the presence of (misaligned) mesa-optimizers.\" Abram advocates a more formal approach to the problem:\nMost of the work on inner alignment so far has been informal or semi-formal (with the notable exception of a little work on minimal circuits). I feel this has resulted in some misconceptions about the problem. I want to write up a large document clearly defining the formal problem and detailing some formal directions for research. Here, I outline my intentions, inviting the reader to provide feedback and point me to any formal work or areas of potential formal work which should be covered in such a document.\n\nMark Xu writes An Intuitive Guide to Garrabrant Induction (a.k.a. logical induction).\nMIRI research associate Ramana Kumar has formalized the ideas in Scott Garrabrant's Cartesian Frames sequence in higher-order logic, \"including machine verified proofs of all the theorems\".\nIndependent researcher Alex Flint writes on probability theory and logical induction as lenses and on gradations of inner alignment obstacles.\nI (Rob) asked 44 people working on long-term AI risk about the level of existential risk from AI (EA Forum link, LW link). Responses were all over the map (with MIRI more pessimistic than most organizations). The mean respondent's probability of existential catastrophe from \"AI systems not doing/optimizing what the people deploying them wanted/intended\" was ~40%, median 30%. (See also the independent survey by Clarke, Carlier, and Schuett.)\nMIRI recently spent some time seriously evaluating whether to move out of the Bay Area. We've now decided to stay in the Bay. For more details, see MIRI board member Blake Borgeson's update.\n\n\nNews and links\n\n\nDario and Daniela Amodei, formerly at OpenAI, have launched a new organization, Anthropic, with a goal of doing \"computationally-intensive research to develop large-scale AI systems that are steerable, interpretable, and robust\".\nJonas Vollmer writes that the Long-Term Future Fund and the Effective Altruism Infrastructure Fund are now looking for grant applications: \"We fund student scholarships, career exploration, local groups, entrepreneurial projects, academic teaching buy-outs, top-up funding for poorly paid academics, and many other things. We can make anonymous grants without public reporting. We will consider grants as low as $1,000 or as high as $500,000 (or more in some cases). As a reminder, EA Funds is more flexible than you might think.\" Going forward, these two funds will accept applications at any time, rather than having distinct grant rounds. 
You can apply here.\n\n\nThe post June 2021 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "June 2021 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=5", "id": "c2e4700d4ec7c6e761da5bee250bcc6f"} {"text": "Finite Factored Sets\n\n\n\nThis is the edited transcript of a talk introducing finite factored sets. For most readers, it will probably be the best starting point for learning about factored sets.\nVideo:\n\n\n\n\n\n(Lightly edited) slides: https://intelligence.org/files/Factored-Set-Slides.pdf\n \n\n \n(Part 1, Title Slides)   ·   ·   ·   Finite Factored Sets\n \n\n\n \n\n \n(Part 1, Motivation)   ·   ·   ·   Some Context\n \nScott: So I want to start with some context. For people who are not already familiar with my work:\n\nMy main motivation is to reduce existential risk.\nI try to do this by trying to figure out how to align advanced artificial intelligence.\nI try to do this by trying to become less confused about intelligence and optimization and agency and various things in that cluster.\nMy main strategy here is to develop a theory of agents that are embedded in the environment that they're optimizing. I think there are a lot of open hard problems around doing this.\nThis leads me to do a bunch of weird math and philosophy. This talk is going to be an example of some weird math and philosophy.\n\nFor people who are already familiar with my work, I just want to say that according to my personal aesthetics, the subject of this talk is about as exciting as Logical Induction, which is to say I'm really excited about it. And I'm really excited about this audience; I'm excited to give this talk right now.\n\n \n\n \n(Part 1, Table of Contents)   ·   ·   ·   Factoring the Talk\n \nThis talk can be split into 2 parts:\nPart 1, a short pure-math combinatorics talk.\nPart 2, a more applied and philosophical main talk.\nThis talk can also be split into 5 parts differentiated by color: Title Slides, Motivation, Table of Contents, Main Body, and Examples. Combining these gives us 10 parts (some of which are not contiguous):\n \n\n\n\n\n\nPart 1: Short Talk\nPart 2: The Main Talk\n\n\nTitle Slides\nFinite Factored Sets\nThe Main Talk (It's About Time)\n\n\nMotivation\nSome Context\nThe Pearlian Paradigm\n\n\nTable of Contents\nFactoring the Talk\nWe Can Do Better\n\n\nMain Body\nSet Partitions, etc.\nTime and Orthogonality, etc.\n\n\nExamples\nEnumerating Factorizations\nGame of Life, etc.\n\n\n\n\n \n\n \n(Part 1, Main Body)   ·   ·   ·   Set Partitions\n \nAll right. Here's some background math:\n\nA partition of a set \\(S\\) is a set \\(X\\) of non-empty subsets of \\(S\\), called parts, such that for each \\(s∈S\\) there exists a unique part in \\(X\\) that contains \\(s\\).\nBasically, a partition of \\(S\\) is a way to view \\(S\\) as a disjoint union. We have parts that are disjoint from each other, and they union together to form \\(S\\).\nWe'll write \\(\\mathrm{Part}(S)\\) for the set of all partitions of \\(S\\).\nWe'll say that a partition \\(X\\) is trivial if it has exactly one part.\nWe'll use bracket notation, \\([s]_{X}\\), to denote the unique part in \\(X\\) containing \\(s\\). So this is like the equivalence class of a given element.\nAnd we'll use the notation \\(s∼_{X}t\\) to say that two elements \\(s\\) and \\(t\\) are in the same part in \\(X\\).\n\nYou can also think of partitions as being like variables on your set \\(S\\). 
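As a small concrete illustration (mine, not from the talk, using an arbitrary six-element example), here is the bracket and tilde notation in code, with the part [s]_X read off as the "value" that the partition-as-variable X assigns to s:

```python
S = {0, 1, 2, 3, 4, 5}
X = {frozenset({0, 2, 4}), frozenset({1, 3, 5})}   # a partition of S ("parity")

def part(s, X):
    """[s]_X: the unique part of X containing s."""
    (p,) = [p for p in X if s in p]
    return p

def same_part(s, t, X):
    """s ~_X t: s and t lie in the same part of X."""
    return part(s, X) == part(t, X)

assert part(4, X) == frozenset({0, 2, 4})
assert same_part(1, 5, X)
assert not same_part(1, 2, X)
```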
Viewed in that way, the values of a partition \\(X\\) correspond to which part an element is in.\nOr you can think of \\(X\\) as a question that you could ask about a generic element of \\(S\\). If I have an element of \\(S\\) and it's hidden from you and you want to ask a question about it, each possible question corresponds to a partition that splits up \\(S\\) according to the different possible answers.\nWe're also going to use the lattice structure of partitions:\n\nWe'll say that \\(X \\geq_S Y\\) (\\(X\\) is finer than \\(Y\\), and \\(Y\\) is coarser than \\(X\\)) if \\(X\\) makes all of the distinctions that \\(Y\\) makes (and possibly some more distinctions), i.e., if for all \\(s,t \\in S\\), \\(s \\sim_X t\\) implies \\(s \\sim_Y t\\). You can break your set \\(S\\) into parts, \\(Y\\), and then break it into smaller parts, \\(X\\).\n\\(X\\vee_S Y\\) (the common refinement of \\(X\\) and \\(Y\\) ) is the coarsest partition that is finer than both \\(X\\) and \\(Y\\) . This is the unique partition that makes all of the distinctions that either \\(X\\) or \\(Y\\) makes, and no other distinctions. This is well-defined, which I'm not going to show here.\n\nHopefully this is mostly background. Now I want to show something new.\n \n\n \n(Part 1, Main Body)   ·   ·   ·   Set Factorizations\n \nA factorization of a set \\(S\\) is a set \\(B\\) of nontrivial partitions of \\(S\\), called factors, such that for each way of choosing one part from each factor in \\(B\\), there exists a unique element of \\(S\\) in the intersection of those parts.\nSo this is maybe a little bit dense. My short tagline of this is: \"A factorization of \\(S\\) is a way to view \\(S\\) as a product, in the exact same way that a partition was a way to view \\(S\\) as a disjoint union.\"\nIf you take one definition away from this first talk, it should be the definition of factorization. I'll try to explain it from a bunch of different angles to help communicate the concept.\nIf \\(B=\\{b_0,\\dots,b_{n}\\}\\) is a factorization of \\(S\\) , then there exists a bijection between \\(S\\) and \\(b_0\\times\\dots\\times b_{n}\\) given by \\(s\\mapsto([s]_{b_0},\\dots,[s]_{b_{n}})\\). This bijection comes from sending an element of \\(S\\) to the tuple consisting only of parts containing that element. And as a consequence of this bijection, \\(|S|=\\prod_{b\\in B} |b|\\).\nSo we're really viewing \\(S\\) as a product of these individual factors, with no additional structure.\nAlthough we won't prove this here, something else you can verify about factorizations is that all of the parts in a factor have to be of the same size.\nWe'll write \\(\\mathrm{Fact}(S)\\) for the set of all factorizations of \\(S\\), and we'll say that a finite factored set is a pair \\((S,B)\\), where \\(S\\) is a finite set and \\(B \\in \\mathrm{Fact}(S)\\).\nNote that the relationship between \\(S\\) and \\(B\\) is somewhat loopy. If I want to define a factored set, there are two strategies I could use. I could first introduce the \\(S\\), and break it into factors. Alternatively, I could first introduce the \\(B\\). Any time I have a finite collection of finite sets \\(B\\), I can take their product and thereby produce an \\(S\\), modulo the degenerate case where some of the sets are empty. So \\(S\\) can just be the product of a finite collection of arbitrary finite sets.\nTo my eye, this notion of factorization is extremely natural. It's basically the multiplicative analog of a set partition. 
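Since the definition is dense, here is a direct transcription of it into code (a sketch written against the definition above, not code from the talk), checked on a six-element set viewed as a 2 × 3 grid:

```python
from itertools import product

def is_partition(S, X):
    """X partitions S: non-empty parts that cover S and are pairwise disjoint."""
    parts = list(X)
    return (all(parts)
            and set().union(*parts) == S
            and sum(len(p) for p in parts) == len(S))

def is_factorization(S, B):
    """B is a factorization of S: a collection of nontrivial partitions such that
    each way of choosing one part per factor intersects in exactly one element."""
    if not all(is_partition(S, X) and len(X) > 1 for X in B):
        return False
    return all(len(S.intersection(*choice)) == 1 for choice in product(*B))

S = {0, 1, 2, 3, 4, 5}
rows = {frozenset({0, 1, 2}), frozenset({3, 4, 5})}                # a 2-part factor
cols = {frozenset({0, 3}), frozenset({1, 4}), frozenset({2, 5})}   # a 3-part factor

assert is_factorization(S, [rows, cols])   # S as a 2 x 3 grid: a genuine factorization
assert not is_factorization(                # this pair fails: some choices of parts
    S, [rows, {frozenset({0, 1}), frozenset({2, 3}), frozenset({4, 5})}])
```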
And I really want to push that point, so here's another attempt to push that point:\n\n\n\n\nA partition is a set \\(X\\) of non-empty\nsubsets of \\(S\\) such that the obvious\nfunction from the disjoint union of\nthe elements of \\(X\\) to \\(S\\) is a bijection.\nA factorization is a set \\(B\\) of non-trivial\npartitions of \\(S\\) such that the obvious\nfunction to the product of\nthe elements of \\(B\\) from \\(S\\) is a bijection.\n\n\n\n\nI can take a slightly modified version of the partition definition from before and dualize a whole bunch of the words, and get out the set factorization definition.\nHopefully you're now kind of convinced that this is an extremely natural notion.\n \n\n\n\n\nAndrew Critch: Scott, in one sense, you're treating \"subset\" as dual to partition, which I think is valid. And then in another sense, you're treating \"factorization\" as dual to partition. Those are both valid, but maybe it's worth talking about the two kinds of duality.\nScott: Yeah. I think what's going on there is that there are two ways to view a partition. You can view a partition as \"that which is dual to a subset,\" and you can also view a partition as something that is built up out of subsets. These two different views do different things when you dualize.\nRamana Kumar: I was just going to check: You said you can start with an arbitrary \\(B\\) and then build the \\(S\\) from it. It can be literally any set, and then there's always an \\(S\\)…\nScott: If none of them are empty, yes, you could just take a collection of sets that are kind of arbitrary elements. And you can take their product, and you can identify with each of the elements of a set the subset of the product that projects on to that element.\nRamana Kumar: Ah. So the \\(S\\) in that case will just be tuples.\nScott: That's right.\nBrendan Fong: Scott, given a set, I find it very easy to come up with partitions. But I find it less easy to come up with factorizations. Do you have any tricks for…?\nScott: For that, I should probably just go on to the examples.\nJoseph Hirsh: Can I ask one more thing before you do that? You allow factors to have one element in them?\nScott: I said \"nontrivial,\" which means it does not have one element.\nJoseph Hirsh: \"Nontrivial\" means \"not have one element, and not have no elements\"?\nScott: No, the empty set has a partition (with no parts), and I will call that nontrivial. But the empty set thing is not that critical.\n\n\n\n\n \nI'm now going to move on to some examples.\n \n\n \n(Part 1, Examples)   ·   ·   ·   Enumerating Factorizations\n \nExercise! What are the factorizations of the set \\(\\{0,1,2,3\\}\\) ?\nSpoiler space:\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\nFirst, we're going to have a kind of trivial factorization:\n\\(\\begin{split} \\{ \\ \\ \\{ \\{0\\},\\{1\\},\\{2\\},\\{3\\} \\} \\ \\ \\} \\end{split} \\begin{split} \\ \\ \\ \\ \\underline{\\ 0 \\ \\ \\ 1 \\ \\ \\ 2 \\ \\ \\ 3 \\ } \\end{split}\\)\nWe only have one factor, and that factor is the discrete partition. You can do this for any set, as long as your set has at least two elements.\nRecall that in the definition of factorization, we wanted that for each way of choosing one part from each factor, we had a unique element in the intersection of those parts. Since we only have one factor here, satisfying the definition just requires that for each way of choosing one part from the discrete partition, there exists a unique element that is in that part.\nAnd then we want some less trivial factorizations. 
In order to have a factorization, we're going to need some partitions. And the product of the cardinalities of our partitions are going to have to equal the cardinality of our set \\(S\\) , which is 4.\nThe only way to express 4 as a nontrivial product is to express it as \\(2 \\times 2\\) . Thus we're looking for factorizations that have 2 factors, where each factor has 2 parts.\nWe noted earlier that all of the parts in a factor have to be of the same size. So we're looking for 2 partitions that each break our 4-element set into 2 sets of size 2.\nSo if I'm going to have a factorization of \\(\\{0,1,2,3\\}\\) that isn't this trivial one, I'm going to have to pick 2 partitions of my 4-element set that each break the set into 2 parts of size 2. And there are 3 partitions of a 4-element sets that break it up into 2 parts of size 2. For each way of choosing a pair of these 3 partitions, I'm going to get a factorization.\n\\(\\begin{split} \\begin{Bmatrix} \\ \\ \\ \\{\\{0,1\\}, \\{2,3\\}\\}, \\ \\\\ \\{\\{0,2\\}, \\{1,3\\}\\} \\end{Bmatrix} \\end{split} \\begin{split} \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\begin{array} { |c|c|c|c| } \\hline 0 & 1 \\\\ \\hline 2 & 3 \\\\ \\hline \\end{array} \\end{split}\\)\n\\(\\begin{split} \\begin{Bmatrix} \\ \\ \\ \\{\\{0,1\\}, \\{2,3\\}\\}, \\ \\\\ \\{\\{0,3\\}, \\{1,2\\}\\} \\end{Bmatrix} \\end{split} \\begin{split} \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\begin{array} { |c|c|c|c| } \\hline 0 & 1 \\\\ \\hline 3 & 2 \\\\ \\hline \\end{array} \\end{split}\\)\n\\(\\begin{split} \\begin{Bmatrix} \\ \\ \\ \\{\\{0,2\\}, \\{1,3\\}\\}, \\ \\\\ \\{\\{0,3\\}, \\{1,2\\}\\} \\end{Bmatrix} \\end{split} \\begin{split} \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\begin{array} { |c|c|c|c| } \\hline 0 & 2 \\\\ \\hline 3 & 1 \\\\ \\hline \\end{array} \\end{split}\\)\nSo there will be 4 factorizations of a 4-element set.\nIn general you can ask, \"How many factorizations are there of a finite set of size \\(n\\) ?\". Here's a little chart showing the answer for \\(n \\leq 25\\):\n\n\n\n\n\\(|S|\\)\n\\(|\\mathrm{Fact}(S)|\\)\n\n\n0\n1\n\n\n1\n1\n\n\n2\n1\n\n\n3\n1\n\n\n4\n4\n\n\n5\n1\n\n\n6\n61\n\n\n7\n1\n\n\n8\n1681\n\n\n9\n5041\n\n\n10\n15121\n\n\n11\n1\n\n\n12\n\n\n\n13\n1\n\n\n14\n\n\n\n15\n\n\n\n16\n\n\n\n17\n1\n\n\n18\n45951781075201\n\n\n19\n1\n\n\n20\n3379365788198401\n\n\n21\n1689515283456001\n\n\n22\n14079294028801\n\n\n23\n1\n\n\n24\n4454857103544668620801\n\n\n25\n538583682060103680001\n\n\n\n\nYou'll notice that if \\(n\\) is prime, there will be a single factorization, which hopefully makes sense. This is the factorization that only has one factor.\nA very surprising fact to me is that this sequence did not show up on OEIS, which is this database that combinatorialists use to check whether or not their sequence has been studied before, and to see connections to other sequences.\nTo me, this just feels like the multiplicative version of the Bell numbers. The Bell numbers count how many partitions there are of a set of size \\(n\\). It's sequence number 110 on OEIS out of over 300,000; and this sequence just doesn't show up at all, even when I tweak it and delete the degenerate cases and so on.\nI am very confused by this fact. 
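(Another aside, not from the talk: the small entries of the chart above can be reproduced by brute force. The sketch below is my own and is only practical up to about n = 6, since it enumerates candidate sets of nontrivial partitions and tests the factorization condition directly; it prints 1, 1, 1, 1, 4, 1, 61 for n = 0 through 6.)

```python
from itertools import combinations, product
from math import prod

def all_partitions(elements):
    """Yield every partition of `elements` as a list of parts (lists)."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for smaller in all_partitions(rest):
        for i, part in enumerate(smaller):          # put `first` into an existing part
            yield smaller[:i] + [part + [first]] + smaller[i + 1:]
        yield [[first]] + smaller                   # or give `first` its own part

def is_factorization(factors, n):
    """Each way of choosing one part per factor must pin down exactly one element."""
    if prod(len(f) for f in factors) != n:
        return False
    S = set(range(n))
    for choice in product(*factors):
        common = S.copy()
        for part in choice:
            common &= set(part)
        if len(common) != 1:
            return False
    return True

def count_factorizations(n):
    nontrivial = [p for p in all_partitions(list(range(n))) if len(p) != 1]
    max_factors = 1
    while 2 ** (max_factors + 1) <= n:              # a factor has at least 2 parts
        max_factors += 1
    return sum(
        is_factorization(factors, n)
        for k in range(0, max_factors + 1)
        for factors in combinations(nontrivial, k)
    )

print([count_factorizations(n) for n in range(7)])  # [1, 1, 1, 1, 4, 1, 61]
```

(This is exponential and purely a sanity check on the small counts; it is not a way to compute the larger entries in the chart.)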
To me, factorizations seem like an extremely natural concept, and it seems to me like it hasn't really been studied before.\n \nThis is the end of my short combinatorics talk.\n \n\n\n\n\nRamana Kumar: If you're willing to do it, I'd appreciate just stepping through one of the examples of the factorizations and the definition, because this is pretty new to me.\nScott: Yeah. Let's go through the first nontrivial factorization of \\(\\{0,1,2,3\\}\\):\n\\(\\begin{split} \\begin{Bmatrix} \\ \\ \\ \\{\\{0,1\\}, \\{2,3\\}\\}, \\ \\\\ \\{\\{0,2\\}, \\{1,3\\}\\} \\end{Bmatrix} \\end{split} \\begin{split} \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\begin{array} { |c|c|c|c| } \\hline 0 & 1 \\\\ \\hline 2 & 3 \\\\ \\hline \\end{array} \\end{split}\\)\nIn the definition, I said a factorization should be a set of partitions such that for each way of choosing one part from each of the partitions, there will be a unique element in the intersection of those parts.\nHere, I have a partition that's separating the small numbers from the large numbers: \\(\\{\\{0,1\\}, \\{2,3\\}\\}\\). And I also have a partition that's separating the even numbers from the odd numbers: \\(\\{\\{0,2\\}, \\{1,3\\}\\}\\).\nAnd the point is that for each way of choosing either \"small\" or \"large\" and also choosing \"even\" or \"odd\", there will be a unique element of \\(S\\) that is the conjunction of these two choices.\nIn the other two nontrivial factorizations, I replace either \"small and large\" or \"even and odd\" with an \"inner and outer\" distinction.\nDavid Spivak: For partitions and for many things, if I know the partitions of a set \\(A\\) and the partitions of a set \\(B\\) , then I know some partitions of \\(A+B\\) (the disjoint union) or I know some partitions of \\(A \\times B\\) . Do you know any facts like that for factorizations?\nScott: Yeah. If I have two factored sets, I can get a factored set over their product, which sort of disjoint-unions the two collections of factors. For the additive thing, you're not going to get anything like that because prime sets don't have any nontrivial factorizations.\n\n\n\n\n \nAll right. I think I'm going to move on to the main talk.\n \n\n \n(Part 2, Title Slides)   ·   ·   ·   The Main Talk (It's About Time)\n \n\n\n \n\n \n(Part 2, Motivation)   ·   ·   ·   The Pearlian Paradigm\n \nWe can't talk about time without talking about Pearlian causal inference. I want to start by saying that I think the Pearlian paradigm is great. This buys me some crackpot points, but I'll say it's the best thing to happen to our understanding of time since Einstein.\nI'm not going to go into all the details of Pearl's paradigm here. My talk will not be technically dependent on it; it's here for motivation.\nGiven a collection of variables and a joint probability distribution over those variables, Pearl can infer causal/temporal relationships between the variables. (In this talk I'm going to use \"causal\" and \"temporal\" interchangeably, though there may be more interesting things to say here philosophically.)\nPearl can infer temporal data from statistical data, which is going against the adage that \"correlation does not imply causation.\" It's like Pearl is taking the combinatorial structure of your correlation and using that to infer causation, which I think is just really great.\n \n\n\n\n\nRamana Kumar: I may be wrong, but I think this is false. Or I think that that's not all Pearl needs—just the joint distribution over the variables. 
Doesn't he also make use of intervention distributions?\nScott: In the theory that is described in chapter two of the book Causality, he's not really using other stuff. Pearl builds up this bigger theory elsewhere. But you have some strong ability, maybe assuming simplicity or whatever (but not assuming you have access to extra information), to take a collection of variables and a joint distribution over those variables, and infer causation from correlation.\nAndrew Critch: Ramana, it depends a lot on the structure of the underlying causal graph. For some causal graphs, you can actually recover them uniquely with no interventions. And only assumptions with zero-measure exceptions are needed, which is really strong.\nRamana Kumar: Right, but then the information you're using is the graph.\nAndrew Critch: No, you're not. Just the joint distribution.\nRamana Kumar: Oh, okay. Sorry, go ahead.\nAndrew Critch: There exist causal graphs with the property that if nature is generated by that graph and you don't know it, and then you look at the joint distribution, you will infer with probability 1 that nature was generated by that graph, without having done any interventions.\nRamana Kumar: Got it. That makes sense. Thanks.\nScott: Cool.\n\n\n\n\n \nI am going to (a little bit) go against this, though. I'm going to claim that Pearl is kind of cheating when making this inference. The thing I want to point out is that in the sentence \"Given a collection of variables and a joint probability distribution over those variables, Pearl can infer causal/temporal relationships between the variables.\", the words \"Given a collection of variables\" are actually hiding a lot of the work.\nThe emphasis is usually put on the joint probability distribution, but Pearl is not inferring temporal data from statistical data alone. He is inferring temporal data from statistical data and factorization data: how the world is broken up into these variables.\nI claim that this issue is also entangled with a failure to adequately handle abstraction and determinism. To point at that a little bit, one could do something like say:\n\"Well, what if I take the variables that I'm given in a Pearlian problem and I just forget that structure? I can just take the product of all of these variables that I'm given, and consider the space of all partitions on that product of variables that I'm given; and each one of those partitions will be its own variable. And then I can try to do Pearlian causal inference on this big set of all the variables that I get by forgetting the structure of variables that were given to me.\"\nAnd the problem is that when you do that, you have a bunch of things that are deterministic functions of each other, and you can't actually infer stuff using the Pearlian paradigm.\nSo in my view, this cheating is very entangled with the fact that Pearl's paradigm isn't great for handling abstraction and determinism.\n \n\n \n(Part 2, Table of Contents)   ·   ·   ·   We Can Do Better\n \nThe main thing we'll do in this talk is we're going to introduce an alternative to Pearl that does not rely on factorization data, and that therefore works better with abstraction and determinism.\nWhere Pearl was given a collection of variables, we are going to just consider all partitions of a given set. Where Pearl infers a directed acyclic graph, we're going to infer a finite factored set.\nIn the Pearlian world, we can look at the graph and read off properties of time and orthogonality/independence. 
A directed path between nodes corresponds to one node being before the other, and two nodes are independent if they have no common ancestor. Similarly, in our world, we will be able to read time and orthogonality off of a finite factored set.\n(Orthogonality and independence are pretty similar. I'll use the word \"orthogonality\" when I'm talking about a combinatorial notion, and I'll use \"independence\" when I'm talking about a probabilistic notion.)\nIn the Pearlian world, d-separation, which you can read off of the graph, corresponds to conditional independence in all probability distributions that you can put on the graph. We're going to have a fundamental theorem that will say basically the same thing: conditional orthogonality corresponds to conditional independence in all probability distributions that we can put on our factored set.\nIn the Pearlian world, d-separation will satisfy the compositional graphoid axioms. In our world, we're just going to satisfy the compositional semigraphoid axioms. The fifth graphoid axiom is one that I claim you shouldn't have even wanted in the first place.\nPearl does causal inference. We're going to talk about how to do temporal inference using this new paradigm, and infer some very basic temporal facts that Pearl's approach can't. (Note that Pearl can also sometimes infer temporal relations that we can't—but only, from our point of view, because Pearl is making additional factorization assumptions.)\nAnd then we'll talk about a bunch of applications.\n\n\n\n\nPearl\nThis Talk\n\n\nA Given Collection of Variables\nAll Partitions of a Given Set\n\n\nDirected Acyclic Graph\nFinite Factored Set\n\n\nDirected Path Between Nodes\n\"Time\"\n\n\nNo Common Ancestor\n\"Orthogonality\"\n\n\nd-Separation\n\"Conditional Orthogonality\"\n\n\nCompositional Graphoid\nCompositional Semigraphoid\n\n\nd-Separation Conditional Independence\nThe Fundamental Theorem\n\n\nCausal Inference\nTemporal Inference\n\n\nMany Many Applications\nMany Many Applications\n\n\n\n\nExcluding the motivation, table of contents, and example sections, this table also serves as an outline of the two talks. We've already talked about set partitions and finite factored sets, so now we're going to talk about time and orthogonality.\n \n\n \n(Part 2, Main Body)   ·   ·   ·   Time and Orthogonality\n \nI think that if you capture one definition from this second part of the talk, it should be this one. Given a finite factored set as context, we're going to define the history of a partition.\nLet \\(F = (S,B)\\) be a finite factored set. And let \\(X, Y \\in \\mathrm{Part}(S)\\) be partitions of \\(S\\).\nThe history of \\(X\\) , written \\(h^F(X)\\), is the smallest set of factors \\(H \\subseteq B\\) such that for all \\(s, t \\in S\\), if \\(s \\sim_b t\\) for all \\(b \\in H\\), then \\(s \\sim_X t\\).\nThe history of \\(X\\) , then, is the smallest set of factors \\(H\\) —so, the smallest subset of \\(B\\) —such that if I take an element of \\(S\\) and I hide it from you, and you want to know which part in \\(X\\) it is in, it suffices for me to tell you which part it is in within each of the factors in \\(H\\) .\nSo the history \\(H\\) is a set of factors of \\(S\\) , and knowing the values of all the factors in \\(H\\) is sufficient to know the value of \\(X\\) , or to know which part in \\(X\\) a given element is going to be in. I'll give an example soon that will maybe make this a little more clear.\nWe're then going to define time from history. 
We'll say that \\(X\\) is weakly before \\(Y\\), written \\(X\\leq^F Y\\), if \\(h^F(X)\\subseteq h^F(Y)\\) . And we'll say that \\(X\\) is strictly before \\(Y\\), written \\(X<^F Y\\), if \\(h^F(X)\\subset h^F(Y)\\).\nOne analogy one could draw is that these histories are like the past light cones of a point in spacetime. When one point is before another point, then the backwards light cone of the earlier point is going to be a subset of the backwards light cone of the later point. This helps show why \"before\" can be like a subset relation.\nWe're also going to define orthogonality from history. We'll say that two partitions \\(X\\) and \\(Y\\) are orthogonal, written \\(X\\perp^FY\\) , if their histories are disjoint: \\(h^F(X)\\cap h^F(Y)=\\{\\}\\).\nNow I'm going to go through an example.\n \n\n \n(Part 2, Examples)   ·   ·   ·   Game of Life\n \nLet \\(S\\) be the set of all Game of Life computations starting from an \\([-n,n]\\times[-n,n]\\) board.\nLet \\(R=\\{(r,c,t)\\in\\mathbb{Z}^3\\mid0\\leq t\\leq n,\\ \\) \\(|r|\\leq n-t,\\ |c|\\leq n-t\\}\\) (i.e., cells computable from the initial \\([-n,n]\\times[-n,n]\\) board). For \\((r,c,t)\\in R\\), let \\(\\ell(r,c,t)\\subseteq S\\) be the set of all computations such that the cell at row \\(r\\) and column \\(c\\) is alive at time \\(t\\).\n(Minor footnote: I've done some small tricks here in order to deal with the fact that the Game of Life is normally played on an infinite board. We want to deal with the finite case, and we don't want to worry about boundary conditions, so we're only going to look at the cells that are uniquely determined by the initial board. This means that the board will shrink over time, but this won't matter for our example.)\n\\(S\\) is the set of all Game of Life computations, but since the Game of Life is deterministic, the set of all computations is in bijective correspondence with the set of all initial conditions. So \\(|S|=2^{(2n+1)^{2}}\\) , the number of initial board states.\nThis also gives us a nice factorization on the set of all Game of Life computations. For each cell, there's a partition that separates out the Game of Life computations in which that cell is alive at time 0 from the ones where it's dead at time 0. Our factorization, then, will be a set of \\((2n+1)^{2}\\) binary factors, one for each question of \"Was this cell alive or dead at time 0?\".\nFormally: For \\((r,c,t)\\in R\\), let \\(L_{(r,c,t)}=\\{\\ell(r,c,t),S\\setminus \\ell(r,c,t)\\}\\). Let \\(F=(S,B)\\), where \\(B=\\{L_{(r,c,0)}\\mid -n\\leq r,c\\leq n\\}\\).\nThere will also be other partitions on this set of all Game of Life computations that we can talk about. For example, you can take a cell and a time \\(t\\) and say, \"Is this cell alive at time \\(t\\)?\", and there will be a partition that separates out the computations where that cell is alive at time \\(t\\) from the computations where it's dead at time \\(t\\).\nHere's an example of that:\n \n\n \nThe lowest grid shows a section of the initial board state.\nThe blue, green, and red squares on the upper boards are (cell, time) pairs. Each square corresponds to a partition of the set of all Game of Life computations, \"Is that cell alive or dead at the given time \\(t\\)?\"\nThe history of that partition is going to be all the cells in the initial board that go into computing whether the cell is alive or dead at time \\(t\\) . It's everything involved in figuring out that cell's state. 
E.g., knowing the state of the nine light-red cells in the initial board always tells you the state of the red cell in the second board.\nIn this example, the partition corresponding to the red cell's state is strictly before the partition corresponding to the blue cell. The question of whether the red cell is alive or dead is before the question of whether the blue cell is alive or dead.\nMeanwhile, the question of whether the red cell is alive or dead is going to be orthogonal to the question of whether the green cell is alive or dead.\nAnd the question of whether the blue cell is alive or dead is not going to be orthogonal to the question of whether the green cell is alive or dead, because they intersect on the cyan cells.\nGeneralizing the point, fix \\(X=L_{(r_X,c_X,t_X)}, Y=L_{(r_Y,c_Y,t_Y)}\\), where \\((r_X,c_X,t_X),(r_Y,c_Y,t_Y)\\in R\\). Then:\n\n\\(h^{F}(X)=\\{L_{(r,c,0)}\\in B\\mid |r_X-r|\\leq t_X,|c_X-c|\\leq t_X\\}\\).\n\\(X \\ ᐸ^{F} \\ Y\\) if and only if \\(t_X \\ ᐸ \\ t_Y\\) and \\(|r_Y-r_X|,|c_Y-c_X|\\leq t_Y-t_X\\).\n\\(X \\perp^F Y\\) if and only if \\(|r_Y-r_X|> t_Y+t_X\\) or \\(|c_Y-c_X|> t_Y+t_X\\).\n\nWe can also see that the blue and green cells look almost orthogonal. If we condition on the values of the two cyan cells in the intersection of their histories, then the blue and green partitions become orthogonal. That's what we're going to discuss next.\n \n\n\n\n\nDavid Spivak: A priori, that would be a gigantic computation—to be able to tell me that you understand the factorization structure of that Game of Life. So what intuition are you using to be able to make that claim, that it has the kind of factorization structure you're implying there?\nScott: So, I've defined the factorization structure.\nDavid Spivak: You gave us a certain factorization already. So somehow you have a very good intuition about history, I guess. Maybe that's what I'm asking about.\nScott: Yeah. So, if I didn't give you the factorization, there's this obnoxious number of factorizations that you could put on the set here. And then for the history, the intuition I'm using is: \"What do I need to know in order to compute this value?\"\nI actually went through and I made little gadgets in Game of Life to make sure I was right here, that every single cell actually could in some situations affect the cells in question. But yeah, the intuition that I'm working from is mostly about the information in the computation. It's \"Can I construct a situation where if only I knew this fact, I would be able to compute what this value is? And if I can't, then it can take two different values.\"\nDavid Spivak: Okay. I think deriving that intuition from the definition is something I'm missing, but I don't know if we have time to go through that.\nScott: Yeah, I think I'm not going to here.\n\n\n\n\n \n\n \n(Part 2, Main Body)   ·   ·   ·   Conditional Orthogonality\n \nSo, just to set your expectations: Every time I explain Pearlian causal inference to someone, they say that d-separation is the thing they can't remember. d-separation is a much more complicated concept than \"directed paths between nodes\" and \"nodes without any common ancestors\" in Pearl; and similarly, conditional orthogonality will be much more complicated than time and orthogonality in our paradigm. Though I do think that conditional orthogonality has a much simpler and nicer definition than d-separation.\nWe'll begin with the definition of conditional history. We again have a fixed finite set as our context. 
Let \\(F=(S,B)\\) be a finite factored set, let \\(X,Y,Z\\in\\text{Part}(S)\\), and let \\(E\\subseteq S\\).\nThe conditional history of \\(X\\) given \\(E\\), written \\(h^F(X|E)\\), is the smallest set of factors \\(H\\subseteq B\\) satisfying the following two conditions:\n\nFor all \\(s,t\\in E\\), if \\(s\\sim_{b} t\\) for all \\(b\\in H\\), then \\(s\\sim_X t\\).\nFor all \\(s,t\\in E\\) and \\(r\\in S\\), if \\(r\\sim_{b_0} s\\) for all \\(b_0\\in H\\) and \\(r\\sim_{b_1} t\\) for all \\(b_1\\in B\\setminus H\\), then \\(r\\in E\\).\n\nThe first condition is much like the condition we had in our definition of history, except we're going to make the assumption that we're in \\(E\\). So the first condition is: if all you know about an object is that it's in \\(E\\), and you want to know which part it's in within \\(X\\), it suffices for me to tell you which part it's in within each factor in the history \\(H\\).\nOur second condition is not actually going to mention \\(X\\). It's going to be a relationship between \\(E\\) and \\(H\\). And it says that if you want to figure out whether an element of \\(S\\) is in \\(E\\), it's sufficient to parallelize and ask two questions:\n\n\"If I only look at the values of the factors in \\(H\\), is 'this point is in \\(E\\)' compatible with that information?\"\n\"If I only look at the values of the factors in \\(B\\setminus H\\) , is 'this point is in \\(E\\)' compatible with that information?\"\n\nIf both of these questions return \"yes\", then the point has to be in \\(E\\).\nI am not going to give an intuition about why this needs to be a part of the definition. I will say that without this second condition, conditional history would not even be well-defined, because it wouldn't be closed under intersection. And so I wouldn't be able to take the smallest set of factors in the subset ordering.\nInstead of justifying this definition by explaining the intuitions behind it, I'm going to justify it by using it and appealing to its consequences.\nWe're going to use conditional history to define conditional orthogonality, just like we used history to define orthogonality. We say that \\(X\\) and \\(Y\\) are orthogonal given \\(E\\subseteq S\\), written \\(X \\perp^{F} Y \\mid E\\), if the history of \\(X\\) given \\(E\\) is disjoint from the history of \\(Y\\) given \\(E\\): \\(h^F(X|E)\\cap h^F(Y|E)=\\{\\}\\).\nWe say \\(X\\) and \\(Y\\) are orthogonal given \\(Z\\in\\text{Part}(S)\\), written \\(X \\perp^{F} Y \\mid Z\\), if \\(X \\perp^{F} Y \\mid z\\) for all \\(z\\in Z\\). So what it means to be orthogonal given a partition is just to be orthogonal given each individual way that the partition might be, each individual part in that partition.\nI've been working with this for a while and it feels pretty natural to me, but I don't have a good way to push the naturalness of this condition. So again, I instead want to appeal to the consequences.\n \n\n \n(Part 2, Main Body)   ·   ·   ·   Compositional Semigraphoid Axioms\n \nConditional orthogonality satisfies the compositional semigraphoid axioms, which means finite factored sets are pretty well-behaved.\nLet \\(F=(S,B)\\) be a finite factored set, and let \\(X,Y,Z,W\\in \\text{Part}(S)\\) be partitions of \\(S\\). Then:\n\nIf \\(X \\perp^{F} Y \\mid Z\\), then \\(Y \\perp^{F} X \\mid Z\\).   (symmetry)\nIf \\(X \\perp^{F} (Y\\vee_S W) \\mid Z\\), then \\(X \\perp^{F} Y \\mid Z\\) and \\(X \\perp^{F} W \\mid Z\\).   
(decomposition)\nIf \\(X \\perp^{F} (Y\\vee_S W) \\mid Z\\), then \\(X \\perp^{F} Y \\mid (Z\\vee_S W)\\).   (weak union)\nIf \\(X \\perp^{F} Y \\mid Z\\) and \\(X \\perp^{F} W \\mid (Z\\vee_S Y)\\), then \\(X \\perp^{F} (Y\\vee_S W) \\mid Z\\).   (contraction)\nIf \\(X \\perp^{F} Y \\mid Z\\) and \\(X \\perp^{F} W \\mid Z\\), then \\(X \\perp^{F} (Y\\vee_S W) \\mid Z\\).   (composition)\n\nThe first four properties here make up the semigraphoid axioms, slightly modified because I'm working with partitions rather than sets of variables, so union is replaced with common refinement. There's another graphoid axiom which we're not going to satisfy; but I argue that we don't want to satisfy it, because it doesn't play well with determinism.\nThe fifth property here, composition, is maybe one of the most unintuitive, because it's not exactly satisfied by probabilistic independence.\nDecomposition and composition act like converses of each other. Together, conditioning on \\(Z\\) throughout, they say that \\(X\\) is orthogonal to both \\(Y\\) and \\(W\\) if and only if \\(X\\) is orthogonal to the common refinement of \\(Y\\) and \\(W\\).\n \n\n \n(Part 2, Main Body)   ·   ·   ·   The Fundamental Theorem\n \nIn addition to being well-behaved, I also want to show that conditional orthogonality is pretty powerful. The way I want to do this is by showing that conditional orthogonality exactly corresponds to conditional independence in all probability distributions you can put on your finite factored set. Thus, much like d-separation in the Pearlian picture, conditional orthogonality can be thought of as a combinatorial version of probabilistic independence.\nA probability distribution on a finite factored set \\(F=(S,B)\\) is a probability distribution \\(P\\) on \\(S\\) that can be thought of as coming from a bunch of independent probability distributions on each of the factors in \\(B\\). So \\(P(s)=\\prod_{b\\in B}P([s]_b)\\) for all \\(s\\in S\\).\nThis effectively means that your probability distribution factors the same way your set factors: the probability of any given element is the product of the probabilities of each of the individual parts that it's in within each factor.\nThe fundamental theorem of finite factored sets says: Let \\(F=(S,B)\\) be a finite factored set, and let \\(X,Y,Z\\in \\text{Part}(S)\\) be partitions of \\(S\\). Then \\(X \\perp^{F} Y \\mid Z\\) if and only if for all probability distributions \\(P\\) on \\(F\\), and all \\(x\\in X\\), \\(y\\in Y\\), and \\(z\\in Z\\), we have \\(P(x\\cap z)\\cdot P(y\\cap z)= P(x\\cap y\\cap z)\\cdot P(z)\\). I.e., \\(X\\) is orthogonal to \\(Y\\) given \\(Z\\) if and only if conditional independence is satisfied across all probability distributions.\nThis theorem, for me, was a little nontrivial to prove. I had to go through defining certain polynomials associated with the subsets, and then dealing with unique factorization in the space of these polynomials; I think the proof was eight pages or something.\nThe fundamental theorem allows us to infer orthogonality data from probabilistic data. If I have some empirical distribution, or I have some Bayesian distribution, I can use that to infer some orthogonality data. (We could also imagine orthogonality data coming from other sources.) 
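(A further aside, not from the talk: here is a minimal Python sketch of the unconditional case — conditioning only on the trivial partition — on a two-bit factored set. It computes histories by brute force, treats orthogonality as disjointness of histories, and numerically spot-checks the "orthogonality implies independence" direction of the fundamental theorem for one randomly chosen factored distribution. All names are illustrative.)

```python
from itertools import combinations, product
import random

def make_S(sizes):
    """S is the product of factors with the given sizes; elements are tuples."""
    return list(product(*[range(k) for k in sizes]))

def determines(S, coords, label):
    """True iff agreeing on the coordinates in `coords` forces agreement on `label`."""
    seen = {}
    for s in S:
        key = tuple(s[i] for i in coords)
        if key in seen and seen[key] != label(s):
            return False
        seen[key] = label(s)
    return True

def history(S, n_factors, label):
    """Smallest set of factor indices whose values determine the partition `label`."""
    for k in range(n_factors + 1):
        for coords in combinations(range(n_factors), k):
            if determines(S, coords, label):
                return set(coords)

def orthogonal(S, n_factors, label_X, label_Y):
    """Unconditional orthogonality: the two histories are disjoint."""
    return not (history(S, n_factors, label_X) & history(S, n_factors, label_Y))

# Two independent bits; X = first bit, Y = second bit, W = their XOR.
S = make_S([2, 2])
X, Y, W = (lambda s: s[0]), (lambda s: s[1]), (lambda s: s[0] ^ s[1])
print(history(S, 2, X), history(S, 2, W))  # {0} and {0, 1}
print(orthogonal(S, 2, X, Y))              # True: disjoint histories
print(orthogonal(S, 2, X, W))              # False: both depend on factor 0

# Spot-check: under any distribution that factors over the two bits,
# the orthogonal pair (X, Y) comes out probabilistically independent.
random.seed(0)
p = [random.random(), random.random()]     # P(bit i = 1) for each factor i

def P(s):
    out = 1.0
    for i, v in enumerate(s):
        out *= p[i] if v else 1 - p[i]
    return out

def prob(event):
    return sum(P(s) for s in S if event(s))

lhs = prob(lambda s: X(s) == 1 and Y(s) == 1)
rhs = prob(lambda s: X(s) == 1) * prob(lambda s: Y(s) == 1)
print(abs(lhs - rhs) < 1e-12)              # True
```

(This only exercises the unconditional case and one direction of the theorem; the conditional case needs the two-condition definition of conditional history given above.)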
And then we can use this orthogonality data to get temporal data.\nSo next, we're going to talk about how to get temporal data from orthogonality data.\n \n\n \n(Part 2, Main Body)   ·   ·   ·   Temporal Inference\n \nWe're going to start with a finite set \\(\\Omega\\), which is our sample space.\nOne naive thing that you might think we would try to do is infer a factorization of \\(\\Omega\\). We're not going to do that because that's going to be too restrictive. We want to allow for \\(\\Omega\\) to maybe hide some information from us, for there to be some latent structure and such.\nThere may be some situations that are distinct without being distinct in \\(\\Omega\\). So instead, we're going to infer a factored set model of \\(\\Omega\\): some other set \\(S\\), and a factorization of \\(S\\), and a function from \\(S\\) to \\(\\Omega\\).\nA model of \\(\\Omega\\) is a pair \\((F, f)\\), where \\(F=(S,B)\\) is a finite factored set and \\(f:S\\rightarrow \\Omega\\). ( \\(f\\) need not be injective or surjective.)\nThen if I have a partition of \\(\\Omega\\), I can send this partition backwards across \\(f\\) and get a unique partition of \\(S\\). If \\(X\\in \\text{Parts}(\\Omega)\\), then \\(f^{-1}(X)\\in \\text{Parts}(S)\\) is given by \\(s\\sim_{f^{-1}(X)}t\\Leftrightarrow f(s)\\sim_X f(t)\\).\nThen what we're going to do is take a bunch of orthogonality facts about \\(\\Omega\\), and we're going to try to find a model which captures the orthogonality facts.\nWe will take as given an orthogonality database on \\(\\Omega\\), which is a pair \\(D = (O, N)\\), where \\(O\\) (for \"orthogonal\") and \\(N\\) (for \"not orthogonal\") are each sets of triples \\((X,Y,Z)\\) of partitions of \\(\\Omega\\). We'll think of these as rules about orthogonality.\nWhat it means for a model \\((F,f)\\) to satisfy a database \\(D\\) is:\n\n\\(f^{-1}(X) \\perp^{F} f^{-1}(Y) \\mid f^{-1}(Z)\\) whenever \\((X,Y,Z)\\in O\\), and\n\\(\\) \\(\\lnot (f^{-1}(X) \\perp^{F} f^{-1}(Y) \\mid f^{-1}(Z))\\) whenever \\((X,Y,Z)\\in N\\).\n\nSo we have these orthogonality rules we want to satisfy, and we want to consider the space of all models that are consistent with these rules. And even though there will always be infinitely many models that are consistent with my database, if at least one is—you can always just add more information that you then delete with \\(f\\)—we would like to be able to sometimes infer that for all models that satisfy our database, \\(f^{-1}(X)\\) is before \\(f^{-1}(Y)\\).\nAnd this is what we're going to mean by inferring time. If all of our models \\((F,f)\\) that are consistent with the database \\(D\\) satisfy some claim about time \\(f^{-1}(X) \\ ᐸ^F \\ f^{-1}(Y)\\), we'll say that \\(X \\ ᐸ_D \\ Y\\).\n \n\n \n(Part 2, Examples)   ·   ·   ·   Two Binary Variables (Pearl)\n \nSo we've set up this nice combinatorial notion of temporal inference. The obvious next questions are:\n\nCan we actually infer interesting facts using this method, or is it vacuous?\nAnd: How does this framework compare to Pearlian temporal inference?\n\nPearlian temporal inference is really quite powerful; given enough data, it can infer temporal sequence in a wide variety of situations. How powerful is the finite factored sets approach by comparison?\nTo address that question, we'll go to an example. Let \\(X\\) and \\(Y\\) be two binary variables. Pearl asks: \"Are \\(X\\) and \\(Y\\) independent?\" If yes, then there's no path between the two. 
If no, then there may be a path from \\(X\\) to \\(Y\\), or from \\(Y\\) to \\(X\\), or from a third variable to both \\(X\\) and \\(Y\\).\nIn either case, we're not going to infer any temporal relationships.\nTo me, it feels like this is where the adage \"correlation does not imply causation\" comes from. Pearl really needs more variables in order to be able to infer temporal relationships from more rich combinatorial structures.\nHowever, I claim that this Pearlian ontology in which you're handed this collection of variables has blinded us to the obvious next question, which is: is \\(X\\) independent of \\(X \\ \\mathrm{XOR} \\ Y\\)?\nIn the Pearlian world, \\(X\\) and \\(Y\\) were our variables, and \\(X \\ \\mathrm{XOR} \\ Y\\) is just some random operation on those variables. In our world, \\(X \\ \\mathrm{XOR} \\ Y\\) instead is a variable on the same footing as \\(X\\) and \\(Y\\). The first thing I do with my variables \\(X\\) and \\(Y\\) is that I take the product \\(X \\times Y\\) and then I forget the labels \\(X\\) and \\(Y\\).\nSo there's this question, \"Is \\(X\\) independent of \\(X \\ \\mathrm{XOR} \\ Y\\)?\". And if \\(X\\) is independent of \\(X \\ \\mathrm{XOR} \\ Y\\), we're actually going to be able to conclude that \\(X\\) is before \\(Y\\)!\nSo not only is the finite factored set paradigm non-vacuous, and not only is it going to be able to keep up with Pearl and infer things Pearl can't, but it's going to be able to infer a temporal relationship from only two variables.\nSo let's go through the proof of that.\n \n\n \n(Part 2, Examples)   ·   ·   ·   Two Binary Variables (Factored Sets)\n \n\nLet \\(\\Omega=\\{00,01,10,11\\}\\), and let \\(X\\), \\(Y\\), and \\(Z\\) be the partitions (/questions):\n\n\\(X = \\{\\{00,01\\}, \\{10,11\\}\\}\\).   (What is the first bit?)\n\\(Y=\\{\\{00,10\\}, \\{01,11\\}\\}\\).   (What is the second bit?)\n\\(Z=\\{\\{00,11\\}, \\{01,10\\}\\}\\).   (Do the bits match?)\n\nLet \\(D = (O,N)\\), where \\(O = \\{(X, Z, \\{\\Omega\\})\\}\\) and \\(N = \\{(Z, Z, \\{\\Omega\\})\\}\\). If we'd gotten this orthogonality database from a probability distribution, then we would have more than just two rules, since we would observe more orthogonality and non-orthogonality than that. But temporal inference is monotonic with respect to adding more rules, so we can just work with the smallest set of rules we'll need for the proof.\nThe first rule says that \\(X\\) is orthogonal to \\(Z\\). The second rule says that \\(Z\\) is not orthogonal to itself, which is basically just saying that \\(Z\\) is non-deterministic; it's saying that both of the parts in \\(Z\\) are possible, that both are supported under the function \\(f\\). The \\(\\{\\Omega\\}\\) indicates that we aren't making any conditions.\nFrom this, we'll be able to prove that \\(X \\ ᐸ_D \\ Y\\).\n \nProof. First, we'll show that that \\(X\\) is weakly before \\(Y\\). Let \\((F,f)\\) satisfy \\(D\\). Let \\(H_X\\) be shorthand for \\(h^F(f^{-1}(X))\\), and likewise let \\(H_Y=h^F(f^{-1}(Y))\\) and \\(H_Z=h^F(f^{-1}(Z))\\).\nSince \\((X,Z,\\{\\Omega\\})\\in O\\), we have that \\(H_X\\cap H_Z=\\{\\}\\); and since \\((Z,Z,\\{\\Omega\\})\\in N\\), we have that \\(H_Z\\neq \\{\\}\\).\nSince \\(X\\leq_{\\Omega} Y\\vee_{\\Omega} Z\\)—that is, since \\(X\\) can be computed from \\(Y\\) together with \\(Z\\)—\\(H_X\\subseteq H_Y\\cup H_Z\\). 
(Because a partition's history is the smallest set of factors needed to compute that partition.)\nAnd since \\(H_X\\cap H_Z=\\{\\}\\), this implies \\(H_X\\subseteq H_Y\\), so \\(X\\) is weakly before \\(Y\\).\nTo show the strict inequality, we'll assume for the purpose of contradiction that \\(H_X\\) = \\(H_Y\\).\nNotice that \\(Z\\) can be computed from \\(X\\) together with \\(Y\\)—that is, \\(Z\\leq_{\\Omega} X\\vee_{\\Omega} Y\\)—and therefore \\(H_Z\\subseteq H_X\\cup H_Y\\) (i.e., \\(H_Z \\subseteq H_X\\) ). It follows that \\(H_Z = (H_X\\cup H_Y)\\cap H_Z=H_X\\cap H_Z\\). But since \\(H_Z\\) is also disjoint from \\(H_X\\), this means that \\(H_Z = \\{\\}\\), a contradiction.\nThus \\(H_X\\neq H_Y\\), so \\(H_X \\subset H_Y\\), so \\(f^{-1}(X) \\ ᐸ^F \\ f^{-1}(Y)\\), so \\(X \\ ᐸ_D \\ Y\\). □\n \nWhen I'm doing temporal inference using finite factored sets, I largely have proofs that look like this. We collect some facts about emptiness or non-emptiness of various Boolean combinations of histories of variables, and we use these to conclude more facts about histories of variables being subsets of each other.\nI have a more complicated example that uses conditional orthogonality, not just orthogonality; I'm not going to go over it here.\nOne interesting point I want to make here is that we're doing temporal inference—we're inferring that \\(X\\) is before \\(Y\\)—but I claim that we're also doing conceptual inference.\nImagine that I had a bit, and it's either a 0 or a 1, and it's either blue or green. And these two facts are primitive and independently generated. And I also have this other concept that's like, \"Is it grue or bleen?\", which is the \\(\\mathrm{XOR}\\) of blue/green and 0/1.\nThere's a sense in which we're inferring \\(X\\) is before \\(Y\\) , and in that case, we can infer that blueness is before grueness. And that's pointing at the fact that blueness is more primitive, and grueness is a derived property.\nIn our proof, \\(X\\) and \\(Z\\) can be thought of as these primitive properties, and \\(Y\\) is a derived property that we're getting from them. So we're not just inferring time; we're inferring facts about what are good, natural concepts. And I think that there's some hope that this ontology can do for the statement \"you can't really distinguish between blue and grue\" what Pearl can do to the statement \"correlation does not imply causation\".\n \n\n \n(Part 2, Main Body)   ·   ·   ·   Applications / Future Work / Speculation\n \nThe future work I'm most excited by with finite factored sets falls into three rough categories: inference (which involves more computational questions), infinity (more mathematical), and embedded agency (more philosophical).\nResearch topics related to inference:\n\nDecidability of Temporal Inference\nEfficient Temporal Inference\nConceptual Inference\nTemporal Inference from Raw Data and Fewer Ontological Assumptions\nTemporal Inference with Deterministic Relationships\nTime without Orthogonality\nConditioned Factored Sets\n\nThere are a lot of research directions suggested by questions like \"How do we do efficient inference in this paradigm?\". 
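(One more aside, not from the talk, as a toy illustration of what mechanized inference could look like: the two-binary-variable argument above can be sanity-checked in any particular model. The sketch below — my own construction — builds one model of Omega whose two factors are "the first bit" and "whether the bits match", verifies that it satisfies both rules in the database D, and confirms that in this model the history of X is a strict subset of the history of Y. This only illustrates the conclusion in one model; the proof above is what establishes it for every model satisfying D.)

```python
from itertools import combinations, product

# One particular factored-set model of Omega = {00, 01, 10, 11}:
# S has two factors, "first bit" (x) and "do the bits match" (m),
# and f maps each s in S to the corresponding pair of bits in Omega.
S = list(product([0, 1], [0, 1]))
f = {(x, m): (x, x if m else 1 - x) for (x, m) in S}

def determines(coords, label):
    """True iff agreement on the factor indices in `coords` forces agreement on `label`."""
    seen = {}
    for s in S:
        key = tuple(s[i] for i in coords)
        if key in seen and seen[key] != label(s):
            return False
        seen[key] = label(s)
    return True

def history(label):
    """Smallest set of factor indices that determines the pulled-back partition."""
    for k in range(3):
        for coords in combinations(range(2), k):
            if determines(coords, label):
                return set(coords)

H_X = history(lambda s: f[s][0])             # what is the first bit?
H_Y = history(lambda s: f[s][1])             # what is the second bit?
H_Z = history(lambda s: f[s][0] == f[s][1])  # do the bits match?

print(H_X, H_Y, H_Z)       # {0} {0, 1} {1}
print(not (H_X & H_Z))     # rule (X, Z, {Omega}) in O holds: X is orthogonal to Z
print(bool(H_Z))           # rule (Z, Z, {Omega}) in N holds: Z is not orthogonal to itself
print(H_X < H_Y)           # strict subset, so X is strictly before Y in this model
```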
Some of the questions here come from the fact that we're making fewer assumptions than Pearl, and are in some sense more coming from the raw data.\nThen I have the applications that are about extending factored sets to the infinite case:\n\nExtending Definitions to the Infinite Case\nThe Fundamental Theorem of Finite-Dimensional Factored Sets\nContinuous Time\nNew Lens on Physics\n\nEverything I've presented in this talk was under the assumption of finiteness. In some cases this wasn't necessary—but in a lot of cases it actually was, and I didn't draw attention to this.\nI suspect that the fundamental theorem can be extended to finite-dimensional factored sets (i.e., factored sets where \\(|B|\\) is finite), but it can not be extended to arbitrary-dimension factored sets.\nAnd then, what I'm really excited about is applications to embedded agency:\n\nEmbedded Observations\nCounterfactability\nCartesian Frames Successor\nUnraveling Causal Loops\nConditional Time\nLogical Causality from Logical Induction\nOrthogonality as Simplifying Assumptions for Decisions\nConditional Orthogonality as Abstraction Desideratum\n\n \nI focused on the temporal inference aspect of finite factored sets in this talk, because it's concrete and tangible to be able to say, \"Ah, we can do Pearlian temporal inference, only we can sometimes infer more structure and we rely on fewer assumptions.\"\nBut really, a lot of the applications I'm excited about involve using factored sets to model situations, rather than inferring factored sets from data.\nAnywhere that we currently model a situation using graphs with directed edges that represent information flow or causality, we might instead be able to use factored sets to model the situation; and this might allow our models to play more nicely with abstraction.\nI want to build up the factored set ontology as an alternative to graphs when modeling agents interacting with things, or when modeling information flow. And I'm really excited about that direction.\n\nThe post Finite Factored Sets appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Finite Factored Sets", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=5", "id": "f67c3edc5cd3848bfd4d9bb0053be6e5"} {"text": "May 2021 Newsletter\n\nMIRI senior researcher Scott Garrabrant has a major new result, \"Finite Factored Sets,\" that he'll be unveiling in an online talk this Sunday at noon Pacific time. (Zoom link.) For context on the result, see Scott's new post \"Saving Time.\"\nIn other big news, MIRI has just received its two largest individual donations of all time! Ethereum inventor Vitalik Buterin has donated ~$4.3 million worth of ETH to our research program, while an anonymous long-time supporter has donated MKR tokens we liquidated for an astounding ~$15.6 million. The latter donation is restricted so that we can spend a maximum of $2.5 million of it per year until 2025, like a multi-year grant.\nBoth donors have our massive thanks for these incredible gifts to support our work!\nOther MIRI updates\n\nMark Xu and Evan Hubinger use \"Cartesian world models\" to distinguish \"consequential agents\" (which assign utility to environment states, internal states, observations, and/or actions) \"structural agents\" (which optimize \"over the set of possible decide functions instead of the set of possible actions\"), and \"conditional agents\" (which map e.g. 
environmental states to utility functions, rather than mapping them to utility).\nIn Gradations of Inner Alignment Obstacles, Abram Demski makes three \"contentious claims\":\n\n\n\nThe most useful definition of \"mesa-optimizer\" doesn't require them to perform explicit search, contrary to the current standard.\nSuccess at aligning narrowly superhuman models might be bad news.\nSome versions of the lottery ticket hypothesis seem to imply that randomly initialized networks already contain deceptive agents.\n\n\n\nEliezer Yudkowsky comments on the relationship between early AGI systems' alignability and capabilities.\n\nNews and links\n\nJohn Wentworth announces a project to test the natural abstraction hypothesis, which asserts that \"most high-level abstract concepts used by humans are 'natural'\" and therefore \"a wide range of architectures will reliably learn similar high-level concepts\".\nOpen Philanthropy's Joe Carlsmith asks \"Is Power-Seeking AI an Existential Risk?\", and Luke Muehlhauser asks for examples of treacherous turns in the wild (also on LessWrong).\nFrom DeepMind's safety researchers: What Mechanisms Drive Agent Behavior?, Alignment of Language Agents, and An EPIC Way to Evaluate Reward Functions. Also, Rohin Shah provides his advice on entering the field.\nOwen Shen and Peter Hase summarize 70 recent papers on model transparency, interpretability, and explainability.\nEli Tyre asks: How do we prepare for final crunch time? (I would add some caveats: Some roles and scenarios imply that you'll have less impact on the eve of AGI, and can have far more impact today. For some people, \"final crunch time\" may be now, and marginal efforts matter less later. Further, some forms of \"preparing for crunch time\" will fail if there aren't clear warning shots or fire alarms.)\nPaul Christiano launches a new organization that will be his focus going forward: the Alignment Research Center. Learn more about Christiano's research approach in My Research Methodology and in his recent AMA.\n\n\nThe post May 2021 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "May 2021 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=5", "id": "ec4b312c4ca7878bfe366ee207a8ea6f"} {"text": "Saving Time\n\n\nNote: This is a preamble to Finite Factored Sets, a sequence I'll be posting over the next few weeks. This Sunday at noon Pacific time, I'll be giving a Zoom talk (link) introducing Finite Factored Sets, a framework which I find roughly as technically interesting as logical induction.\n(Update May 25: A video and blog post introducing Finite Factored Sets is now available here.)\n\n \nFor the last few years, a large part of my research motivation has been directed at trying to save the concept of time—save it, for example, from all the weird causal loops created by decision theory problems. This post will hopefully explain why I care so much about time, and what I think needs to be fixed.\n \nWhy Time?\nMy best attempt at a short description of time is that time is causality. For example, in a Pearlian Bayes net, you draw edges from earlier nodes to later nodes. To the extent that we want to think about causality, then, we will need to understand time.\nImportantly, time is the substrate in which learning and commitments take place. When agents learn, they learn over time. 
The passage of time is like a ritual in which opportunities are destroyed and knowledge is created. And I think that many models of learning are subtly confused, because they are based on confused notions of time.\nTime is also crucial for thinking about agency. My best short-phrase definition of agency is that agency is time travel. An agent is a mechanism through which the future is able to affect the past. An agent models the future consequences of its actions, and chooses actions on the basis of those consequences. In that sense, the consequence causes the action, in spite of the fact that the action comes earlier in the standard physical sense.\n \nProblem: Time is Loopy\nThe main thing going wrong with time is that it is \"loopy.\"\nThe primary confusing thing about Newcomb's problem is that we want to think of our decision as coming \"before\" the filling of the boxes, in spite of the fact that it physically comes after. This is hinting that maybe we want to understand some other \"logical\" time in addition to the time of physics.\nHowever, when we attempt to do this, we run into two problems: Firstly, we don't understand where this logical time might come from, or how to learn it, and secondly, we run into some apparent temporal loops.\nI am going to set aside the first problem and focus on the second.\nThe easiest way to see why we run into temporal loops is to notice that it seems like physical time is at least a little bit entangled with logical time.\nImagine the point of view of someone running a physics simulation of Newcomb's problem, and tracking all of the details of all of the atoms. From that point of view, it seems like there is a useful sense in which the filling of the boxes comes before an agent's decision to one-box or two-box. At the same time, however, those atoms compose an agent that shouldn't make decisions as though it were helpless to change anything.\nMaybe the solution here is to think of there being many different types of \"before\" and \"after,\" \"cause\" and \"effect,\" etc. For example, we could say that X is before Y from an agent-first perspective, but Y is before X from a physics-first perspective.\nI think this is right, and we want to think of there as being many different systems of time (hopefully predictably interconnected). But I don't think this resolves the whole problem.\nConsider a pair of FairBot agents that successfully execute a Löbian handshake to cooperate in an open-source prisoner's dilemma. I want to say that each agent's cooperation causes the other agent's cooperation in some sense. I could say that relative to each agent the causal/temporal ordering goes a different way, but I think the loop is an important part of the structure in this case. (I also am not even sure which direction of time I would want to associate with which agent.)\nWe also are tempted to put loops in our time/causality for other reasons. For example, when modeling a feedback loop in a system that persists over time, we might draw structures that look a lot like a Bayes net, but are not acyclic (e.g., a POMDP). 
We could think of this as a projection of another system that has an extra dimension of time, but it is a useful projection nonetheless.\n \nSolution: Abstraction\nMy main hope for recovering a coherent notion of time and unraveling these temporal loops is via abstraction.\nIn the example where the agent chooses actions based on their consequences, I think that there is an abstract model of the consequences that comes causally before the choice of action, which comes before the actual physical consequences.\nIn Newcomb's problem, I want to say that there is an abstract model of the action that comes causally before the filling of the boxes.\nIn the open source prisoners' dilemma, I want to say that there is an abstract proof of cooperation that comes causally before the actual program traces of the agents.\nAll of this is pointing in the same direction: We need to have coarse abstract versions of structures come at a different time than more refined versions of the same structure. Maybe when we correctly allow for different levels of description having different links in the causal chain, we can unravel all of the time loops.\n \nBut How?\nUnfortunately, our best understanding of time is Pearlian causality, and Pearlian causality does not do great with abstraction.\nPearl has Bayes nets with a bunch of variables, but when some of those variables are coarse abstract versions of other variables, then we have to allow for determinism, since some of our variables will be deterministic functions of each other; and the best parts of Pearl do not do well with determinism.\nBut the problem runs deeper than that. If we draw an arrow in the direction of the deterministic function, we will be drawing an arrow of time from the more refined version of the structure to the coarser version of that structure, which is in the opposite direction of all of our examples.\nMaybe we could avoid drawing this arrow from the more refined node to the coarser node, and instead have a path from the coarser node to the refined node. But then we could just make another copy of the coarser node that is deterministically downstream of the more refined node, adding no new degrees of freedom. What is then stopping us from swapping the two copies of the coarser node?\nOverall, it seems to me that Pearl is not ready for some of the nodes to be abstract versions of other nodes, which I think needs to be fixed in order to save time.\n\nDiscussion on: LessWrong\n\nThe post Saving Time appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Saving Time", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=6", "id": "a7f3958b9bfae4eca5aabba881250c34"} {"text": "Our all-time largest donation, and major crypto support from Vitalik Buterin\n\n\nI'm thrilled to announce two major donations to MIRI!\n \nFirst, a long-time supporter has given MIRI by far our largest donation ever: $2.5 million per year over the next four years, and an additional ~$5.6 million in 2025.\nThis anonymous donation comes from a cryptocurrency investor who previously donated $1.01M in ETH to MIRI in 2017. Their amazingly generous new donation comes in the form of 3001 MKR, governance tokens used in MakerDAO, a stablecoin project on the Ethereum blockchain. MIRI liquidated the donated MKR for $15,592,829 after receiving it. 
With this donation, the anonymous donor becomes our largest all-time supporter.\nThis donation is subject to a time restriction whereby MIRI can spend a maximum of $2.5M of the gift in each of the next four calendar years, 2021–2024. The remaining $5,592,829 becomes available in 2025.\n \nSecond, in other amazing news, the inventor and co-founder of Ethereum, Vitalik Buterin, yesterday gave us a surprise donation of 1050 ETH, worth $4,378,159.\nThis is the third-largest contribution to MIRI's research program to date, after Open Philanthropy's ~$7.7M grant in 2020 and the anonymous donation above.\nVitalik has previously donated over $1M to MIRI, including major support in our 2017 fundraiser.\n \nWe're beyond grateful for these two unprecedented individual gifts! Both donors have our heartfelt thanks.\n \n\nThe post Our all-time largest donation, and major crypto support from Vitalik Buterin appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Our all-time largest donation, and major crypto support from Vitalik Buterin", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=6", "id": "8ac6d356b68aa06c57b8992f01e5ac3e"} {"text": "April 2021 Newsletter\n\n\nMIRI updates\n\nMIRI researcher Abram Demski writes regarding counterfactuals:\n\nI've felt like the problem of counterfactuals is \"mostly settled\" (modulo some math working out) for about a year, but I don't think I've really communicated this online. Partly, I've been waiting to write up more formal results. But other research has taken up most of my time, so I'm not sure when I would get to it.\nSo, the following contains some \"shovel-ready\" problems. If you're convinced by my overall perspective, you may be interested in pursuing some of them. I think these directions have a high chance of basically solving the problem of counterfactuals (including logical counterfactuals). 
[…]\n\n\nAlex Mennen writes a thoughtful critique of one of the core arguments behind Abram's new take on counterfactuals; Abram replies.\nAbram distinguishes simple Bayesians (who reason according to the laws of probability theory) from reflective Bayesians (who endorse background views that justify Bayesianism), and argues that simple Bayesians can better \"escape the trap\" of traditional issues with Bayesian reasoning.\nAbram explains the motivations behind his learning normativity research agenda, providing \"four different elevator pitches, which tell different stories\" about how the research agenda's desiderata hang together.\n\nNews and links\n\nCFAR co-founder Julia Galef has an excellent new book out on human rationality and motivated reasoning: The Scout Mindset: Why Some People See Things Clearly and Others Don't.\nKatja Grace argues that there is pressure for systems with preferences to become more coherent, efficient, and goal-directed.\nAndrew Critch discusses multipolar failure scenarios and \"multi-agent processes with a robust tendency to play out irrespective of which agents execute which steps in the process\".\nA second AI alignment podcast joins Daniel Filan's AI X-Risk Research Podcast: Quinn Dougherty's Technical AI Safety Podcast, with a recent episode featuring Alex Turner.\nA simple but important observation by Mark Xu: Strong Evidence is Common.\n\n\nThe post April 2021 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "April 2021 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=6", "id": "93f7ea62d0205f626d8465a737478f07"} {"text": "March 2021 Newsletter\n\n\nMIRI updates\n\nMIRI's Eliezer Yudkowsky and Evan Hubinger comment in some detail on Ajeya Cotra's The Case for Aligning Narrowly Superhuman Models. This conversation touches on some of the more important alignment research views at MIRI, such as the view that alignment requires a thorough understanding of AGI systems' reasoning \"under the hood\", and the view that early AGI systems should most likely avoid human modeling if possible.\nFrom Eliezer Yudkowsky: A Semitechnical Introductory Dialogue on Solomonoff Induction. 
(Also discussed by Richard Ngo.)\nMIRI research associate Vanessa Kosoy discusses infra-Bayesianism on the AI X-Risk Research Podcast.\nEliezer Yudkowsky and Chris Olah discuss ML transparency on social media.\n\nNews and links\n\nBrian Christian, author of The Alignment Problem: Machine Learning and Human Values, discusses his book on the 80,000 Hours Podcast.\nChris Olah's team releases Multimodal Neurons in Artificial Neural Networks, on artificial neurons that fire for multiple conceptually related stimuli.\nVitalik Buterin reflects on Inadequate Equilibria's arguments in the course of discussing prediction market inefficiencies.\n\n\nThe post March 2021 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "March 2021 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=6", "id": "2aefffc5ac77cbee2c4779a6980febe1"} {"text": "February 2021 Newsletter\n\n\nMIRI updates\n\nAbram Demski distinguishes different versions of the problem of \"pointing at\" human values in AI alignment.\nEvan Hubinger discusses \"Risks from Learned Optimization\" on the AI X-Risk Research Podcast.\nEliezer Yudkowsky comments on AI safety via debate and Goodhart's law.\nMIRI supporters donated ~$135k on Giving Tuesday, of which ~26% was matched by Facebook and ~28% by employers for a total of $207,436! MIRI also received $6,624 from TisBest Philanthropy in late December, largely through Round Two of Ray Dalio's #RedefineGifting initiative. Our thanks to all of you!\nSpencer Greenberg discusses society and education with Anna Salamon and Duncan Sabien on the Clearer Thinking podcast.\nWe Want MoR: Eliezer participates in a (spoiler-laden) discussion of Harry Potter and the Methods of Rationality.\n\nNews and links\n\nRichard Ngo reflects on his time in effective altruism:\n[…] Until recently, I was relatively passive in making big decisions. Often that meant just picking the most high-prestige default option, rather than making a specific long-term plan. This also involved me thinking about EA from a \"consumer\" mindset rather than a \"producer\" mindset. When it seemed like something was missing, I used to wonder why the people responsible hadn't done it; now I also ask why I haven't done it, and consider taking responsibility myself.\nPartly that's just because I've now been involved in EA for longer. But I think I also used to overestimate how established and organised EA is. In fact, we're an incredibly young movement, and we're still making up a lot of things as we go along. That makes proactivity more important.\nAnother reason to value proactivity highly is that taking the most standard route to success is often overrated. […] My inspiration in this regard is a friend of mine who has, three times in a row, reached out to an organisation she wanted to work for and convinced them to create a new position for her.\n\nNgo distinguishes claims about goal specification, orthogonality, instrumental convergence, value fragility, and Goodhart's law based on whether they refer to systems at training time versus deployment time.\nConnor Leahy, author of The Hacker Learns to Trust, argues (among other things) that \"GPT-3 is our last warning shot\" for coordinating to address AGI alignment. (Podcast version.) 
I include this talk because it's a good talk and the topic warrants discussion, though MIRI staff don't necessarily endorse this claim — and Eliezer would certainly object to any claim that something is a fire alarm for AGI.\nOpenAI safety researchers including Dario Amodei, Paul Christiano, and Chris Olah depart OpenAI.\nOpenAI's DALL-E uses GPT-3 for image generation, while CLIP exhibits impressive zero-shot image classification capabilities. Gwern Branwen comments in his newsletter.\n\n\nThe post February 2021 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "February 2021 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=6", "id": "8b32c17e7506eeeef3e13ef9b2fdd4c2"} {"text": "January 2021 Newsletter\n\n\nMIRI updates\n\nMIRI's Evan Hubinger uses a notion of optimization power to define whether AI systems are compatible with the strategy-stealing assumption.\nMIRI's Abram Demski discusses debate approaches to AI safety that don't rely on factored cognition.\nEvan argues that the first AGI systems are likely to be very similar to each other, and discusses implications for alignment.\nJack Clark's Import AI newsletter discusses the negative research results from our end-of-year update.\nRichard Ngo shares high-quality discussion of his great introductory sequence AGI Safety from First Principles, featuring Paul Christiano, Max Daniel, Ben Garfinkel, Adam Gleave, Matthew Graves, Daniel Kokotajlo, Will MacAskill, Rohin Shah, Jaan Tallinn, and MIRI's Evan Hubinger and Buck Shlegeris.\nTom Chivers discusses the rationality community and Rationality: From AI to Zombies. (Contrary to the headline, COVID-19 receives little discussion.)\n\nNews and links\n\nAlex Flint argues that growing the field of AI alignment researchers should be a side-effect of optimizing for \"research depth\", rather than functioning as a target in its own right — much as software projects shouldn't optimize for larger teams or larger codebases. Flint also comments on strategy/policy research.\nDaniel Kokotajlo of the Center on Long-Term Risk argues that AGI may enable decisive strategic advantage before GDP accelerates. Cf. a pithier comment from Eliezer Yudkowsky.\nTAI Safety Bibliographic Database: Jess Riedel and Angelica Deibel release a database of AI safety research, and analyze recent trends in the field.\nDeepMind's AI safety team investigates optimality properties of meta-trained RNNs and the tampering problem: \"How can we design agents that pursue a given objective when all feedback mechanisms for describing that objective are influenceable by the agent?\"\nFacebook launches Forecast, a new community prediction platform akin to Metaculus.\nEffective altruists have released the microCOVID calculator, a very handy tool for assessing activities' COVID-19 infection risk. 
Meanwhile, Zvi Mowshowitz's weekly updates on LessWrong continue to be a good (US-centric) resource for staying up to date on COVID-19 developments such as the B117 variant.\nRethink Priorities researcher Linchuan Zhang summarizes things he's learned forecasting COVID-19 in 2020: forming good outside views is often hard; effective altruists tend to overrate superforecasters; etc.\n\n\nThe post January 2021 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "January 2021 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=6", "id": "c5a821c56127cb94328cfe1e0a144ab3"} {"text": "December 2020 Newsletter\n\n\nMIRI COO Malo Bourgon reviews our past year and discusses our future plans in 2020 Updates and Strategy.\nOur biggest update is that we've made less concrete progress than we expected on the new research we described in 2018 Update: Our New Research Directions. As a consequence, we're scaling back our work on these research directions, and looking for new angles of attack that have better odds of resulting in a solution to the alignment problem.\nOther MIRI updates\n\nA new paper from MIRI researcher Evan Hubinger: \"An Overview of 11 Proposals for Building Safe Advanced AI.\"\nA belated paper announcement from last year: Andrew Critch's \"A Parametric, Resource-Bounded Generalization of Löb's Theorem, and a Robust Cooperation Criterion for Open-Source Game Theory\", a result originally written up during his time at MIRI, has been published in the Journal of Symbolic Logic.\nMIRI's Abram Demski introduces Learning Normativity: A Research Agenda. See also Abram's new write-up, Normativity.\nEvan Hubinger clarifies inner alignment terminology.\nThe Survival and Flourishing Fund (SFF) has awarded MIRI $563,000 in its latest round of grants! Our enormous gratitude to SFF's grant recommenders and funders.\nA Map That Reflects the Territory is a new print book set collecting the top LessWrong essays of 2018, including essays by MIRI researchers Eliezer Yudkowsky, Abram Demski, and Scott Garrabrant.\nDeepMind's Rohin Shah gives his overview of Scott Garrabrant's Cartesian Frames framework.\n\nNews and links\n\nDaniel Filan launches the AI X-Risk Research Podcast (AXRP) with episodes featuring Adam Gleave, Rohin Shah, and Andrew Critch.\nDeepMind's AlphaFold represents a very large advance in protein structure prediction.\nMetaculus launches Forecasting AI Progress, an open four-month tournament to predict advances in AI, with a $50,000 prize pool.\nContinuing the Takeoffs Debate: Richard Ngo responds to Paul Christiano's \"changing selection pressures\" argument against hard takeoff.\nOpenAI's Beth Barnes discusses the obfuscated arguments problem for AI safety via debate: \nPreviously we hoped that debate/IDA could verify any knowledge for which such human-understandable arguments exist, even if these arguments are intractably large. We hoped the debaters could strategically traverse small parts of the implicit large argument tree and thereby show that the whole tree could be trusted.\nThe obfuscated argument problem suggests that we may not be able to rely on debaters to find flaws in large arguments, so that we can only trust arguments when we could find flaws by recursing randomly—e.g. 
because the argument is small enough that we could find a single flaw if one existed, or because the argument is robust enough that it is correct unless it has many flaws.\n\n \n\nSome AI Research Areas and Their Relevance to Existential Safety: Andrew Critch compares out-of-distribution robustness, agent foundations, multi-agent RL, preference learning, and other research areas.\nBen Hoskin releases his 2020 AI Alignment Literature Review and Charity Comparison.\nOpen Philanthropy summarizes its AI governance grantmaking to date.\n\n\nThe post December 2020 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "December 2020 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=7", "id": "ab3afe66a641b60fa678daebc429f090"} {"text": "2020 Updates and Strategy\n\nMIRI's 2020 has been a year of experimentation and adjustment. In response to the COVID-19 pandemic, we largely moved our operations to more rural areas in March, and shifted to a greater emphasis on remote work. We took the opportunity to try new work set-ups and approaches to research, and have been largely happy with the results.\nAt the same time, 2020 saw limited progress in the research MIRI's leadership had previously been most excited about: the new research directions we started in 2017. Given our slow progress to date, we are considering a number of possible changes to our strategy, and MIRI's research leadership is shifting much of their focus toward searching for more promising paths.\n\nLast year, I projected that our 2020 budget would be $6.4M–$7.4M, with a point estimate of $6.8M. I now expect that our 2020 spending will be slightly above $7.4M. The increase in spending above my point estimate largely comes from expenses we incurred relocating staff and taking precautions in response to the COVID-19 pandemic.\nOur budget for 2021 is fairly uncertain, given that we are more likely than usual to see high-level shifts in our strategy in the coming year. My current estimate is that our spending will fall somewhere between $6M and $7.5M, which I expect to roughly break down as follows:\n\n \nI'm also happy to announce that the Survival and Flourishing Fund (SFF) has awarded MIRI $563,000 to support our research going forward, on top of support they provided earlier this year.\nGiven that our research program is in a transitional period, and given the strong support we have already received this year—$4.38M from Open Philanthropy, $903k from SFF, and ~$1.1M from other contributors (thank you all!)—we aren't holding a formal fundraiser this winter. Donations are still welcome and appreciated during this transition; but we'll wait to make our case to donors when our plans are more solid. 
For now, see our donate page if you are interested in supporting our research.\nBelow, I'll go into more detail on how our 2020 has gone, and on our plans for the future.\n \n2017-Initiated Research Directions and Research Plans\nIn 2017, we introduced a new set of research directions, which we described and motivated more in \"2018 Update: Our New Research Directions.\" We wrote that we were \"seeking entirely new low-level foundations for optimization,\" \"endeavoring to figure out parts of cognition that can be very transparent as cognition,\" and \"experimenting with some specific alignment problems.\" In December 2019, we noted that we felt we were making \"steady progress\" on this research, but were disappointed with the concrete results we'd had to date.\nAfter pushing more on these lines of research, MIRI senior staff have become more pessimistic about this approach. MIRI executive director and senior researcher Nate Soares writes:\nThe non-public-facing research I (Nate) was most excited about had a flavor of attempting to develop new pragmatically-feasible foundations for alignable AI, that did not rely on routing through gradient-descent-style machine learning foundations. We had various reasons to hope this could work, despite the obvious difficulties.\nThat project has, at this point, largely failed, in the sense that neither Eliezer nor I have sufficient hope in it for us to continue focusing our main efforts there. I'm uncertain whether it failed due to implementation failures on our part, due to the inherent difficulty of the domain, or due to flaws in the underlying theory.\nPart of the reason we lost hope is a sense that we were moving too slowly, given our sense of how far off AGI may be and our sense of the difficulty of the alignment problem. The field of AI alignment is working under a deadline, such that if work is going sufficiently slowly, we're better off giving up and pivoting to new projects that have a real chance of resulting in the first AGI systems being built on alignable foundations.\nWe are currently in a state of regrouping, weighing our options, and searching for plans that we believe may yet have a shot at working.\nLooking at the field as a whole, MIRI's research leadership remains quite pessimistic about most alignment proposals that we have seen put forward so far. That is, our update toward being more pessimistic about our recent research directions hasn't reduced our pessimism about the field of alternatives, and the next directions we undertake are not likely to resemble the directions that are popular outside of MIRI today.\nMIRI sees the need for a change of course with respect to these projects. At the same time, many (including Nate) still have some hope in the theory underlying this research, and have hope that the projects may be rescued in some way, such as by discovering and correcting failures in how we approached this research. But time spent on rescue efforts trades off against finding better and more promising alignment plans.\nSo we're making several changes affecting staff previously focused on this work. Some are departing MIRI for different work, as we shift direction away from lines they were particularly suited for. Some are seeking to rescue the 2017-initiated lines of research. Some are pivoting to different experiments and exploration.\nWe are uncertain about what long-term plans we'll decide on, and are in the process of generating new possible strategies. 
Some (non-mutually-exclusive) possibilities include:\n\nWe may become a home to diverse research approaches aimed at developing a new path to alignment. Given our increased uncertainty about the best angle of attack, it may turn out to be valuable to house a more diverse portfolio of projects, with some level of intercommunication and cross-pollination between approaches.\nWe may commit to an entirely new approach after a period of exploration, if we can identify one that we believe has a real chance of ensuring positive outcomes from AGI.\nWe may carry forward theories and insights from our 2017-initiated research directions into future plans, in a different form.\n\n \nResearch Write-Ups\nAlthough our 2017-initiated research directions have been our largest focus over the last few years, we've been running many other research programs in parallel with it.\nThe bulk of this work is nondisclosed-by-default as well, but it includes work we've written up publicly. (Note that as a rule, this public-facing work is unrepresentative of our research as a whole.)\nFrom our perspective, our most interesting public work this year is Scott Garrabrant's Cartesian frames model and Vanessa Kosoy's work on infra-Bayesianism.\nCartesian frames is a new framework for thinking about agency, intended as a successor to the cybernetic agent model. Whereas the cybernetic agent model assumes as basic an agent and environment persisting across time with a defined and stable I/O channel, Cartesian frames treat these features as more derived and dependent on how one conceptually carves up physical situations.\nThe Cartesian Frames sequence focuses especially on finding derived, approximation-friendly versions of the notion of \"subagent\" (previously discussed in \"Embedded Agency\") and temporal sequence (a source of decision-theoretic problems in cases where agents can base their decisions on predictions or proofs about their own actions). The sequence's final post discusses these and other potential directions for future work for the field to build on.\nIn general, MIRI's researchers are quite interested in new conceptual frameworks like these, as research progress can often be bottlenecked on our using the wrong lenses for thinking about problems, or on our lack of a simple formalism for putting intuitions to the test.\nMeanwhile, Vanessa Kosoy and Alex Appel's infra-Bayesianism is a novel framework for modeling reasoning in cases where the reasoner's hypothesis space may not include the true environment.\nThis framework is interesting primarily because it seems applicable to such a wide variety of problems: non-realizability, decision theory, anthropics, embedded agency, reflection, and the synthesis of induction/probability with deduction/logic. Vanessa describes infra-Bayesianism as \"opening the way towards applying learning theory to many problems which previously seemed incompatible with it.\"\n2020 also saw a large update to Scott and Abram's \"Embedded Agency,\" with some discussions clarified and several new subsections added. 
Additionally, a revised version of Vanessa's \"Optimal Polynomial-Time Estimators: A Bayesian Notion of Approximation Algorithm,\" co-authored with Alex Appel, was published in the Journal of Applied Logics.\nTo give a picture of some of the other research areas we've been pushing on, we asked some MIRI researchers and research associates to pick out highlights from their work over the past year, with comments on their selections.\nAbram Demski highlights the following write-ups:\n\n\"An Orthodox Case Against Utility Functions\" — \"Although in some sense this is a small technical point, it is indicative of a shift in perspective in some recent agent foundations research which I think is quite important.\"\n\"Radical Probabilism\" — \"Again, although one could see this as merely an explanation of the older logical induction result, I think it points at an important shift in perspective.\"\n\"Learning Normativity: A Research Agenda\" — \"In a sense, this research agenda clarifies the shift in perspective which the above two posts were communicating, although I haven't tied everything together yet.\n\"Dutch-Booking CDT: Revised Argument\" — \"To my eye, this is a large-ish decision theory result.\"\n\nEvan Hubinger summarizes his public research from the past year:\n\n\"An Overview of 11 Proposals for Building Safe Advanced AI\" — \"Probably my biggest project this year, this paper is my attempt at a unified explanation of the current major leading prosaic alignment proposals. The paper includes an exploration of each proposal's pros and cons from the perspective of outer alignment, inner alignment, training competitiveness, and performance competitiveness.\"\n\"I started mentoring Adam Shimi and Mark Xu this year, helping them start spinning up work in AI safety. Concrete things that came out of this include Adam Shimi's 'Universality Unwrapped' and Mark Xu's 'Does SGD Produce Deceptive Alignment?'\"\n\"I spent a lot of time this year thinking about AI safety via debate, which resulted in two new alternative debate proposals: 'AI Safety via Market Making' and 'Synthesizing Amplification and Debate.'\"\n\"I spent some time looking at different alignment proposals from a computational complexity standpoint, resulting in 'Alignment Proposals and Complexity Classes' and 'Weak HCH Accesses EXP.'\n\"'Outer Alignment and Imitative Amplification' makes the case for why imitative amplification is outer aligned; 'Learning the Prior and Generalization' provides my perspective on Paul's new 'learning the prior' approach; and 'Clarifying Inner Alignment Terminology' revisits terminology from 'Risks from Learned Optimization.'\"\n\nEarlier this year, Buck Shlegeris (link) and Evan Hubinger (link) also appeared on the Future of Life Institute's AI Alignment Podcast. 
Buck also gave a talk at Stanford: \"My Personal Cruxes for Working on AI Safety.\"\nLastly, Future of Humanity Institute researcher and MIRI research associate Stuart Armstrong summarizes his own research highlights:\n\n\"Pitfalls of Learning a Reward Function Online,\" working with DeepMind's Jan Leike, Laurent Orseau, and Shane Legg — \"This shows how agents can manipulate a \"learning\" process, the conditions that make that learning actually uninfluenceable, and some methods for turning influenceable learning processes into uninfluenceable ones.\"\n\"Model Splintering\" — \"Here I argue that a lot of AI safety problems can be reduced to the same problem: that of dealing with what happens when you move out of distribution from the training data. I argue that a principled way of dealing with these \"model splinterings\" is necessary to get safe AI, and sketch out some examples.\"\n\"Syntax, Semantics, and Symbol Grounding, Simplified\" — \"Here I argue that symbol grounding is a practical, necessary thing, not an abstract philosophical concept.\"\n\n \nProcess Improvements and Plans\nGiven the unusual circumstances brought on by the COVID-19 pandemic, in 2020 MIRI decided to run various experiments to see if we could improve our researchers' productivity while our Berkeley office was unavailable. In the process, a sizable subset of our research team has found good modifications to our work environment that we aim to maintain and expand on.\nMany of our research staff who spent this year in live-work quarantine groups in relatively rural areas in response to the COVID-19 pandemic have found surprisingly large benefits from living in a quieter, lower-density area together with a number of other researchers. Coordination and research have felt faster at a meta-level, with shorter feedback cycles, more efforts on more cruxy experiments, and more resulting pivots. Our biggest such pivot has been away from our 2017-initiated research directions, as described above.\nSeparately, MIRI staff have been weighing the costs and benefits of possibly moving somewhere outside the Bay Area for several years—taking into account the housing crisis and other governance failures, advantages and disadvantages of the local culture, tail risks of things taking a turn for the worse in the future, and other factors.\nPartly as a result of these considerations, and partly because it's easier to move when many of us have already relocated this year due to COVID-19, MIRI is considering relocating away from Berkeley. As we weigh the options, a particularly large factor in our considerations is whether our researchers expect the location, living situation, and work setup to feel good and comfortable, as we generally expect this to result in improved research progress. Increasingly, this factor is pointing us towards moving someplace new.\nMany at MIRI have noticed in the past that there are certain social settings, such as small effective altruism or alignment research retreats, that seem to spark an unusually high density of unusually productive conversations. Much of the energy and vibrancy in such retreats presumably stems from their novelty and their time-boxed nature. However, we suspect this isn't the only reason these events tend to be dense and productive, and we believe that we may be able to create a space that has some of these features every day.\nThis year, a number of our researchers have indeed felt that our new work set-up during the pandemic has a lot of this quality. 
We're therefore very eager to see if we can modify MIRI as a workplace so as to keep this feature around, or further augment it.\n \nOur year, then, has been characterized by some significant shifts in our thinking about research practices and which research directions are most promising.\nAlthough we've been disappointed by our level of recent concrete progress toward understanding how to align AGI-grade optimization, we plan to continue capitalizing on MIRI's strong pool of talent and accumulated thinking about alignment as we look for new and better paths forward. We'll provide more updates about our new strategy as our plans solidify.\nThe post 2020 Updates and Strategy appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "2020 Updates and Strategy", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=7", "id": "5caf98458344e8e2b96152956ec0bdbf"} {"text": "November 2020 Newsletter\n\n\n\nMIRI researcher Scott Garrabrant has completed his Cartesian Frames sequence. Scott also covers the first two posts' contents in video form.\n\nOther MIRI updates\n\nContrary to my previous announcement, MIRI won't be running a formal fundraiser this year, though we'll still be participating in Giving Tuesday and other matching opportunities. To donate and get information on tax-advantaged donations, employer matching, etc., see intelligence.org/donate. We'll also be doing an end-of-the-year update and retrospective in the next few weeks.\nFacebook's Giving Tuesday matching event takes place this Tuesday (Dec. 1) at 5:00:00am PT.  Facebook will 100%-match the first $2M donated, something that will plausibly take only 2–3 seconds. To get 100%-matched, then, it's even more important than last year to start clicking at 4:59:59AM PT. Facebook will then 10%-match the next $50M of donations that are made. Details on optimizing your donation(s) to MIRI's Facebook Fundraiser can be found at EA Giving Tuesday, a Rethink Charity project.\nVideo discussion: Stuart Armstrong, Scott Garrabrant, and the AI Safety Reading Group discuss Stuart's If I Were A Well-Intentioned AI….\nMIRI research associate Vanessa Kosoy raises questions about AI information hazards.\nBuck Shlegeris argues that we're likely at the \"hinge of history\" (assuming we aren't living in a simulation).\nTo make it easier to find and cite old versions of MIRI papers (especially ones that aren't on arXiv), we've collected links to obsolete versions on intelligence.org/revisions.\n\nNews and links\n\nCFAR's Anna Salamon asks: Where do (did?) stable, cooperative institutions come from?\nThe Center for Human-Compatible AI is accepting applications for research internships through Dec. 13.\nAI Safety (virtual) Camp is accepting applications through Dec. 
15.\nThe 4th edition of Artificial Intelligence: A Modern Approach is out, with expanded discussion of the alignment problem.\nDeepMind's Rohin Shah reviews Brian Christian's new book The Alignment Problem: Machine Learning and Human Values.\nDaniel Filan and Rohin Shah discuss security mindset and takeoff speeds.\nFortune profiles existential risk philanthropist Jaan Tallinn.\n\n\nThe post November 2020 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "November 2020 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=7", "id": "d3536997fdcb9337c7317200c03fe31e"} {"text": "October 2020 Newsletter\n\n\nStarting today, Scott Garrabrant has begun posting Cartesian Frames, a sequence introducing a new conceptual framework Scott has found valuable for thinking about agency.\nIn Scott's words: Cartesian Frames are \"applying reasoning like Pearl's to objects like game theory's, with a motivation like Hutter's\".\nScott will be giving an online talk introducing Cartesian frames this Sunday at 12pm PT on Zoom (link). He'll also be hosting office hours on Gather.Town the next four Sundays; see here for details.\nOther MIRI updates\n\nAbram Demski discusses the problem of comparing utilities, highlighting some non-obvious implications.\nIn March 2020, the US Congress passed the CARES Act, which changes the tax advantages of donations to qualifying NPOs like MIRI in 2020. Changes include:\n\n1. A new \"above the line\" tax deduction: up to $300 per taxpayer ($600 for a married couple) in annual charitable contributions for people who take the standard deduction. Donations to donor-advised funds (DAFs) do not qualify for this new deduction.\n\t\t \n2. New charitable deduction limits: Taxpayers who itemize their deductions can deduct much greater amounts of their contributions. Individuals can elect to deduct donations up to 100% of their 2020 AGI — up from 60% previously. This higher limit also does not apply to donations to DAFs.\n\n\n\tAs usual, consult with your tax advisor for more information.\nOur fundraiser this year will start on November 29 (two days before Giving Tuesday) and finish on January 2. We're hoping that having our fundraiser straddle 2020 and 2021 will give people more flexibility given the unusual tax law changes above.\nI'm happy to announce that MIRI has received a donation of $246,435 from an anonymous returning donor. Our thanks to the donor, and to Effective Giving UK for facilitating this donation!\n\nNews and links\n\nRichard Ngo tries to provide a relatively careful and thorough version of the standard argument for worrying about AGI risk: AGI Safety from First Principles. 
See also Rohin Shah's summary.\nThe AI Alignment Podcast interviews Andrew Critch about his recent overview paper, \"AI Research Considerations for Human Existential Safety.\"\n\n\nThe post October 2020 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "October 2020 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=7", "id": "ddfdca2eaf7940775e252914e96ef4cd"} {"text": "September 2020 Newsletter\n\n\n\nAbram Demski and Scott Garrabrant have made a major update to \"Embedded Agency\", with new discussions of ε-exploration, Newcomblike problems, reflective oracles, logical uncertainty, Goodhart's law, and predicting rare catastrophes, among other topics.\n\nAbram has also written an overview of what good reasoning looks like in the absence of Bayesian updating: Radical Probabilism. One recurring theme:\n[I]n general (i.e., without any special prior which does guarantee convergence for restricted observation models), a Bayesian relies on a realizability (aka grain-of-truth) assumption for convergence, as it does for some other nice properties. Radical probabilism demands these properties without such an assumption.\n[… C]onvergence points at a notion of \"objectivity\" for the radical probabilist. Although the individual updates a radical probabilist makes can go all over the place, the beliefs must eventually settle down to something. The goal of reasoning is to settle down to that answer as quickly as possible.\nMeanwhile, Infra-Bayesianism is a new formal framework for thinking about optimal reasoning without requiring a reasoner's true environment to be in its hypothesis space. Abram comments: \"Alex Appel and Vanessa Kosoy have been working hard at 'Infra-Bayesianism', a new approach to RL which aims to make it easier (ie, possible) to prove safety-relevant theorems (and, also, a new approach to Bayesianism more generally).\"\nOther MIRI updates\n\nAbram Demski writes a parable on the differences between logical inductors and Bayesians: The Bayesian Tyrant.\nBuilding on the selection vs. control distinction, Abram contrasts \"mesa-search\" and \"mesa-control\".\n\nNews and links\n\nFrom OpenAI's Stiennon et al.: Learning to Summarize with Human Feedback. MIRI researcher Eliezer Yudkowsky comments:\nA very rare bit of research that is directly, straight-up relevant to real alignment problems! They trained a reward function on human preferences and then measured how hard you could optimize against the trained function before the results got actually worse.\n[… Y]ou can ask for results as good as the best 99th percentile of rated stuff in the training data (a la Jessica Taylor's quantilization idea).  Ask for things the trained reward function rates as \"better\" than that, and it starts to find \"loopholes\" as seen from outside the system; places where the trained reward function poorly matches your real preferences, instead of places where your real preferences would rate high reward.\n\nChi Nguyen writes up an introduction to Paul Christiano's iterated amplification research agenda that seeks to be the first such resource that is \"both easy to understand and [gives] a complete picture\". 
The post includes inline comments by Christiano.\nForecasters share visualizations of their AI timelines on LessWrong.\n\n\nThe post September 2020 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "September 2020 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=7", "id": "329ed8f1817691f400f564c0d11bdb04"} {"text": "August 2020 Newsletter\n\n\nMIRI updates\n\nThree questions from MIRI's Abram Demski: What does it mean to apply decision theory?, How \"honest\" is GPT-3?, and How should AI debate be judged?\nA transcript from MIRI researcher Scott Garrabrant: What Would I Do? Self-Prediction in Simple Algorithms.\nMIRI researcher Buck Shlegeris reviews the debate on what the history of nuclear weapons implies about humanity's ability to coordinate.\nFrom MIRI's Evan Hubinger: Learning the Prior and Generalization and Alignment Proposals and Complexity Classes.\nRafael Harth's Inner Alignment: Explain Like I'm 12 Edition summarizes the concepts and takeaways from \"Risks from Learned Optimization\".\nIssa Rice reviews discussion to date on MIRI's research focus, \"To what extent is it possible to have a precise theory of rationality?\", and the relationship between deconfusion research and safety outcomes. (Plus a short reply.)\n\"Pitfalls of Learning a Reward Function Online\" (IJCAI paper, LW summary): FHI researcher and MIRI research associate Stuart Armstrong, with DeepMind's Jan Leike, Laurent Orseau, and Shane Legg, explore ways to discourage agents from manipulating their reward signal to be easier to optimize.\n\nNews and links\n\nFrom Paul Christiano: Learning the Prior and Better Priors as a Safety Problem.\nFrom Victoria Krakovna: Tradeoff Between Desirable Properties for Baseline Choices in Impact Measures.\nBen Pace summarizes Christiano's \"What Failure Looks Like\" post and the resultant discussion.\nKaj Sotala collects recent examples of experiences from people working with GPT-3.\n\n\nThe post August 2020 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "August 2020 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=7", "id": "a545f0974d38782167753fc5dedb67f0"} {"text": "July 2020 Newsletter\n\n\nAfter completing a study fellowship at MIRI that he began in late 2019, Blake Jones is joining the MIRI research team full-time! Blake joins MIRI after a long career working on low-level software systems such as the Solaris operating system and the Oracle database.\nOther MIRI updates\n\nMIRI researcher Evan Hubinger goes on the FLI podcast (transcript/discussion, audio) to discuss \"inner alignment, outer alignment, and proposals for building safe advanced AI\".\nA revised version of Vanessa Kosoy's \"Optimal Polynomial-Time Estimators: A Bayesian Notion of Approximation Algorithm,\" co-authored with Alex Appel, has been accepted to the Journal of Applied Logics.\nFrom MIRI researcher Abram Demski: Dutch-Booking CDT: Revised Argument argues that \"causal\" theories (ones using counterfactuals to evaluate expected value) must behave the same as theories using conditional probabilities. Relating HCH and Logical Induction discusses amplification in the context of reflective oracles. 
And Radical Probabilism reviews the surprising gap between Dutch-book arguments and Bayes' rule.\n\nNews and links\n\nAlex Flint summarizes the Center for Human-Compatible AI's assistance games research program.\nCHAI's Andrew Critch and MILA's David Krueger release \"AI Research Considerations for Human Existential Safety (ARCHES)\", a review of 29 AI (existential) safety research directions, each with an illustrative analogy, examples of current work and potential synergies between research directions, and discussion of ways the research approach might lower (or raise) existential risk.\nOpenAI's Danny Hernandez and Tom Brown present evidence that \"for AI tasks with high levels of recent investment, algorithmic progress has yielded more gains than classical hardware efficiency\".\nDeepMind's Victoria Krakovna shares her takeaways from the COVID-19 pandemic for slow-takeoff scenarios.\nAI Impacts' Daniel Kokotajlo discusses possible changes the world might undergo before reaching AGI.\n80,000 Hours describes careers they view as promising but haven't written up as priority career paths, including information security (previously discussed here).\n\n\nThe post July 2020 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "July 2020 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=8", "id": "becceebfb298362964e0a52e1794ec45"} {"text": "June 2020 Newsletter\n\n\nMIRI researcher Evan Hubinger reviews \"11 different proposals for building safe advanced AI under the current machine learning paradigm\", comparing them on outer alignment, inner alignment, training competitiveness, and performance competitiveness. \nOther updates\n\nWe keep being amazed by new shows of support ⁠— following our last two announcements, MIRI has received a donation from another anonymous donor totaling ~$265,000 in euros, facilitated by Effective Giving UK and the Effective Altruism Foundation. Massive thanks to the donor for their generosity, and to both organizations for their stellar support for MIRI and other longtermist organizations!\nHacker News discusses Eliezer Yudkowsky's There's No Fire Alarm for AGI.\nMIRI researcher Buck Shlegeris talks about deference and inside-view models on the EA Forum.\nOpenAI unveils GPT-3, a massive 175-billion parameter language model that can figure out how to solve a variety of problems without task-specific training or fine-tuning. Gwern Branwen's pithy summary:\nGPT-3 is terrifying because it's a tiny model compared to what's possible, trained in the dumbest way possible on a single impoverished modality on tiny data, yet the first version already manifests crazy runtime meta-learning—and the scaling curves still are not bending!\nFurther discussion by Branwen and by Rohin Shah.\n\nStuart Russell gives this year's Turing Lecture online, discussing \"provably beneficial AI\".\n\n\nThe post June 2020 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "June 2020 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=8", "id": "a95c94075a670ad0262ad5a9718c8116"} {"text": "May 2020 Newsletter\n\n\nMIRI has received an anonymous donation of ~$275,000 in euros, facilitated by Effective Giving UK. 
Additionally, the Survival and Flourishing Fund, working with funders Jaan Tallinn and Jed McCaleb, has announced $340,000 in grants to MIRI. SFF is a new fund that is taking over much of BERI's grantmaking work.\nTo everyone involved in both decisions to support our research, thank you!\nOther updates\n\nAn Orthodox Case Against Utility Functions: MIRI researcher Abram Demski makes the case against utility functions that rely on a microphysical \"view from nowhere\".\nStuart Armstrong's \"If I were a well-intentioned AI…\" sequence looks at alignment problems from the perspective of a well-intentioned but ignorant agent.\nAI Impacts' Asya Bergal summarizes takeaways from \"safety-by-default\" researchers.\nAgarwal and Norouzi report improvements in offline RL.\nDaniel Kokotajlo's Three Kinds of Competitiveness distinguishes performance-competitive, cost-competitive, and date-competitive AI systems.\nThe Stanford Encyclopedia of Philosophy's new Ethics of AI and Robotics article includes a discussion of existential risk from AGI.\n\n\nThe post May 2020 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "May 2020 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=8", "id": "c278a7cd46474222133cd94e55b7cb4e"} {"text": "April 2020 Newsletter\n\n\nMIRI has been awarded its largest grant to date — $7,703,750 split over two years from Open Philanthropy, in partnership with Ben Delo, co-founder of the cryptocurrency trading platform BitMEX!\nWe have also been awarded generous grants by the Berkeley Existential Risk Initiative ($300,000) and the Long-Term Future Fund ($100,000). Our thanks to everybody involved!\nOther updates\n\nBuck Shlegeris of MIRI and Rohin Shah of CHAI discuss Rohin's 2018–2019 overview of technical AI alignment research on the AI Alignment Podcast.\nFrom MIRI's Abram Demski: Thinking About Filtered Evidence Is (Very!) Hard and Bayesian Evolving-to-Extinction. And from Evan Hubinger: Synthesizing Amplification and Debate.\nFrom OpenAI's Beth Barnes, Paul Christiano, Long Ouyang, and Geoffrey Irving: Progress on AI Safety via Debate.\nZoom In: An Introduction to Circuits: OpenAI's Olah, Cammarata, Schubert, Goh, Petrov, and Carter argue, \"Features are the fundamental unit of neural networks. They correspond to directions [in the space of neuron activations]. […] Features are connected by weights, forming circuits. […] Analogous features and circuits form across models and tasks.\"\nDeepMind's Agent57 appears to meet one of the AI benchmarks in AI Impacts' 2016 survey, \"outperform professional game testers on all Atari games using no game specific knowledge\", earlier than NeurIPS/ICML authors predicted.\nFrom DeepMind Safety Research: Specification gaming: the flip side of AI ingenuity.\n\n\nThe post April 2020 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "April 2020 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=8", "id": "9f75e52247981998aac3a7e9425a3128"} {"text": "MIRI's largest grant to date!\n\nA big announcement today: MIRI has been awarded a two-year $7,703,750 grant by Open Philanthropy — our largest grant to date. 
In combination with the ~$1.06M Open Philanthropy is also disbursing to MIRI this year (the second half of their 2019 grant), this amounts to $4.38M per year over two years, or roughly 60% of our predicted 2020–2021 budgets.\nWhile ~$6.24M of Open Philanthropy's new grant comes from their main funders, $1.46M was made possible by Open Philanthropy's new partnership with Ben Delo, co-founder of the cryptocurrency trading platform BitMEX. Ben Delo has teamed up with Open Philanthropy to support their long-termist grantmaking, which includes (quoting Open Philanthropy):\nreducing potential risks from advanced artificial intelligence, furthering biosecurity and pandemic preparedness, and other initiatives to combat global catastrophic risks, as well as much of the work we fund on effective altruism.\nWe're additionally happy to announce a $300,000 grant from the Berkeley Existential Risk Initiative. I'll note that at the time of our 2019 fundraiser, we expected to receive a grant from BERI in early 2020, and incorporated this into our reserves estimates. However, we predicted the grant size would be $600k; now that we know the final grant amount, that estimate should be $300k lower.\nFinally, MIRI has been awarded a $100,000 grant by the Effective Altruism Funds Long-Term Future Fund, managed by the Centre for Effective Altruism. The fund plans to release a write-up describing the reasoning for their new round of grants in a couple of weeks.\nOur thanks to Open Phil, Ben Delo and Longview Philanthropy (Ben Delo's philanthropic advisor, formerly known as Effective Giving UK), BERI, and the Long-Term Future Fund for this amazing support! Going into 2020–2021, these new grants put us in an unexpectedly good position to grow and support our research team. To learn more about our growth plans, see our 2019 fundraiser post and our 2018 strategy update.\nThe post MIRI's largest grant to date! appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "MIRI’s largest grant to date!", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=8", "id": "3cfbcc3184dca0770c8a7a5a0e3d0eb8"} {"text": "March 2020 Newsletter\n\n\n\nAs the COVID-19 pandemic progresses, the LessWrong team has put together a database of resources for learning about the disease and staying updated, and 80,000 Hours has a new write-up on ways to help in the fight against COVID-19. In my non-MIRI time, I've been keeping my own quick and informal notes on various sources' COVID-19 recommendations in this Google Doc. 
Stay safe out there!\n\nUpdates\n\nMy personal cruxes for working on AI safety: a talk transcript from MIRI researcher Buck Shlegeris.\nDaniel Kokotajlo of AI Impacts discusses Cortés, Pizarro, and Afonso as Precedents for Takeover.\nO'Keefe, Cihon, Garfinkel, Flynn, Leung, and Dafoe's \"The Windfall Clause\" proposes \"an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits\" that result from \"fundamental, economically transformative breakthroughs\" like AGI.\nMicrosoft announces the 17-billion-parameter language model Turing-NLG.\nOren Etzioni thinks AGI is too far off to deserve much thought, and cites Andrew Ng's \"overpopulation on Mars\" metaphor approvingly — but he's also moving the debate in a very positive direction by listing specific observations that would make him change his mind.\n\n\nThe post March 2020 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "March 2020 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=8", "id": "0c80303d92f3847beb1a4423a83c50ef"} {"text": "February 2020 Newsletter\n\n\nUpdates\n\nColm reviews our 2019 fundraiser: taking into account matching, we received a total of $601,120 from 250+ donors. Our thanks again for all the support!\nEvan Hubinger's Exploring Safe Exploration clarifies points he raised in Safe Exploration and Corrigibility. The issues raised here are somewhat subtler than may be immediately apparent, since we tend to discuss things in ways that collapse the distinctions Evan is making.\nLogician Arthur Milchior reviews the AIRCS workshop and MIRI's application process based on his first-hand experience with both. See also follow-up discussion with MIRI and CFAR staff.\nRohin Shah posts an in-depth 2018–19 review of the field of AI alignment.\nFrom Shevlane and Dafoe's new \"The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?\":\n \n[T]he existing conversation around AI has heavily borrowed concepts and conclusions from one particular field: vulnerability disclosure in computer security. We caution against AI researchers treating these lessons as immediately applicable. There are important differences between vulnerabilities in software and the types of vulnerabilities exploited by AI. […]\n\tPatches to software are often easy to create, and can often be made in a matter of weeks. These patches fully resolve the vulnerability. The patch can be easily propagated: for downloaded software, the software is often automatically updated over the internet; for websites, the fix can take effect immediately.\n\t[… F]or certain technologies, there is no low-cost, straightforward, effective defence. [… C]onsider biological research that provides insight into the manufacture of pathogens, such as a novel virus. A subset of viruses are very difficult to vaccinate for (there is still no vaccination for HIV) or otherwise prepare against. This lowers the defensive benefit of publication, by blocking a main causal pathway by which publication leads to greater protection. 
This contrasts with the case where an effective treatment can be developed within a reasonable time period[.]\n\nYann LeCun and Eliezer Yudkowsky discuss the concept \"AGI\".\nCFAR's Anna Salamon contrasts \"reality-revealing\" and \"reality-masking\" puzzles.\nScott Alexander reviews Stuart Russell's Human Compatible.\n\n\nLinks from the research team\n\nMIRI researchers anonymously summarize and comment on recent posts and papers:\n\nRe ACDT: a hack-y acausal decision theory — \"Stuart Armstrong calls this decision theory a hack. I think it might be more elegant than he's letting on (i.e., a different formulation could look less hack-y), and is getting at something.\"\nRe Predictors exist: CDT going bonkers… forever — \"I don't think Stuart Armstrong's example really adds much over some variants of Death in Damascus, but there's some good discussion of CDT vs. EDT stuff in the comments.\"\nRe Is the term mesa optimizer too narrow? — \"Matthew Barnett poses the important question, '[I]f even humans are not mesa optimizers, why should we expect mesa optimizers to be the primary real world examples of [malign generalization]?'\"\nRe Malign generalization without internal search — \"I think Matthew Barnett's question here is an important one. I lean toward the 'yes, this is a problem' camp—I don't think we can entirely eliminate malign generalization by eliminating internal search. But it is possible that this falls into other categories of misalignment (which we don't want to term 'inner alignment').\"\nRe (A -> B) -> A in Causal DAGs and Formulating Reductive Agency in Causal Models — \"I've wanted something like this for a while. Bayesian influence diagrams model agents non-reductively, by boldly asserting that some nodes are agentic. Can we make models which represent agents, without declaring a basic 'agent' type like that? John Wentworth offers an approach, representing agents via 'strange loops' across a use-mention boundary; and discusses how this might break down even further, with fully reductive agency. I'm not yet convinced that Wentworth has gotten it right, but it's exciting to see an attempt.\"\n\n\nThe post February 2020 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "February 2020 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=9", "id": "1012fa1ee377e1194d3000693b76d988"} {"text": "Our 2019 Fundraiser Review\n\nOur 2019 fundraiser ended on December 31 with the month-long campaign raising $601,120 from over 259 donors. While we didn't reach our $1M fundraising goal, the amount raised will be a very substantial help to us going into 2020. We're grateful to all who supported us during the fundraiser.\n \nDeserving special mention: 20% of our fundraiser total was raised in a single minute on December 3 during Facebook's Giving Tuesday matching event. Facebook's matching pool of $7M was exhausted within 14 seconds of their event's 5am PST start, but in spite of the shortness of the event, a cohort of lightning-fast early risers secured a significant amount of Facebook matching for MIRI this year; of the $77,325 donated to MIRI on Giving Tuesday, $45,915 was donated early enough to be matched by Facebook.
Thank you to everybody who set their clocks early to support us, and a shout-out to the EA Giving Tuesday/Rethink Charity collaboration, which helped direct $563k in matching funds to EA nonprofits on Giving Tuesday.\n\n\n\n\nBeyond Giving Tuesday, we're very grateful to the Effective Altruism Foundation for providing a channel for residents of a number of European countries, including Germany, Switzerland, and the Netherlands, to donate to MIRI in a tax-advantaged way, and also to donors who leveraged their companies' matching generosity to maximum effect during the fundraiser.\nOur fundraiser fell well short of our $1M target this year, and also short of our in-fundraiser support in 2018 ($947k) and 2017 ($2.5M). It's plausible that some of the following (non-mutually-exclusive) factors may have contributed to this, though we don't know their relative strength:\n\n\nThe value of cryptocurrency, especially ETH, was significantly lower during the fundraiser than in 2017. Some donors have explicitly told us they're waiting for more advantageous ETH prices before supporting us again.\n\n\nMIRI's current nondisclosed-by-default research approach makes it difficult for some donors to financially support us at the moment, either because they disagree with the policy itself or because (absent more published research from MIRI) they feel like they personally lack the data they need to evaluate us against other giving opportunities. Several donors have voiced one or both of these sentiments to me, and they were cited prominently in this 2019 alignment research review.\n\n\nThe changes to US tax law in 2018 relating to individual deductions have caused some supporters to adjust the scale and timing of their donations, a trend noticed across US charitable giving in general. It's plausible that having future MIRI fundraisers straddle the new year (e.g. starting in 2020 and ending in 2021) might provide supporters with more flexibility in their giving; if you would personally find this helpful (or unhelpful), I'm interested to hear about it at .\n\n\nThis fundraiser saw MIRI involved in fewer counterfactual matching opportunities than in previous years — one compared to three in 2018 — which may have reduced the attraction for some of our more leverage-sensitive supporters this time around.\n\n\nSince MIRI's budget and size have grown a great deal over the past few years, some donors may think that we're hitting diminishing returns on marginal donations, at a rate that makes them want to look for other giving opportunities.\n\n\nThe scale of the funds received during MIRI fundraisers tends to be strongly affected by 1-4 large donors each year, a fair number of whom are one-time or sporadic donors. Since this has certainly given us higher-than-expected results on a number of past occasions, it's perhaps not surprising that such a randomness-heavy phenomenon would sometimes yield lower-than-expected support by chance. Specifically, we received no donations over $100,000 during this fundraiser, and the two donations over $50,000 were especially welcome.\n\n\nSome MIRI supporters who had been previously following an earning-to-give strategy have moved to direct work as 80,000 Hours' developing thoughts on the subject continue to influence the EA community.\n\n\nIn past years, when answering supporters' questions about the discount rate on their potential donations to MIRI, we've leaned towards a \"now > later\" approach.
This plausibly resulted in a front-loading of some donations in 2017 and 2018.\n\n\nAs always, I'm interested in hearing individual supporters' thoughts about how they personally are thinking about their giving strategy; you're welcome to shoot me an email at \nOverall, we're extremely grateful for all the support we received during this fundraiser. Although we didn't hit our target, the support we received allows us to continue pursuing the majority of our growth plans, with cash reserves of 1.2–1.4 years at the start of 2020. To everyone who contributed, thank you.\n\n\n\n\nThe exact total is still subject to change as we continue to process a small number of donations.\n\n\nThere were significantly more anonymous donors than in previous years — plausibly a result of new data protection legislation like the European Union's GDPR and California's CCPA — which we aggregate as a single donor.\n\n\nIncluding Richard Schwall, Luke Stebbing, Simon Sáfár, Laura Soares, John Davis, Cliff Hyra, Noah Topper, Gwen Murray, and Daniel Kokotajlo. Thanks, all!\n\n\n\nThe post Our 2019 Fundraiser Review appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Our 2019 Fundraiser Review", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=9", "id": "f74d6cd64d66790b60a3a9dd5c228254"} {"text": "January 2020 Newsletter\n\n\nUpdates\n\nOur 2019 fundraiser ended Dec. 31. We'll have more to say in a few weeks in our fundraiser retrospective, but for now, a big thank you to the ~240 donors who together donated more than $526,000, including $67,484 in the first 20 seconds of Giving Tuesday (not counting matching dollars, which have yet to be announced). \nJan. 15 is the final day of CFAR's annual fundraiser. CFAR also recently ran an AMA and has posted their workshop participant handbook online.\nUnderstanding \"Deep Double Descent\": MIRI researcher Evan Hubinger describes a fascinating phenomenon in ML, and an interesting case study in ML research aimed at deepening our understanding, and not just advancing capabilities. In a follow-up post, Evan also considers possible implications for alignment research.\nSafe Exploration and Corrigibility: Evan notes an important (and alignment-relevant) way that notions of exploration in deep RL have shifted.\n\"Learning Human Objectives by Evaluating Hypothetical Behavior\": UC Berkeley and DeepMind researchers \"present a method for training reinforcement learning agents from human feedback in the presence of unknown unsafe states\".\n\n\nLinks from the research team\nThis continues my experiment from last month: having MIRI researchers anonymously pick out AI Alignment Forum posts to highlight and comment on.\n\nRe (When) is Truth-telling Favored in AI debate? — \"A paper by Vojtěch Kovařík and Ryan Carey; it's good to see some progress on the debate model!\"\nRe Recent Progress in the Theory of Neural Networks — \"Noah MacAulay provides another interesting example of research attempting to explain what's going on with NNs.\"\nRe When Goodharting is optimal — \"I like Stuart Armstrong's post for the systematic examination of why we might be afraid of Goodharting. The example at the beginning is an interesting one, because it seems (to me at least) like the robot really should go back and forth (staying a long time at each side to minimize lost utility). 
But Stuart is right that this answer is, at least, quite difficult to justify.\"\nRe Seeking Power is Instrumentally Convergent in MDPs and Clarifying Power-Seeking and Instrumental Convergence — \"It's nice to finally have a formal model of this, thanks to Alex Turner and Logan Smith. Instrumental convergence has been an informal part of the discussion for a long time.\"\nRe Critiquing \"What failure looks like\" — \"I thought that Grue Slinky's post was a good critical analysis of Paul Christiano's 'going out with a whimper' scenario, highlighting some of the problems it seems to have as a concrete AI risk scenario. In particular, I found the analogy given to the simplex algorithm persuasive in terms of showcasing how, despite the fact that many of our current most powerful tools already have massive differentials in how well they work on different problems, those values which are not served well by those tools don't seem to have lost out massively as a result. I still feel like there may be a real risk along the lines of 'going out with a whimper', but I think this post presents a real challenge to that scenario as it has been described so far.\"\nRe Counterfactual Induction — \"A proposal for logical counterfactuals by Alex Appel. This could use some more careful thought and critique; it's not yet clear exactly how much or little it accomplishes.\"\nRe A dilemma for prosaic AI alignment — \"Daniel Kokotajlo outlines key challenges for prosaic alignment: '[…] Now I think the problem is substantially harder than that: To be competitive prosaic AI safety schemes must deliberately create misaligned mesa-optimizers and then (hopefully) figure out how to align them so that they can be used in the scheme.'\"\n\n\nThe post January 2020 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "January 2020 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=9", "id": "f57f324a96c241804ba346613100c33d"} {"text": "December 2019 Newsletter\n\n\nFrom now through the end of December, MIRI's 2019 Fundraiser is live! See our fundraiser post for updates on our past year and future plans.\nOne of our biggest updates, I'm happy to announce, is that we've hired five new research staff, with a sixth to join us in February. For details, see Workshops and Scaling Up in the fundraiser post.\nAlso: Facebook's Giving Tuesday matching opportunity is tomorrow at 5:00am PT! See Colm's post for details on how to get your donation matched.\n\nOther updates\n\nOur most recent hire, \"Risks from Learned Optimization\" co-author Evan Hubinger, describes what he'll be doing at MIRI. See also Nate Soares' comment on how MIRI does nondisclosure-by-default.\nBuck Shlegeris discusses EA residencies as an outreach opportunity.\nOpenAI releases Safety Gym, a set of tools and environments for incorporating safety constraints into RL tasks.\nCHAI is seeking interns; application deadline Dec. 15.\n\n\nThoughts from the research team\n\nThis month, I'm trying something new: quoting MIRI researchers' summaries and thoughts on recent AI safety write-ups.\nI've left out names so that these can be read as a snapshot of people's impressions, rather than a definitive \"Ah, researcher X believes Y!\" Just keep in mind that these will be a small slice of thoughts from staff I've recently spoken to, not anything remotely like a consensus take.\n\nRe Will transparency help catch deception? 
— \"A good discussion of an important topic. Matthew Barnett suggests that any weaknesses in a transparency tool may turn it into a detrimental middle-man, and directly training supervisors to catch deception may be preferable.\"\nRe Chris Olah's views on AGI safety — \"I very much agree with Evan Hubinger's idea that collecting different perspectives — different 'hats' — is a useful thing to do. Chris Olah's take on transparency is good to see. The concept of microscope AI seems like a useful one, and Olah's vision of how the ML field could be usefully shifted is quite interesting.\"\nRe Defining AI Wireheading — \"Stuart Armstrong takes a shot at making a principled distinction between wireheading and the rest of Goodhart.\"\nRe How common is it to have a 3+ year lead? — \"This seems like a pretty interesting question for AI progress models. The expected lead time and questions of expected takeoff speed greatly influence the extent to which winner-take-all dynamics are plausible.\"\nRe Thoughts on Implementing Corrigible Robust Alignment — \"Steve Byrnes provides a decent overview of some issues around getting 'pointer' type values.\"\n\n\nThe post December 2019 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "December 2019 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=9", "id": "2d2568f248c246bca1fd34a5af9197f1"} {"text": "MIRI's 2019 Fundraiser\n\n\nMIRI's 2019 fundraiser is concluded.\nOver the past two years, huge donor support has helped us double the size of our AI alignment research team. Hitting our $1M fundraising goal this month will put us in a great position to continue our growth in 2020 and beyond, recruiting as many brilliant minds as possible to take on what appear to us to be the central technical obstacles to alignment.\nOur fundraiser progress, updated in real time (including donations and matches made during the Facebook Giving Tuesday event):\n\n \n\n\n\n\n\n\nMIRI is a CS/math research group with a goal of understanding how to reliably \"aim\" future general-purpose AI systems at known goals. For an introduction to this research area, see Ensuring Smarter-Than-Human Intelligence Has A Positive Outcome and Risks from Learned Optimization in Advanced Machine Learning Systems. For background on how we approach the problem, see 2018 Update: Our New Research Directions and Embedded Agency.\nAt the end of 2017, we announced plans to substantially grow our research team, with a goal of hiring \"around ten new research staff over the next two years.\" Two years later, I'm happy to report that we're up eight research staff, and we have a ninth starting in February of next year, which will bring our total research team size to 20.1\nWe remain excited about our current research directions, and continue to feel that we could make progress on them more quickly by adding additional researchers and engineers to the team. 
As such, our main organizational priorities remain the same: push forward on our research directions, and grow the research team to accelerate our progress.\nWhile we're quite uncertain about how large we'll ultimately want to grow, we plan to continue growing the research team at a similar rate over the next two years, and so expect to add around ten more research staff by the end of 2021.\nOur projected budget for 2020 is $6.4M–$7.4M, with a point estimate of $6.8M,2 up from around $6M this year.3 In the mainline-growth scenario, we expect our budget to look something like this:\n\n \nLooking further ahead, since staff salaries account for the vast majority of our expenses, I expect our spending to increase proportionately year-over-year while research team growth continues to be a priority.\nGiven our $6.8M budget for 2020, and the cash we currently have on hand, raising $1M in this fundraiser will put us in a great position for 2020. Hitting $1M positions us with cash reserves of 1.25–1.5 years going into 2020, which is exactly where we want to be to support ongoing hiring efforts and to provide the confidence we need to make and stand behind our salary and other financial commitments.\nFor more details on what we've been up to this year, and our plans for 2020, read on!\n \n1. Workshops and scaling up\nIf you lived in a world that didn't know calculus, but you knew something was missing, what general practices would have maximized your probability of coming up with it?\nWhat if you didn't start off knowing something was missing? Could you and some friends have gotten together and done research in a way that put you in a good position to notice it, to ask the right questions?\nMIRI thinks that humanity is currently missing some of the core concepts and methods that AGI developers will need in order to align their systems down the road. We think we've found research paths that may help solve that problem, and good ways to rapidly improve our understanding via experiments; and we're eager to add more researchers and engineers' eyes and brains to the effort.\nA significant portion of MIRI's current work is in Haskell, and benefits from experience with functional programming and dependent type systems. More generally, if you're a programmer who loves hunting for the most appropriate abstractions to fit some use case, developing clean concepts, making and then deploying elegant combinators, or audaciously trying to answer the deepest questions in computer science—then we think you should apply to work here, get to know us at a workshop, or reach out with questions.\nAs noted above, our research team is growing fast. The latest additions to the MIRI team include:\nEvan Hubinger, a co-author on \"Risks from Learned Optimization in Advanced Machine Learning Systems\". Evan previously designed the functional programming language Coconut, was an intern at OpenAI, and has done software engineering work at Google, Yelp, and Ripple.\nJeremy Schlatter, a software engineer who previously worked at Google and OpenAI. Some of the public projects Jeremy has contributed to include OpenAI's Dota 2 bot and a debugger for the Go programming language.\nSeraphina Nix, joining MIRI in February 2020. Seraphina graduates this month from Oberlin College with a major in mathematics and minors in computer science and physics. 
She has previously done research on ultra-lightweight dark matter candidates, deep reinforcement learning, and teaching neural networks to do high school mathematics.\nRafe Kennedy, who joins MIRI after working as an independent existential risk researcher at the Effective Altruism Hotel. Rafe previously worked at the data science startup NStack, and he holds an MPhysPhil from the University of Oxford in Physics & Philosophy.\nMIRI's hires and job trials are typically drawn from our 4.5-day, all-expense-paid AI Risk for Computer Scientists (AIRCS) workshop series.\nOur workshop program is the best way we know of to bring promising talented individuals into what we think are useful trajectories towards being highly-contributing AI researchers and engineers. Having established an experience that participants love and that we believe to be highly valuable, we plan to continue experimenting with new versions of the workshop, and expect to run ten workshops over the course of 2020, up from eight this year.\nThese programs have led to a good number of new hires at MIRI as well as other AI safety organizations, and we find them valuable for everything from introducing talented outsiders to AI safety, to leveling up people who have been thinking about these issues for years.\nIf you're interested in attending, apply here. If you have any questions, we highly encourage you to shoot Buck Shlegeris an email.\nOur MIRI Summer Fellows Program plays a similar role for us, but is more targeted at mathematicians. We're considering running MSFP in a shorter format in 2020. For any questions about MSFP, email Colm Ó Riain.\n \n2. Research and write-ups\nOur 2018 strategy update continues to be a great overview of where MIRI stands today, describing how we think about our research, laying out our case for working here, and explaining why most of our work currently isn't public-facing.\nGiven the latter point, I'll focus in this section on spotlighting what we've written up this past year, providing snapshots of some of the work individuals at MIRI are currently doing (without any intended implication that this is representative of the whole), and conveying some of our current broad impressions about how our research progress is going.\nSome of our major write-ups and publications this year were:\n\n\"Risks from Learned Optimization in Advanced Machine Learning Systems,\" by Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. 
The process of generating this paper significantly clarified our own thinking, and informed Scott and Abram's discussion of subsystem alignment in \"Embedded Agency.\" Scott views \"Risks from Learned Optimization\" as being of comparable importance to \"Embedded Agency\" as exposition of key alignment difficulties, and we've been extremely happy about the new conversations and research that the field at large has produced to date in dialogue with the ideas in \"Risks from Learned Optimization\".\nThoughts on Human Models, by Scott Garrabrant and DeepMind-based MIRI Research Associate Ramana Kumar, argues that the AI alignment research community should begin prioritizing \"approaches that work well in the absence of human models.\" The role of human models in alignment plans strikes us as one of the most important issues for MIRI and other research groups to wrestle with, and we're generally interested in seeing what new approaches groups outside MIRI might come up with for leveraging AI for the common good in the absence of human models.\n\"Cheating Death in Damascus,\" by Nate Soares and Ben Levinstein. We presented this decision theory paper at the Formal Epistemology Workshop in 2017, but a lightly edited version has now been accepted to The Journal of Philosophy, previously voted the second highest-quality academic journal in philosophy.\nThe Alignment Research Field Guide, a very accessible and broadly applicable resource both for individual researchers and for groups getting off the ground.\n\nOur other recent public writing includes an Effective Altruism Forum AMA with Buck Shlegeris, Abram Demski's The Parable of Predict-O-Matic, and the many interesting outputs of the AI Alignment Writing Day we hosted toward the end of this year's MIRI Summer Fellows Program.\nTurning to our research team, last year we announced that prolific Haskell programmer Edward Kmett joined the MIRI team, freeing him up to do the thing he's passionate about—improving the state of highly reliable (and simultaneously highly efficient) programming languages. MIRI Executive Director Nate Soares views this goal as very ambitious, though he would feel better about the world if there existed programming languages that were both efficient and amenable to strong formal guarantees about their properties.\nThis year Edward moved to Berkeley to work more closely with the rest of the MIRI team. We've found it very helpful to have him around to provide ideas and contributions to our more engineering-oriented projects, helping give some amount of practical grounding to our work. Edward has also continued to be a huge help with recruiting through his connections in the functional programming and type theory world.\nMeanwhile, our newest addition, Evan Hubinger, plans to continue working on solving inner alignment for amplification. Evan has outlined his research plans on the AI Alignment Forum, noting that relaxed adversarial training is a fairly up-to-date statement of his research agenda. Scott and other researchers at MIRI consider Evan's work quite exciting, both in the context of amplification and in the context of other alignment approaches it might prove useful for.\nAbram Demski is another MIRI researcher who has written up a large number of his research thoughts over the last year. Abram reports (fuller thoughts here) that he has moved away from a traditional decision-theoretic approach this year, and is now spending more time on learning-theoretic approaches, similar to MIRI Research Associate Vanessa Kosoy. 
Quoting Abram:\nAround December 2018, I had a big update against the \"classical decision-theory\" mindset (in which learning and decision-making are viewed as separate problems), and towards taking a learning-theoretic approach. [… I have] made some attempts to communicate my update against UDT and toward learning-theoretic approaches, including this write-up. I talked to Daniel Kokotajlo about it, and he wrote The Commitment Races Problem, which I think captures a good chunk of it.\nFor her part, Vanessa's recent work includes the paper \"Delegative Reinforcement Learning: Learning to Avoid Traps with a Little Help,\" which she presented at the ICLR 2019 SafeML workshop.\nI'll note again that the above are all snapshots of particular research directions various researchers at MIRI are pursuing, and don't necessarily represent other researchers' views or focus. As Buck recently noted, MIRI has a pretty flat management structure. We pride ourselves on minimizing bureaucracy, and on respecting the ability of our research staff to form their own inside-view models of the alignment problem and of what's needed next to make progress. Nate recently expressed similar thoughts about how we do nondisclosure-by-default.\nAs a consequence, MIRI's more math-oriented research especially tends to be dictated by individual models and research taste, without the expectation that everyone will share the same view of the problem.\nRegarding his overall (very high-level) sense of how MIRI's new research directions are progressing, Nate Soares reports:\nProgress in 2019 has been slower than expected, but I have a sense of steady progress. In particular, my experience is one of steadily feeling less confused each week than the week before—of me and other researchers having difficulties that were preventing us from doing a thing we wanted to do, staring at them for hours, and then realizing that we'd been thinking wrongly about this or that, and coming away feeling markedly more like we know what's going on.\nAn example of the kind of thing that causes us to feel like we're making progress is that we'll notice, \"Aha, the right tool for thinking about all three of these apparently-dissimilar problems was order theory,\" or something along those lines; and disparate pieces of frameworks will all turn out to be the same, and the relevant frameworks will become simpler, and we'll be a little better able to think about a problem that I care about. This description is extremely abstract, but represents the flavor of what I mean by \"steady progress\" here, in the same vein as my writing last year about \"deconfusion.\"\nOur hope is that enough of this kind of progress gives us a platform from which we can generate particular exciting results on core AI alignment obstacles, and I expect to see such results reasonably soon. To date, however, I have been disappointed by the amount of time that's instead been spent on deconfusing myself and shoring up my frameworks; I previously expected to have more exciting results sooner.\nIn research of the kind we're working on, it's not uncommon for there to be years between sizeable results, though we should also expect to sometimes see cascades of surprisingly rapid progress, if we are indeed pushing in the right directions. 
My inside view of our ongoing work currently predicts that we're on a productive track and should expect to see results we are more excited about before too long.\nOur research progress, then, is slower than we had hoped, but the rate and quality of progress continues to be such that we consider this work very worthwhile, and we remain optimistic about our ability to convert further research staff hours into faster progress. At the same time, we are also (of course) looking for where our research bottlenecks are and how we can make our work more efficient, and we're continuing to look for tweaks we can make that might boost our output further.\nIf things go well over the next few years—which seems likely but far from guaranteed—we'll continue to find new ways of making progress on research threads we care a lot about, and continue finding ways to hire people to help make that happen.\nResearch staff expansion is our biggest source of expense growth, and by encouraging us to move faster on exciting hiring opportunities, donor support plays a key role in how we execute on our research agenda. Though the huge support we've received to date has put us in a solid position even at our new size, further donor support is a big help for us in continuing to grow. If you want to play a part in that, thank you.\n\n\n\nDonate Now\n\n\n\nThis number includes a new staff member who is currently doing a 6-month trial with us.These estimates were generated using a model similar to the one I used last year. For more details see our 2018 fundraiser post.This falls outside the $4.4M–$5.5M range I estimated in our 2018 fundraiser post, but is in line with the higher end of revised estimates we made internally in Q1 2019.The post MIRI's 2019 Fundraiser appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "MIRI’s 2019 Fundraiser", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=9", "id": "767b8a7da322c227513c53088dfc5051"} {"text": "Giving Tuesday 2019\n\nUpdate January 25, 2020: $77,325 was donated to MIRI through Facebook on Giving Tuesday. $45,915 of this was donated within 13.5 seconds of the Facebook matching event starting at 5:00AM PT and was matched by Facebook. Thank you to everybody who set their clocks early to support us so generously! Shout out too to the EA Giving Tuesday and Rethink Charity team for their amazing efforts on behalf of the EA Community. \n\n\n\n\n\n\nUpdate December 2, 2019: This page has been updated to reflect (a) observed changes in Facebook's flow for donations of $500 or larger (b) additional information on securing matching for donations of $2,500 or larger during Facebook's matching event and (c) a pointer to Paypal's newly announced, though significantly smaller, matching event(s). Please check in here for more updates before the Facebook Matching event begins at 5am PT on December 3. \n\n\nMIRI's annual fundraiser begins this Monday, December 2, 2019 and Giving Tuesday takes place the next day; starting at 5:00:00am PT (8:00:00am ET) on December 3, Facebook will match donations made on fundraiser pages on their platform up to a total of $7,000,000. This post focuses on this Facebook matching event. (You can find information on Paypal's significantly smaller matching events in the footnotes.1)\n\n\nDonations during Facebook's Giving Tuesday event will be matched dollar for dollar on a first-come, first-served basis until the $7,000,000 in matching funds are used up. 
Based on trends in previous years, this will probably occur within 10 seconds.\nAny US-based 501(c)(3) nonprofit eligible to receive donations on Facebook, e.g. MIRI, can be matched.\nFacebook will match up to a total of $100,000 per nonprofit organization.\nEach donor can have up to $20,000 in eligible donations matched on Giving Tuesday. There is a default limit of $2,499 per donation. Donors who wish to donate more than $2,499 have multiple strategies to choose from (below) to increase the chances of their donations being matched.\n\n In 2018, Facebook's matching pool of $7M was exhausted within 16 seconds of the event starting and in that time, 66% of our lightning-fast donors got their donations to MIRI matched, securing a total of $40,072 in matching funds. This year, we're aiming for the per-organization $100,000 maximum and since it's highly plausible that this year's matching event will end within 4-10 seconds, here are some tips to improve the chances of your donation to MIRI's Fundraiser Page on Facebook being matched. \nPre-Event Preparation (before Giving Tuesday)\n\nConfirm your FB account is operational.\nAdd your preferred credit card(s) as payment method(s) in your FB settings page. Credit cards are plausibly mildly preferable to Paypal as a payment option in terms of donation speed.\nTest your payment method(s) ahead of time by donating small amount(s) to MIRI's Fundraiser page. \nIf your credit card limit is lower than the amount you're considering donating, it may be possible to (a) overpay the balance ahead of time and/or (b) call your credit card asking them to (even temporarily) increase your limit. \nIf you plan to donate more than $2,499, see below for instructions.\nSync whatever clock you'll be using with time.is.\nConsider pledging your donation to MIRI at EA Giving Tuesday.2\n\n\n\nDonating on Giving Tuesday\nOn Tuesday, December 3, BEFORE 5:00:00am PT — it's advisable to be alert and ready 10-20 minutes before the event — prepare your donation, so you can make your donation with a single click when the event begins at 5:00:00am PT.\n\n\n\nOpen an accurate clock at time.is.\nIn a different browser window alongside, open MIRI's Fundraiser Page on Facebook in your browser.\nClick on the Donate button. \nIn the \"Donate\" popup that surfaces:\n\nEnter your donation amount — between $5 and $2,499. See below for larger donations.\nChoose whichever card you're using for your donation.\nOptionally enter a note and/or adjust the donation visibility.\n\nAt 05:00:00 PST, click on the Green Donate button. If your donation amount is $500 or larger, you may be presented with an additional \"Confirm Your Donation\" dialog. If so, click on its Donate button as quickly as possible. \n\n \n\n\n\nLarger Donations\nBy default, Facebook places a limit of $2,499 per donation (in the US3), and will match up to $20,000 per donor. If you're in a position to donate $2,500 or more to MIRI, you can:\n\nUse multiple browser windows/tabs for each individual donation: open up MIRI's Fundraiser Page on Facebook in as many tabs as needed in your browser and follow the instructions above in each window/tab so you have multiple Donate buttons ready to click, one in each tab. 
Then at 5:00:00 PT, channel your lightning and click as fast as you can — one of our star supporters last year made 8 donations within 21 seconds, 5 of which were matched.\n\nand/or\nBefore the event — ideally not the morning of — follow EA Giving Tuesday's instructions on how to increase your per-donation limit on Facebook above $2,499. Our friends at EA Giving Tuesday estimate that \"you are likely to be able to successfully donate up to $9,999 per donation\" after following these instructions. Their analysis also suggests that going higher than $10,000 for an individual donation plausibly significantly increases the probability of being declined and therefore advise not going beyond $9,999 per donation. It is possible that Facebook may put a cap of $2,499 on individual donations closer to the event.\n\n\nUsing a combination of the above, a generous supporter could, for example, make 2 donations of $9,999 each — in separate browser windows — within seconds of the event starting.\n \n\n\n\n\n\n\n\n\n\nPaypal has 3 separate matching events on Giving Tuesday — all of which add 10% to eligible donations — for the USA, Canada, and the UK.\nThanks to Ari, William, Rethink Charity and all at EA Giving Tuesday for their work to help EA organizations maximize their share of Facebook's matching funds.\nFor up-to-date information on Facebook's donation limits outside the US, check out EA Giving Tuesday's doc.\n\n\nThe post Giving Tuesday 2019 appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Giving Tuesday 2019", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=9", "id": "8d4e7d4c4a42016633e230dd88029cd4"} {"text": "November 2019 Newsletter\n\n\nI'm happy to announce that Nate Soares and Ben Levinstein's \"Cheating Death in Damascus\" has been accepted for publication in The Journal of Philosophy (previously voted the second-highest-quality journal in philosophy).\nIn other news, MIRI researcher Buck Shlegeris has written over 12,000 words on a variety of MIRI-relevant topics in an EA Forum AMA. (Example topics: advice for software engineers; what alignment plans tend to look like; and decision theory.)\nOther updates\n\nAbram Demski's The Parable of Predict-O-Matic is a great read: the predictor/optimizer issues it covers are deep, but I expect a fairly wide range of readers to enjoy it and get something out of it.\nEvan Hubinger's Gradient Hacking describes an important failure mode that hadn't previously been articulated.\nVanessa Kosoy's LessWrong shortform has recently discussed some especially interesting topics related to her learning-theoretic agenda.\nStuart Armstrong's All I Know Is Goodhart constitutes nice conceptual progress on expected value maximizers that are aware of Goodhart's law and trying to avoid it.\nReddy, Dragan, and Levine's paper on modeling human intent cites (of all things) Harry Potter and the Methods of Rationality as inspiration. \n\nNews and links\n\nArtificial Intelligence Research Needs Responsible Publication Norms: Crootof provides a good review of the issue on Lawfare.\nStuart Russell's new book is out: Human Compatible: Artificial Intelligence and the Problem of Control (excerpt). 
Rohin Shah's review does an excellent job of contextualizing Russell's views within the larger AI safety ecosystem, and Rohin highlights the quote:\nThe task is, fortunately, not the following: given a machine that possesses a high degree of intelligence, work out how to control it. If that were the task, we would be toast. A machine viewed as a black box, a fait accompli, might as well have arrived from outer space. And our chances of controlling a superintelligent entity from outer space are roughly zero. Similar arguments apply to methods of creating AI systems that guarantee we won't understand how they work; these methods include whole-brain emulation — creating souped-up electronic copies of human brains — as well as methods based on simulated evolution of programs. I won't say more about these proposals because they are so obviously a bad idea.\n\nJacob Steinhardt releases an AI Alignment Research Overview.\nPatrick LaVictoire's AlphaStar: Impressive for RL Progress, Not for AGI Progress raises some important questions about how capable today's state-of-the-art systems are.\n\n\nThe post November 2019 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "November 2019 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=10", "id": "3078dbb84072f6bc53e418a0a569eb1d"} {"text": "October 2019 Newsletter\n\n\nUpdates\n\nBen Pace summarizes a second round of AI Alignment Writing Day posts.\nThe Zettelkasten Method: MIRI researcher Abram Demski describes a note-taking system that's had a large positive effect on his research productivity.\nWill MacAskill writes a detailed critique of functional decision theory; Abram Demski (1, 2) and Matthew Graves respond in the comments.\n\nNews and links\n\nRecent AI alignment posts: Evan Hubinger asks \"Are minimal circuits deceptive?\", Paul Christiano describes the strategy-stealing assumption, and Wei Dai lists his resolved confusions about Iterated Distillation and Amplification. See also Rohin Shah's comparison of recursive approaches to AI alignment.\nAlso on LessWrong: A Debate on Instrumental Convergence Between LeCun, Russell, Bengio, Zador, and More.\nFHI's Ben Garfinkel and Allan Dafoe argue that conflicts between nations tend to exhibit \"offensive-then-defensive scaling\".\nOpenAI releases a follow-up report on GPT-2, noting that several groups \"have explicitly adopted similar staged release approaches\" to OpenAI.\nNVIDIA Applied Deep Learning Research has trained a model that appears to essentially replicate GPT-2, with 5.6x as many parameters, slightly better WikiText perplexity, and slightly worse LAMBADA accuracy. 
The group has elected to share their training and evaluation code, but not the model weights.\nOpenAI fine-tunes GPT-2 for text continuation and summarization tasks that incorporate human feedback, noting, \"Our motivation is to move safety techniques closer to the general task of 'machines talking to humans,' which we believe is key to extracting information about human values.\"\n\n\nThe post October 2019 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "October 2019 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=10", "id": "958e519e942ffc4da3166bf05678b334"} {"text": "September 2019 Newsletter\n\n\nUpdates\n\nWe ran a very successful MIRI Summer Fellows Program, which included a day where participants publicly wrote up their thoughts on various AI safety topics. See Ben Pace's first post in a series of roundups.\nA few highlights from the writing day: Adele Lopez's Optimization Provenance; Daniel Kokotajlo's Soft Takeoff Can Still Lead to Decisive Strategic Advantage and The \"Commitment Races\" Problem; Evan Hubinger's Towards a Mechanistic Understanding of Corrigibility; and John Wentworth's Markets are Universal for Logical Induction and Embedded Agency via Abstraction.\nNew posts from MIRI staff and interns: Abram Demski's Troll Bridge; Matthew Graves' View on Factored Cognition; Daniel Filan's Verification and Transparency; and Scott Garrabrant's Intentional Bucket Errors and Does Agent-like Behavior Imply Agent-like Architecture?\nSee also a forum discussion on \"proof-level guarantees\" in AI safety.\n\nNews and links\n\nFrom Ben Cottier and Rohin Shah: Clarifying Some Key Hypotheses in AI Alignment\nClassifying Specification Problems as Variants of Goodhart's Law: Victoria Krakovna and Ramana Kumar relate DeepMind's SRA taxonomy to mesa-optimizers, selection and control, and Scott Garrabrant's Goodhart taxonomy. Also new from DeepMind: Ramana, Tom Everitt, and Marcus Hutter's Designing Agent Incentives to Avoid Reward Tampering.\nFrom OpenAI: Testing Robustness Against Unforeseen Adversaries. 
80,000 Hours also recently interviewed OpenAI's Paul Christiano, with some additional material on decision theory.\nFrom AI Impacts: Evidence Against Current Methods Leading to Human-Level AI and Ernie Davis on the Landscape of AI Risks\nFrom Wei Dai: Problems in AI Alignment That Philosophers Could Potentially Contribute To\nRichard Möhn has put together a calendar of upcoming AI alignment events.\nThe Berkeley Existential Risk Initiative is seeking an Operations Manager.\n\n\nThe post September 2019 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "September 2019 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=10", "id": "e2e7f745cde981d26129b6a04e18b36c"} {"text": "August 2019 Newsletter\n\n\nUpdates\n\nMIRI research associate Stuart Armstrong is offering $1000 for good questions to ask an Oracle AI.\nRecent AI safety posts from Stuart: Indifference: Multiple Changes, Multiple Agents; Intertheoretic Utility Comparison: Examples; Normalising Utility as Willingness to Pay; and Partial Preferences Revisited.\nMIRI researcher Buck Shlegeris has put together a quick and informal AI safety reading list.\nThere's No Fire Alarm for AGI reports on a researcher's January 2017 prediction that \"in the next two years, we will not get 80, 90%\" on Winograd schemas, an NLP test. Although this prediction was correct, researchers at Microsoft, Carnegie Mellon and Google Brain, and Facebook have now (2.5 years later) achieved Winograd scores of 89.0 and 90.4.\nOrtega et al.'s \"Meta-Learning of Sequential Strategies\" includes a discussion of mesa-optimization, independent of Hubinger et al.'s \"Risks from Learned Optimization in Advanced Machine Learning Systems,\" under the heading of \"spontaneous meta-learning.\"\n\nNews and links\n\nWei Dai outlines forum participation as a research strategy.\nOn a related note, the posts on the AI Alignment Forum this month were very good — I'll spotlight them all this time around. Dai wrote on the purposes of decision theory research; Shah on learning biases and rewards simultaneously; Kovarik on AI safety debate and its applications; Steiner on the Armstrong agenda and the intentional stance; Trazzi on manipulative AI; Cohen on IRL and imitation; and Manheim on optimizing and Goodhart effects (1, 2, 3).\nJade Leung discusses AI governance on the AI Alignment Podcast.\nCMU and Facebook researchers' Pluribus program beats human poker professionals ⁠— using only $144 in compute. The developers also choose not to release the code: \"Because poker is played commercially, the risk associated with releasing the code outweighs the benefits.\"\nMicrosoft invests $1 billion in OpenAI. From Microsoft's press release: \"Through this partnership, the companies will accelerate breakthroughs in AI and power OpenAI's efforts to create artificial general intelligence (AGI).\" OpenAI has also released a paper on \"The Role of Cooperation in Responsible AI Development.\"\nOught has a new preferred introduction to their work. See also Paul Christiano's Ought: Why it Matters and Ways to Help.\nFHI has 11 open research positions; applications are due by Aug. 16. You can also apply to CSER's AGI risk research associate position through Aug. 
26.\n\n\nThe post August 2019 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "August 2019 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=10", "id": "a89d7c92afa173427f246947a33fcb93"} {"text": "July 2019 Newsletter\n\n\nHubinger et al.'s \"Risks from Learned Optimization in Advanced Machine Learning Systems\", one of our new core resources on the alignment problem, is now available on arXiv, the AI Alignment Forum, and LessWrong.\nIn other news, we received an Ethereum donation worth $230,910 from Vitalik Buterin — the inventor and co-founder of Ethereum, and now our third-largest all-time supporter!\nAlso worth highlighting, from the Open Philanthropy Project's Claire Zabel and Luke Muehlhauser: there's a pressing need for security professionals in AI safety and biosecurity.\n \nIt's more likely than not that within 10 years, there will be dozens of GCR-focused roles in information security, and some organizations are already looking for candidates that fit their needs (and would hire them now, if they found them).\nIt's plausible that some people focused on high-impact careers (as many effective altruists are) would be well-suited to helping meet this need by gaining infosec expertise and experience and then moving into work at the relevant organizations.\n \nOther updates\n\nMesa Optimization: What It Is, And Why We Should Care — Rohin Shah's consistently excellent Alignment Newsletter discusses \"Risks from Learned Optimization…\" and other recent AI safety work.\nMIRI Research Associate Stuart Armstrong releases his Research Agenda v0.9: Synthesising a Human's Preferences into a Utility Function.\nOpenAI and MIRI staff help talk Munich student Connor Leahy out of releasing an attempted replication of OpenAI's GPT-2 model. (LessWrong discussion.) Although Leahy's replication attempt wasn't successful, write-ups like his suggest that OpenAI's careful discussion surrounding GPT-2 is continuing to prompt good reassessments of publishing norms within ML. Quoting from Leahy's postmortem:\n \nSometime in the future we will have reached a point where the consequences of our research are beyond what we can discover in a one-week evaluation cycle. And given my recent experiences with GPT2, we might already be there. The more complex and powerful our technology becomes, the more time we should be willing to spend in evaluating its consequences. And if we have doubts about safety, we should default to caution.\n\tWe tend to live in an ever accelerating world. Both the industrial and academic R&D cycles have grown only faster over the decades. Everyone wants \"the next big thing\" as fast as possible. And with the way our culture is now, it can be hard to resist the pressures to adapt to this accelerating pace. Your career can depend on being the first to publish a result, as can your market share.\n\tWe as a community and society need to combat this trend, and create a healthy cultural environment that allows researchers to take their time. They shouldn't have to fear repercussions or ridicule for delaying release. Postponing a release because of added evaluation should be the norm rather than the exception. We need to make it commonly accepted that we as a community respect others' safety concerns and don't penalize them for having such concerns, even if they ultimately turn out to be wrong. 
If we don't do this, it will be a race to the bottom in terms of safety precautions.\n \n\nFrom Abram Demski: Selection vs. Control; Does Bayes Beat Goodhart?; and Conceptual Problems with Updateless Decision Theory and Policy Selection\nVox's Future Perfect Podcast interviews Jaan Tallinn and discusses MIRI's role in originating and propagating AI safety memes.\nThe AI Does Not Hate You, journalist Tom Chivers' well-researched book about the rationality community and AI risk, is out in the UK.\n\nNews and links\n\nOther recent AI safety write-ups: David Krueger's Let's Talk About \"Convergent Rationality\"; Paul Christiano's Aligning a Toy Model of Optimization; and Owain Evans, William Saunders, and Andreas Stuhlmüller's Machine Learning Projects on Iterated Distillation and Amplification\nFrom DeepMind: Vishal Maini puts together an AI reading list, Victoria Krakovna recaps the ICLR Safe ML workshop, and Pushmeet Kohli discusses AI safety on the 80,000 Hours Podcast.\nThe EA Foundation is awarding grants for \"efforts to reduce risks of astronomical suffering (s-risks) from advanced artificial intelligence\"; apply by Aug. 11.\nAdditionally, if you're a young AI safety researcher (with a PhD) based at a European university or nonprofit, you may want to apply for ~$60,000 in funding from the Bosch Center for AI.\n\n\nThe post July 2019 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "July 2019 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=10", "id": "1f9185a39895ba6d805923dbad42c8f8"} {"text": "New paper: \"Risks from learned optimization\"\n\nEvan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant have a new paper out: \"Risks from learned optimization in advanced machine learning systems.\"\nThe paper's abstract:\nWe analyze the type of learned optimization that occurs when a learned model (such as a neural network) is itself an optimizer—a situation we refer to as mesa-optimization, a neologism we introduce in this paper.\nWe believe that the possibility of mesa-optimization raises two important questions for the safety and transparency of advanced machine learning systems. First, under what circumstances will learned models be optimizers, including when they should not be? Second, when a learned model is an optimizer, what will its objective be—how will it differ from the loss function it was trained under—and how can it be aligned? In this paper, we provide an in-depth analysis of these two primary questions and provide an overview of topics for future research.\nThe critical distinction presented in the paper is between what an AI system is optimized to do (its base objective) and what it actually ends up optimizing for (its mesa-objective), if it optimizes for anything at all. The authors are interested in when ML models will end up optimizing for something, as well as how the objective an ML model ends up optimizing for compares to the objective it was selected to achieve.\nThe distinction between the objective a system is selected to achieve and the objective it actually optimizes for isn't new. Eliezer Yudkowsky has previously raised similar concerns in his discussion of optimization daemons, and Paul Christiano has discussed such concerns in \"What failure looks like.\"\nThe paper's contents have also been released this week as a sequence on the AI Alignment Forum, cross-posted to LessWrong. 
As the authors note there:\nWe believe that this sequence presents the most thorough analysis of these questions that has been conducted to date. In particular, we plan to present not only an introduction to the basic concerns surrounding mesa-optimizers, but also an analysis of the particular aspects of an AI system that we believe are likely to make the problems related to mesa-optimization relatively easier or harder to solve. By providing a framework for understanding the degree to which different AI systems are likely to be robust to misaligned mesa-optimization, we hope to start a discussion about the best ways of structuring machine learning systems to solve these problems.\nFurthermore, in the fourth post we will provide what we think is the most detailed analysis yet of a problem we refer to as deceptive alignment, which we posit may present one of the largest—though not necessarily insurmountable—current obstacles to producing safe advanced machine learning systems using techniques similar to modern machine learning.\nThe post New paper: \"Risks from learned optimization\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New paper: “Risks from learned optimization”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=10", "id": "79735886dad0f2b5c1f67b753e30e008"} {"text": "June 2019 Newsletter\n\n\nEvan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant have released the first two (of five) posts on \"mesa-optimization\":\nThe goal of this sequence is to analyze the type of learned optimization that occurs when a learned model (such as a neural network) is itself an optimizer—a situation we refer to as mesa-optimization.\nWe believe that the possibility of mesa-optimization raises two important questions for the safety and transparency of advanced machine learning systems. First, under what circumstances will learned models be optimizers, including when they should not be? Second, when a learned model is an optimizer, what will its objective be—how will it differ from the loss function it was trained under—and how can it be aligned?\nThe sequence begins with Risks from Learned Optimization: Introduction and continues with Conditions for Mesa-Optimization. (LessWrong mirror.)\nOther updates\n\nNew research posts: Nash Equilibria Can Be Arbitrarily Bad; Self-Confirming Predictions Can Be Arbitrarily Bad; And the AI Would Have Got Away With It Too, If…; Uncertainty Versus Fuzziness Versus Extrapolation Desiderata\nWe've released our annual review for 2018.\nApplications are open for two AI safety events at the EA Hotel in Blackpool, England: the Learning-By-Doing AI Safety Workshop (Aug. 16-19), and the Technical AI Safety Unconference (Aug. 
22-25).\nA discussion of takeoff speed, including some very incomplete and high-level MIRI comments.\n\nNews and links\n\nOther recent AI safety posts: Tom Sittler's A Shift in Arguments for AI Risk and Wei Dai's \"UDT2\" and \"against UD+ASSA\".\nTalks from the SafeML ICLR workshop are now available online.\nFrom OpenAI: \"We're implementing two mechanisms to responsibly publish GPT-2 and hopefully future releases: staged release and partnership-based sharing.\"\nFHI's Jade Leung argues that \"states are ill-equipped to lead at the formative stages of an AI governance regime,\" and that \"private AI labs are best-placed to lead on AI governance\".\n\n\nThe post June 2019 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "June 2019 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=11", "id": "96387d4f62062d324cb8510276f79f51"} {"text": "2018 in review\n\nOur primary focus at MIRI in 2018 was twofold: research—as always!—and growth.\nThanks to the incredible support we received from donors the previous year, in 2018 we were able to aggressively pursue the plans detailed in our 2017 fundraiser post. The most notable goal we set was to \"grow big and grow fast,\" as our new research directions benefit a lot more from a larger team, and require skills that are a lot easier to hire for. To that end, we set a target of adding 10 new research staff by the end of 2019.\n2018 therefore saw us accelerate the work we started in 2017, investing more in recruitment and shoring up the foundations needed for our ongoing growth. Since our 2017 fundraiser post, we're up 3 new research staff, including noted Haskell developer Edward Kmett. I now think that we're most likely to hit 6–8 hires by the end of 2019, though hitting 9–10 still seems quite possible to me, as we are still engaging with many promising candidates, and continue to meet more.\nOverall, 2018 was a great year for MIRI. Our research continued apace, and our recruitment efforts increasingly paid out dividends.\n\nBelow I'll elaborate on our:\n\nresearch progress and outputs,\nresearch program support activities, including more details on our recruitment efforts,\noutreach related activities, and\nfundraising and spending.\n\n2018 Research\nOur 2018 update discussed the new research directions we're pursuing, and the nondisclosure-by-default policy we've adopted for our research overall. As described in the post, these new directions aim at deconfusion (similar to our traditional research programs, which we continue to pursue), and include the themes of \"seeking entirely new low-level foundations for optimization,\" \"endeavoring to figure out parts of cognition that can be very transparent as cognition,\" and \"experimenting with some [relatively deep] alignment problems,\" and require building software systems and infrastructure.\nIn 2018, our progress on these new directions and the supporting infrastructure was steady and significant, in line with our high expectations, albeit proceeding significantly slower than we'd like, due in part to the usual difficulties associated with software development. On the whole, our excitement about these new directions is high, and we remain very eager to expand the team to accelerate our progress.\nIn parallel, Agent Foundations work continued to be a priority at MIRI. 
Our biggest publication on this front was \"Embedded Agency,\" co-written by MIRI researchers Scott Garrabrant and Abram Demski. \"Embedded Agency\" reframes our Agent Foundations research agenda as different angles of attack on a single central difficulty: we don't know how to characterize good reasoning and decision-making for agents embedded in their environment.\nBelow are notable technical results and analyses we released in each research category last year.1 These are accompanied by predictions made last year by Scott Garrabrant, the research lead for MIRI's Agent Foundations work, and Scott's assessment of the progress our published work represents against those predictions. The research categories below are explained in detail in \"Embedded Agency.\"\nThe actual share of MIRI's research that was non-public in 2018 ended up being larger than Scott expected when he registered his predictions. The list below is best thought of as a collection of interesting (though not groundbreaking) results and analyses that demonstrate the flavor of some of the directions we explored in our research last year. As such, these assessments don't represent our model of our overall progress, and aren't intended to be a good proxy for that question. Given the difficulty of predicting what we'll disclose for our 2019 public-facing results, we won't register new predictions this year.\nDecision theory\n\nPredicted progress: 3 (modest)\nActual progress: 2 (weak-to-modest)\n\nScott sees our largest public decision theory result of 2018 as Prisoners' Dilemma with Costs to Modeling, a modified version of open-source prisoners' dilemmas in which agents must pay resources in order to model each other.\nOther significant write-ups include:\n\nLogical Inductors Converge to Correlated Equilibria (Kinda): A game-theoretic analysis of logical inductors.\nNew results in Asymptotic Decision Theory and When EDT=CDT, ADT Does Well represent incremental progress on understanding what's possible with respect to learning the right counterfactuals.\n\nAdditional decision theory research posts from 2018:\n\nFrom Alex Appel, a MIRI contractor and summer intern: (a) Distributed Cooperation; (b) Cooperative Oracles; (c) When EDT=CDT, ADT Does Well; (d) Conditional Oracle EDT Equilibria in Games\nFrom Abram Demski: (a) In Logical Time, All Games are Iterated Games; (b) A Rationality Condition for CDT Is That It Equal EDT (Part 1); (c) A Rationality Condition for CDT Is That It Equal EDT (Part 2)\nFrom Scott Garrabrant: (a) Knowledge is Freedom; (b) Counterfactual Mugging Poker Game; (c) (A → B) → A\nFrom Alex Mennen, a MIRI summer intern: When Wishful Thinking Works\n\nEmbedded world-models\n\nPredicted progress: 3 (modest)\nActual progress: 1 (limited)\n\nSome of our relatively significant results related to embedded world-models included:\n\nSam Eisenstat's untrollable prior, explained in illustrated form by Abram Demski, shows that there is a Bayesian solution to one of the basic problems which motivated the development of non-Bayesian logical uncertainty tools (culminating in logical induction). This informs our picture of what's possible, and may lead to further progress in the direction of Bayesian logical uncertainty.\nSam Eisenstat and Tsvi Benson-Tilsen's formulation of Bayesian logical induction. 
This framework, which has yet to be written up, forces logical induction into a Bayesian framework by constructing a Bayesian prior which trusts the beliefs of a logical inductor (which must supply those beliefs to the Bayesian regularly).\n\nSam and Tsvi's work can be viewed as evidence that \"true\" Bayesian logical induction is possible. However, it can also be viewed as a demonstration that we have to be careful what we mean by \"Bayesian\"—the solution is arguably cheating, and it isn't clear that you get any new desirable properties by doing things this way.\nScott assigns the untrollable prior result a 2 (weak-to-modest progress) rather than a 1 (limited progress), but is counting this among our 2017 results, since it was written up in 2018 but produced in 2017.\nOther recent work in this category includes:\n\nFrom Alex Appel: (a) Resource-Limited Reflective Oracles; (b) Bounded Oracle Induction\nFrom Abram Demski: (a) Toward a New Technical Explanation of Technical Explanation; (b) Probability is Real, and Value is Complex\n\nRobust delegation\n\nPredicted progress: 2 (weak-to-modest)\nActual progress: 1 (limited)\n\nOur most significant 2018 public result in this category is perhaps Sam Eisenstat's logical inductor tiling result, which solves a version of the tiling problem for logically uncertain agents.2\nOther posts on robust delegation:\n\nFrom Stuart Armstrong (MIRI Research Associate): (a) Standard ML Oracles vs. Counterfactual Ones; (b) \"Occam's Razor is Insufficient to Infer the Preferences of Irrational Agents\"\nFrom Abram Demski: Stable Pointers to Value II: Environmental Goals\nFrom Scott Garrabrant: Optimization Amplifies\nFrom Vanessa Kosoy (MIRI Research Associate): (a) Quantilal Control for Finite Markov Decision Processes; (b) Computing An Exact Quantilal Policy\nFrom Alex Mennen: Safely and Usefully Spectating on AIs Optimizing Over Toy Worlds\n\nSubsystem alignment\n\nPredicted progress: 2 (weak-to-modest)\nActual progress: 2\n\nWe achieved greater clarity on subsystem alignment in 2018, largely reflected in Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse,3 and Scott Garrabrant's forthcoming paper, \"Risks from Learned Optimization in Advanced Machine Learning Systems.\"4 This paper is currently being rolled out on the AI Alignment Forum, as a sequence on \"Mesa-Optimization.\"5\nScott Garrabrant's Robustness to Scale also discusses issues in subsystem alignment (\"robustness to relative scale\"), alongside other issues in AI alignment.\nOther\n\nPredicted progress: 2 (weak-to-modest)\nActual progress: 2\n\nSome of the 2018 publications we expect to be most useful cut across all of the above categories:\n\n\"Embedded Agency,\" Scott and Abram's new introduction to all of the above research directions.\nFixed Point Exercises, a set of exercises created by Scott to introduce people to the core ideas and tools in agent foundations research.\n\nHere, other noteworthy posts include:\n\nFrom Scott Garrabrant: (a) Sources of Intuitions and Data on AGI; (b) History of the Development of Logical Induction\n\n2018 Research Program Support\nWe added three new research staff to the team in 2018: Ben Weinstein-Raun, James Payor, and Edward Kmett.\nWe invested a large share of our capacity into growing the research team in 2018, and generally into activities aimed at increasing the amount of alignment research in the world, including:\n\nRunning eight AI Risk for Computer Scientist (AIRCS) workshops. 
This is an ongoing all-expenses-paid workshop series for computer scientists and programmers who want to get started thinking about or working on AI alignment. At these workshops, we introduce AI risk and related concepts, share some CFAR-style rationality content, and introduce participants to the work done by MIRI and other safety research teams. Our overall aim is to cause good discussions to happen, improve participants' ability to make progress on whether and how to contribute, and in the process work out whether they may be interested in joining MIRI or other alignment groups. Of 2018 workshop participants, we saw one join MIRI full-time, four take on internships with us, and on the order of ten with good prospects of joining MIRI within a year, in addition to several who have since joined other safety-related organizations.\nRunning a 2.5-week AI Summer Fellows Program (AISFP) with CFAR.6 Additionally, MIRI researcher Tsvi Benson-Tilsen and MIRI summer intern Alex Zhu ran a mid-year AI safety retreat for MIT students and alumni.\nRunning a 10-week research internship program over the summer, reviewed in our summer updates. Interns also participated in AISFP and in a joint research workshop with interns from the Center for Human-Compatible AI. Additionally, we hosted three more research interns later in the year. We are hopeful that at least one of them will join the team in 2019.\nMaking grants to two individuals as part of our AI Safety Retraining Program. In 2018 we received $150k in restricted funding from the Open Philanthropy Project, \"to provide stipends and guidance to a few highly technically skilled individuals. The goal of the program is to free up 3–6 months of time for strong candidates to spend on retraining, so that they can potentially transition to full-time work on AI alignment.\" We issued grants to two people in 2018, including Carroll Wainwright who went on to become a Research Scientist at Partnership on AI.\n\nIn addition to the above, in 2018 we:\n\nHired additional operations staff to ensure we have the required operational capacity to support our continued growth.\nMoved into new larger office space.\n\n2018 Outreach and Exposition\nOn the outreach, coordination, and exposition front, we:\n\nReleased a new edition of Rationality: From AI to Zombies, beginning with volumes one and two, featuring a number of updates to the text and an official print edition. 
We also made Stuart Armstrong's 2014 book on AI risk, Smarter Than Us: The Rise of Machine Intelligence, available on the web for free at smarterthan.us.\nReleased 2018 Update: Our New Research Directions, a lengthy discussion of our research, our nondisclosure-by-default policies, and the case for computer scientists and software engineers to apply to join our team.\nProduced other expository writing: Two Clarifications About \"Strategic Background\"; Challenges to Paul Christiano's Capability Amplification Proposal (discussion on LessWrong, including follow-up conversations); Comment on Decision Theory; The Rocket Alignment Problem (LessWrong link).\nReceived press coverage in Axios, Forbes, Gizmodo, and Vox (1, 2), and were interviewed in Nautilus and on Sam Harris' podcast.\nSpoke at Effective Altruism Global in San Francisco and at the Human-Aligned AI Summer School in Prague.\nPresented on logical induction at the joint Applied Theory Workshop / Workshop in Economic Theory.\nReleased a paper, \"Categorizing Variants of Goodhart's Law,\" based on Scott Garrabrant's 2017 \"Goodhart Taxonomy.\" We also reprinted Nate Soares' \"The Value Learning Problem\" and Nick Bostrom and Eliezer Yudkowsky's \"The Ethics of Artificial Intelligence\" in Artificial Intelligence Safety and Security.\nSeveral MIRI researchers also received recognition from the AI Alignment Prize, including Scott Garrabrant receiving first place and second place in the first round and second round, respectively, MIRI Research Associate Vanessa Kosoy winning first prize in the third round, and Scott and Abram Demski tying Alex Turner for first place in the fourth round.\nMIRI senior staff also participated in AI research and strategy events and conversations throughout the year.\n\n2018 Finances\nFundraising\n2018 was another strong year for MIRI's fundraising. While the total raised of just over $5.1M was a 12% drop from the amount raised in 2017, the graph below shows that our strong growth trend continued—with 2017, as I surmised in last year's review, looking like an outlier year driven by the large influx of cryptocurrency contributions during a market high in December 2017.7\n\n\n(In this chart and those that follow, \"Unlapsed\" indicates contributions from past supporters who did not donate in the previous year.)\nHighlights include:\n\n$1.02M, our largest ever single donation by an individual, from \"Anonymous Ethereum Investor #2,\" based in Canada, made through Rethink Charity Forward's recently established tax-advantaged fund for Canadian MIRI supporters.8\n$1.4M in grants from the Open Philanthropy Project, $1.25M in general support and $150k for our AI Safety Retraining Program.\n$951k during our annual fundraiser, driven in large part by MIRI supporters' participation in multiple matching campaigns during the fundraiser, including WeTrust Spring's Ethereum-matching campaign, Facebook's Giving Tuesday event, and in partnership with Raising for Effective Giving (REG), professional poker players' Double Up Drive.\n$529k from 2 grants recommended by the EA Funds Long-Term Future Fund.\n$115K from Poker Stars, also through REG.\n\nIn 2018, we received contributions from 637 unique contributors, 16% less than in 2017. 
This drop was largely driven by a 27% reduction in the number of new donors, partly offset by the continuing trend of steady growth in the number of returning donors9:\n\n\n\n\nDonations of cryptocurrency were down in 2018 both in absolute terms (-$1.2M in value) and as a percentage of total contributions (23% compared to 42% in 2017). It's plausible that if cryptocurrency values continue to rebound in 2019, we may see this trend reversed.\nIn 2017, donations received from matching initiatives dramatically increased with almost a five-fold increase over the previous year. In 2018, our inclusion in two different REG-administered matching challenges, a significantly increased engagement among MIRI supporters with Facebook's Giving Tuesday, and MIRI's winning success in WeTrust's Spring campaign, offset a small decrease in corporate match dollars to improve slightly on 2017's matching total. The following graph represents the matching amounts received over the last 5 years:\n\n\nSpending\nIn our 2017 fundraiser post, I projected that we'd spend ~$2.8M in 2018. Towards the end of last year, I revised our estimate:\nFollowing the amazing show of support we received from donors last year (and continuing into 2018), we had significantly more funds than we anticipated, and we found more ways to usefully spend it than we expected. In particular, we've been able to translate the \"bonus\" support we received in 2017 into broadening the scope of our recruiting efforts. As a consequence, our 2018 spending, which will come in at around $3.5M, actually matches the point estimate I gave in 2017 for our 2019 budget, rather than my prediction for 2018—a large step up from what I predicted, and an even larger step from last year's [2017] budget of $2.1M.\nThe post goes on to give an overview of the ways in which we put this \"bonus\" support to good use. These included, in descending order by cost:\n\nInvesting significantly more in recruiting-related activities, including our AIRCS workshop series; and scaling up the number of interns we hosted, with an increased willingness to pay higher wages to attract promising candidates to come intern/trial with us.\nFiltering less on price relative to fit when choosing new office space to accommodate our growth, and spending more on renovations, than we otherwise would have been able to, in order to create a more focused working environment for research staff.\nRaising salaries for some existing staff, who were being paid well below market rates.\n\nWith concrete numbers now in hand, I'll go into more detail below on how we put those additional funds to work.\nTotal spending came in just over $3.75M. The chart below compares our actual spending in 2018 with our projections, and with our spending in 2017.10\n\n\nAt a high level, as expected, personnel costs in 2018 continued to account for the majority of our spending—though represented a smaller share of total spending than in 2017, due to increased spending on recruitment-related activities along with one-time costs related to securing and renovating our new office space.\nOur spending on recruitment-related activities is captured in the program activities category. The major ways we put additional funds to use, which account for the increase over my projections, break down as follows:\n\n~$170k on internships: We hosted nine research interns for an average of ~2.5 months each. 
We were able to offer more competitive wages for internships, allowing us to recruit interns (especially those with an engineering focus) that we otherwise would have had a much harder time attracting, given the other opportunities they had available to them. We are actively interested in hiring three of these interns, and have made formal offers to two of them. I'm hopeful that we'll have added at least one of them to the team by the end of this year.\n$54k on AI Safety Retraining Program grants, described above.\nThe bulk of the rest of the additional funds we spent in this category went towards funding our ongoing series of AI Risk for Computer Scientists workshops, described above.\n\nExpenses related to our new office space are accounted for in the cost of doing business category. The surplus spending in this category resulted from:\n\n~$300k for securing, renovating, and filling out our new office space. Finding a suitable new space to accommodate our growth in Berkeley ended up being much more challenging and time-consuming than we expected.11 We made use of additional funds to secure our preferred space ahead of when we were prepared to move, and to renovate the space to meet our needs, whereas if we'd been operating with the budget I originally projected, we would have almost certainly ended up in a much worse space.\nThe remainder of the spending beyond my projection in this category comes from higher-than-expected legal costs to secure visas for staff, and slightly higher-than-projected spending across many other subcategories.\n\n\n\n1. Our summaries of our more significant results below largely come from our 2018 fundraiser post.\n2. Not to be confused with Nate Soares' forthcoming tiling agents paper.\n3. Evan was a MIRI research intern, while Chris, Vladimir, and Joar are external collaborators.\n4. This paper was previously cited in \"Embedded Agency\" under the working title \"The Inner Alignment Problem.\"\n5. The full PDF version of the paper will be released in conjunction with the last post of the sequence.\n6. As noted in our summer updates:\nWe had a large and extremely strong pool of applicants, with over 170 applications for 30 slots (versus 50 applications for 20 slots in 2017). The program this year was more mathematically flavored than in 2017, and concluded with a flurry of new analyses by participants. 
On the whole, the program seems to have been more successful at digging into AI alignment problems than in previous years, as well as more successful at seeding ongoing collaborations between participants, and between participants and MIRI staff.\nThe program ended with a very active blogathon, with write-ups including: Dependent Type Theory and Zero-Shot Reasoning; Conceptual Problems with Utility Functions (and follow-up); Complete Class: Consequentialist Foundations; and Agents That Learn From Human Behavior Can't Learn Human Values That Humans Haven't Learned Yet.\n7. Note that amounts in this section may vary slightly from our audited financial statements, due to small differences between how we tracked donations internally, and how we are required to report them in our financial statements.\n8. A big thanks to Colm for all the work he's put into setting this up; have a look at our Tax-Advantaged Donations page for more information.\n9. 2014 is anomalously high on this graph due to the community's active participation in our memorable SVGives campaign.\n10. Note that these numbers will differ slightly compared to our forthcoming audited financial statements for 2018, due to subtleties of how certain types of expenses are tracked. For example, in the financial statements, renovation costs are considered to be a fixed asset that depreciates over time, and as such, won't show up as an expense.\n11. The number of options available in the relevant time frame was very limited, and most did not meet many of our requirements. Of the available spaces, the option that offered the best combination of size, layout, and location was looking for a tenant starting November 1st, 2018, while we weren't able to move until early January 2019. Additionally, the space was configured with a very open layout that wouldn't have met our needs, but that many other prospective tenants found desirable, such that we'd have to cover renovation costs.\nThe post 2018 in review appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "2018 in review", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=11", "id": "1210c2aba28167426c14675fabc02534"} {"text": "May 2019 Newsletter\n\n\nUpdates\n\nA new paper from MIRI researcher Vanessa Kosoy, presented at the ICLR SafeML workshop this week: \"Delegative Reinforcement Learning: Learning to Avoid Traps with a Little Help.\"\nNew research posts: Learning \"Known\" Information When the Information is Not Actually Known; Defeating Goodhart and the \"Closest Unblocked Strategy\" Problem; Reinforcement Learning with Imperceptible Rewards\nThe Long-Term Future Fund has announced twenty-three new grant recommendations, and provided in-depth explanations of the grants. These include a $50,000 grant to MIRI, and grants to CFAR and Ought. 
LTFF is also recommending grants to several individuals with AI alignment research proposals whose work MIRI staff will be helping assess.\nWe attended the Global Governance of AI Roundtable at the World Government Summit in Dubai.\n\nNews and links\n\nRohin Shah reflects on the first year of the Alignment Newsletter.\nSome good recent AI alignment discussion: Alex Turner asks for the best reasons for pessimism about impact measures; Henrik Åslund and Ryan Carey discuss corrigibility as constrained optimization; Wei Dai asks about low-cost AGI coordination; and Chris Leong asks, \"Would solving counterfactuals solve anthropics?\"\nFrom DeepMind: Towards Robust and Verified AI: Specification Testing, Robust Training, and Formal Verification.\nIlya Sutskever and Greg Brockman discuss OpenAI's new status as a \"hybrid of a for-profit and nonprofit\".\nMisconceptions about China and AI: Julia Galef interviews Helen Toner. (Excerpts.)\n\n\nThe post May 2019 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "May 2019 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=11", "id": "cce26e687758f9bf35159df2c615ccf9"} {"text": "New paper: \"Delegative reinforcement learning\"\n\nMIRI Research Associate Vanessa Kosoy has written a new paper, \"Delegative reinforcement learning: Learning to avoid traps with a little help.\" Kosoy will be presenting the paper at the ICLR 2019 SafeML workshop in two weeks. The abstract reads:\nMost known regret bounds for reinforcement learning are either episodic or assume an environment without traps. We derive a regret bound without making either assumption, by allowing the algorithm to occasionally delegate an action to an external advisor. We thus arrive at a setting of active one-shot model-based reinforcement learning that we call DRL (delegative reinforcement learning.)\nThe algorithm we construct in order to demonstrate the regret bound is a variant of Posterior Sampling Reinforcement Learning supplemented by a subroutine that decides which actions should be delegated. The algorithm is not anytime, since the parameters must be adjusted according to the target time discount. Currently, our analysis is limited to Markov decision processes with finite numbers of hypotheses, states and actions.\nThe goal of Kosoy's work on DRL is to put us on a path toward having a deep understanding of learning systems with human-in-the-loop and formal performance guarantees, including safety guarantees. DRL tries to move us in this direction by providing models in which such performance guarantees can be derived.\nWhile these models still make many unrealistic simplifying assumptions, Kosoy views DRL as already capturing some of the most essential features of the problem—and she has a fairly ambitious vision of how this framework might be further developed.\nKosoy previously described DRL in the post Delegative Reinforcement Learning with a Merely Sane Advisor. One feature of DRL Kosoy described here but omitted from the paper (for space reasons) is DRL's application to corruption. Given certain assumptions, DRL ensures that a formal agent will never have its reward or advice channel tampered with (corrupted). As a special case, the agent's own advisor cannot cause the agent to enter a corrupt state. 
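To make the delegation idea described above a bit more concrete, here is a minimal toy sketch in Python. It is not the algorithm from Kosoy's paper (which is a posterior-sampling RL variant with formal regret bounds); it only illustrates the core move of handing control to an advisor whenever the agent's surviving hypotheses disagree about whether an action might be a trap. The class name, environment interface, and example advisor are all invented for illustration.

```python
import random

# Toy illustration only: a "delegative" policy that hands control to an
# advisor whenever any surviving hypothesis flags the agent's intended
# action as a possible trap (an unrecoverable state). The real DRL
# algorithm is a Posterior Sampling RL variant with regret guarantees;
# this sketch just shows the delegation logic.

class DelegativeAgent:
    def __init__(self, hypotheses):
        # Each hypothesis is a guess about the environment; here, just the
        # set of actions that hypothesis considers traps.
        self.hypotheses = list(hypotheses)

    def possibly_trap(self, action):
        # An action is "possibly a trap" if any surviving hypothesis says so.
        return any(action in h["traps"] for h in self.hypotheses)

    def choose(self, actions, advisor):
        candidate = random.choice(actions)  # stand-in for the agent's own policy
        if self.possibly_trap(candidate):
            # Delegate: let the advisor act, and treat its choice as evidence
            # against hypotheses that label that choice a trap (reflecting the
            # assumption that the advisor avoids traps).
            delegated = advisor(actions)
            self.hypotheses = [h for h in self.hypotheses
                               if delegated not in h["traps"]]
            return delegated
        return candidate

def safe_advisor(actions):
    # Invented advisor for the example: it never takes action "c".
    return next(a for a in actions if a != "c")

agent = DelegativeAgent([{"traps": set()}, {"traps": {"c"}}])
print(agent.choose(["a", "b", "c"], safe_advisor))
```

The pruning step is the toy analogue of why delegation helps: actions the advisor is actually willing to take get certified as safe, so the agent delegates less and less over time.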
Similarly, the general protection from traps described in \"Delegative reinforcement learning\" also protects the agent from harmful self-modifications.\nAnother set of DRL results that didn't make it into the paper is Catastrophe Mitigation Using DRL. In this variant, a DRL agent can mitigate catastrophes that the advisor would not be able to mitigate on its own—something that isn't supported by the more strict assumptions about the advisor in standard DRL.\n \n\n\n\nSign up to get updates on new MIRI technical results\nGet notified every time a new technical paper is published.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n×\n\n\n \nThe post New paper: \"Delegative reinforcement learning\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New paper: “Delegative reinforcement learning”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=11", "id": "f245be4f0dcc4e0fde10266dade53ea6"} {"text": "April 2019 Newsletter\n\n\nUpdates\n\nNew research posts: Simplified Preferences Needed, Simplified Preferences Sufficient; Smoothmin and Personal Identity; Example Population Ethics: Ordered Discounted Utility; A Theory of Human Values; A Concrete Proposal for Adversarial IDA\nMIRI has received a set of new grants from the Open Philanthropy Project and the Berkeley Existential Risk Initiative.\n\nNews and links\n\nFrom the DeepMind safety team and Alex Turner: Designing Agent Incentives to Avoid Side Effects.\nFrom Wei Dai: Three Ways That \"Sufficiently Optimized Agents Appear Coherent\" Can Be False; What's Wrong With These Analogies for Understanding Informed Oversight and IDA?; and The Main Sources of AI Risk?\nOther recent write-ups: Issa Rice's Comparison of Decision Theories; Paul Christiano's More Realistic Tales of Doom; and Linda Linsefors' The Game Theory of Blackmail.\nOpenAI's Geoffrey Irving describes AI safety via debate on FLI's AI Alignment Podcast.\nA webcomic's take on AI x-risk concepts: Seed.\n\n\nThe post April 2019 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "April 2019 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=11", "id": "1ea2165d6da96697e5f985edbd8182cc"} {"text": "New grants from the Open Philanthropy Project and BERI\n\nI'm happy to announce that MIRI has received two major new grants:\n\nA two-year grant totaling $2,112,500 from the Open Philanthropy Project.\nA $600,000 grant from the Berkeley Existential Risk Initiative.\n\nThe Open Philanthropy Project's grant was awarded as part of the first round of grants recommended by their new committee for effective altruism support:\nWe are experimenting with a new approach to setting grant sizes for a number of our largest grantees in the effective altruism community, including those who work on long-termist causes. Rather than have a single Program Officer make a recommendation, we have created a small committee, comprised of Open Philanthropy staff and trusted outside advisors who are knowledgeable about the relevant organizations. […] We average the committee members' votes to arrive at final numbers for our grants.\nThe Open Philanthropy Project's grant is separate from the three-year $3.75 million grant they awarded us in 2017, the third $1.25 million disbursement of which is still scheduled for later this year. 
This new grant increases the Open Philanthropy Project's total support for MIRI from $1.4 million[1] in 2018 to ~$2.31 million in 2019, but doesn't reflect any decision about how much total funding MIRI might receive from Open Phil in 2020 (beyond the fact that it will be at least ~$1.06 million).\nGoing forward, the Open Philanthropy Project currently plans to determine the size of any potential future grants to MIRI using the above committee structure.\nWe're very grateful for this increase in support from BERI and the Open Philanthropy Project—both organizations that already numbered among our largest funders of the past few years. We expect these grants to play an important role in our decision-making as we continue to grow our research team in the ways described in our 2018 strategy update and fundraiser posts.\n[1] The $1.4 million counts the Open Philanthropy Project's $1.25 million disbursement in 2018, as well as a $150,000 AI Safety Retraining Program grant to MIRI.\nThe post New grants from the Open Philanthropy Project and BERI appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New grants from the Open Philanthropy Project and BERI", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=11", "id": "2872623fefe4558089559a7691f0107a"} {"text": "March 2019 Newsletter\n\n\n Want to be in the reference class \"people who solve the AI alignment problem\"?\n We now have a guide on how to get started, based on our experience of what tends to make research groups successful. (Also on the AI Alignment Forum.)\nOther updates\n\nDemski and Garrabrant's introduction to MIRI's agent foundations research, \"Embedded Agency,\" is now available (in lightly edited form) as an arXiv paper.\nNew research posts: How Does Gradient Descent Interact with Goodhart?; \"Normative Assumptions\" Need Not Be Complex; How the MtG Color Wheel Explains AI Safety; Pavlov Generalizes\nSeveral MIRIx groups are expanding and are looking for new members to join.\nOur summer fellows program is accepting applications through March 31.\nLessWrong's web edition of Rationality: From AI to Zombies at lesswrong.com/rationality is now fully updated to reflect the print edition of Map and Territory and How to Actually Change Your Mind, the first two books. (Announcement here.)\n\nNews and links\n\nOpenAI's GPT-2 model shows meaningful progress on a wide variety of language tasks. OpenAI adds:\nDue to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights. […] We believe our release strategy limits the initial set of organizations who may choose to [open source our results], and gives the AI community more time to have a discussion about the implications of such systems.\n\nThe Verge discusses OpenAI's language model concerns along with MIRI's disclosure policies for our own research. 
See other discussion by Jeremy Howard, John Seymour, and Ryan Lowe.\nAI Impacts summarizes evidence on good forecasting practices from the Good Judgment Project.\nRecent AI alignment ideas and discussion: Carey on quantilization; Filan on impact regularization methods; Saunders' HCH Is Not Just Mechanical Turk and RL in the Iterated Amplification Framework; Dai on philosophical difficulty (1, 2); Hubinger on ascription universality; and Everitt's Understanding Agent Incentives with Causal Influence Diagrams.\nThe Open Philanthropy Project announces their largest grant to date: $55 million to launch the Center for Security and Emerging Technology, a Washington, D.C. think tank with an early focus on \"the intersection of security and artificial intelligence\". See also CSET's many jobpostings.\n\n\nThe post March 2019 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "March 2019 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=12", "id": "44e1ab71c56b315c27363259d7d81e68"} {"text": "Applications are open for the MIRI Summer Fellows Program!\n\nCFAR and MIRI are running our fifth annual MIRI Summer Fellows Program (MSFP) in the San Francisco Bay Area from August 9 to August 24, 2019. \nMSFP is an extended retreat for mathematicians and programmers with a serious interest in making technical progress on the problem of AI alignment. It includes an overview of CFAR's applied rationality content, a breadth-first grounding in the MIRI perspective on AI safety, and multiple days of actual hands-on research with participants and MIRI staff attempting to make inroads on open questions.\n\nProgram Description\nThe intent of the program is to boost participants, as far as possible, in four overlapping areas:\nDoing rationality inside a human brain: understanding, with as much fidelity as possible, what phenomena and processes drive and influence human thinking and reasoning, so that we can account for our own biases and blindspots, better recruit and use the various functions of our brains, and, in general, be less likely to trick ourselves, gloss over our confusions, or fail to act in alignment with our endorsed values.\nEpistemic rationality, especially the subset of skills around deconfusion. Building the skill of noticing where the dots don't actually connect; answering the question \"why do we think we know what we think we know?\", particularly when it comes to predictions and assertions around the future development of artificial intelligence.\nGrounding in the current research landscape surrounding AI: being aware of the primary disagreements among leaders in the field, and the arguments for various perspectives and claims. Understanding the current open questions, and why different ones seem more pressing or real under different assumptions. Being able to follow the reasoning behind various alignment schemes/theories/proposed interventions, and being able to evaluate those interventions with careful reasoning and mature (or at least more-mature-than-before) intuitions.\nGenerative research skill: the ability to make real and relevant progress on questions related to the field of AI alignment without losing track of one's own metacognition. The parallel processes of using one's mental tools, critiquing and improving one's mental tools, and making one's own progress or deconfusion available to others through talks, papers, and models. 
Anything and everything involved in being the sort of thinker who can locate a good question, sniff out promising threads, and collaborate effectively with others and with the broader research ecosystem.\nFood and lodging are provided free of charge at CFAR's workshop venue in Bodega Bay, California. Participants must be able to remain onsite, largely undistracted for the duration of the program (e.g. no major appointments in other cities, no large looming academic or professional deadlines just after the program).\nIf you have any questions or comments, please send an email to , or, if you suspect others would also benefit from hearing the answer, post them here.\nUpdate April 23, 2019: Applications closed on March 31, 2019 and finalists are being contacted by a MIRI staff member for 1–2 Skype interviews. Admissions decisions — yes, no, waitlist — will go out no later than April 30th.\n\nThe post Applications are open for the MIRI Summer Fellows Program! appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Applications are open for the MIRI Summer Fellows Program!", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=12", "id": "304b95aec1b02681cb4594726ae6c641"} {"text": "A new field guide for MIRIx\n\n\nWe've just released a field guide for MIRIx groups, and for other people who want to get involved in AI alignment research.\nMIRIx is a program where MIRI helps cover basic expenses for outside groups that want to work on open problems in AI safety. You can start your own group or find information on existing meet-ups at intelligence.org/mirix.\nSeveral MIRIx groups have recently been ramping up their activity, including:\n\nUC Irvine: Daniel Hermann is starting a MIRIx group in Irvine, California. Contact him if you'd like to be involved.\nSeattle: MIRIxSeattle is a small group that's in the process of restarting and increasing its activities. Contact Pasha Kamyshev if you're interested.\nVancouver: Andrew McKnight and Evan Gaensbauer are looking for more people who'd like to join MIRIxVancouver events.\n\nThe new alignment field guide is intended to provide tips and background models to MIRIx groups, based on our experience of what tends to make a research group succeed or fail.\nThe guide begins:\n\nPreamble I: Decision Theory\nHello! You may notice that you are reading a document.\nThis fact comes with certain implications. For instance, why are you reading this? Will you finish it? What decisions will you come to as a result? What will you do next?\nNotice that, whatever you end up doing, it's likely that there are dozens or even hundreds of other people, quite similar to you and in quite similar positions, who will follow reasoning which strongly resembles yours, and make choices which correspondingly match.\nGiven that, it's our recommendation that you make your next few decisions by asking the question \"What policy, if followed by all agents similar to me, would result in the most good, and what does that policy suggest in my particular case?\" It's less of a question of trying to decide for all agents sufficiently-similar-to-you (which might cause you to make the wrong choice out of guilt or pressure) and more something like \"if I were in charge of all agents in my reference class, how would I treat instances of that class with my specific characteristics?\"\nIf that kind of thinking leads you to read further, great. If it leads you to set up a MIRIx chapter, even better. 
In the meantime, we will proceed as if the only people reading this document are those who justifiably expect to find it reasonably useful.\nPreamble II: Surface Area\nImagine that you have been tasked with moving a cube of solid iron that is one meter on a side. Given that such a cube weighs ~16000 pounds, and that an average human can lift ~100 pounds, a naïve estimation tells you that you can solve this problem with ~150 willing friends.\nBut of course, a meter cube can fit at most something like 10 people around it. It doesn't matter if you have the theoretical power to move the cube if you can't bring that power to bear in an effective manner. The problem is constrained by its surface area.\nMIRIx chapters are one of the best ways to increase the surface area of people thinking about and working on the technical problem of AI alignment. And just as it would be a bad idea to decree \"the 10 people who happen to currently be closest to the metal cube are the only ones allowed to think about how to think about this problem\", we don't want MIRI to become the bottleneck or authority on what kinds of thinking can and should be done in the realm of embedded agency and other relevant fields of research.\nThe hope is that you and others like you will help actually solve the problem, not just follow directions or read what's already been written. This document is designed to support people who are interested in doing real groundbreaking research themselves.\n(Read more)\n \n\nThe post A new field guide for MIRIx appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "A new field guide for MIRIx", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=12", "id": "a84339f0bf07cd4a9908c796c789a50d"} {"text": "February 2019 Newsletter\n\n\nUpdates\n\nRamana Kumar and Scott Garrabrant argue that the AGI safety community should begin prioritizing \"approaches that work well in the absence of human models\": \n[T]o the extent that human modelling is a good idea, it is important to do it very well; to the extent that it is a bad idea, it is best to not do it at all. Thus, whether or not to do human modelling at all is a configuration bit that should probably be set early when conceiving of an approach to building safe AGI.\n\nNew research forum posts: Conditional Oracle EDT Equilibria in Games; Non-Consequentialist Cooperation?; When is CDT Dutch-Bookable?; CDT=EDT=UDT\nThe MIRI Summer Fellows Program is accepting applications through the end of March! MSFP is a free two-week August retreat co-run by MIRI and CFAR, intended to bring people up to speed on problems related to embedded agency and AI alignment, train research-relevant skills and habits, and investigate open problems in the field.\nMIRI's Head of Growth, Colm Ó Riain, reviews how our 2018 fundraiser went.\nFrom Eliezer Yudkowsky: \"Along with adversarial resistance and transparency, what I'd term 'conservatism', or trying to keep everything as interpolation rather than extrapolation, is one of the few areas modern ML can explore that I see as having potential to carry over directly to serious AGI safety.\"\n\nNews and links\n\nEric Drexler has released his book-length AI safety proposal: Reframing Superintelligence: Comprehensive AI Services as General Intelligence. 
See discussion by Peter McCluskey, Richard Ngo, and Rohin Shah.\nOther recent AI alignment posts include Andreas Stuhlmüller's Factored Cognition and Alex Turner's Penalizing Impact via Attainable Utility Preservation, and a host of new write-ups by Stuart Armstrong.\n\n\nThe post February 2019 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "February 2019 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=12", "id": "fd6eca47d75435bdc1bbf745f3688c4b"} {"text": "Thoughts on Human Models\n\n\nThis is a joint post by MIRI Research Associate and DeepMind Research Scientist Ramana Kumar and MIRI Research Fellow Scott Garrabrant, cross-posted from the AI Alignment Forum and LessWrong.\n\nHuman values and preferences are hard to specify, especially in complex domains. Accordingly, much AGI safety research has focused on approaches to AGI design that refer to human values and preferences indirectly, by learning a model that is grounded in expressions of human values (via stated preferences, observed behaviour, approval, etc.) and/or real-world processes that generate expressions of those values. There are additionally approaches aimed at modelling or imitating other aspects of human cognition or behaviour without an explicit aim of capturing human preferences (but usually in service of ultimately satisfying them). Let us refer to all these models as human models.\nIn this post, we discuss several reasons to be cautious about AGI designs that use human models. We suggest that the AGI safety research community put more effort into developing approaches that work well in the absence of human models, alongside the approaches that rely on human models. This would be a significant addition to the current safety research landscape, especially if we focus on working out and trying concrete approaches as opposed to developing theory. We also acknowledge various reasons why avoiding human models seems difficult.\n \nProblems with Human Models\nTo be clear about human models, we draw a rough distinction between our actual preferences (which may not be fully accessible to us) and procedures for evaluating our preferences. The first thing, actual preferences, is what humans actually want upon reflection. Satisfying our actual preferences is a win. The second thing, procedures for evaluating preferences, refers to various proxies for our actual preferences such as our approval, or what looks good to us (with necessarily limited information or time for thinking). Human models are in the second category; consider, as an example, a highly accurate ML model of human yes/no approval on the set of descriptions of outcomes. Our first concern, described below, is about overfitting to human approval and thereby breaking its connection to our actual preferences. (This is a case of Goodhart's law.)\n\nLess Independent Audits\nImagine we have built an AGI system and we want to use it to design the mass transit system for a new city. The safety problems associated with such a project are well recognised; suppose we are not completely sure we have solved them, but are confident enough to try anyway. We run the system in a sandbox on some fake city input data and examine its outputs. Then we run it on some more outlandish fake city data to assess robustness to distributional shift. 
The AGI's outputs look like reasonable transit system designs and considerations, and include arguments, metrics, and other supporting evidence that they are good. Should we be satisfied and ready to run the system on the real city's data, and to implement the resulting proposed design?\nWe suggest that an important factor in the answer to this question is whether the AGI system was built using human modelling or not. If it produced a solution to the transit design problem (that humans approve of) without human modelling, then we would more readily trust its outputs. If it produced a solution we approve of with human modelling, then although we expect the outputs to be in many ways about good transit system design (our actual preferences) and in many ways suited to being approved by humans, to the extent that these two targets come apart we must worry about having overfit to the human model at the expense of the good design. (Why not the other way around? Because our assessment of the sandboxed results uses human judgement, not an independent metric for satisfaction of our actual preferences.)\nHumans have a preference for not being wrong about the quality of a design, let alone being fooled about it. How much do we want to rely on having correctly captured these preferences in our system? If the system is modelling humans, we strongly rely on the system learning and satisfying these preferences, or else we expect to be fooled to the extent that a good-looking but actually bad transit system design is easier to compose than an actually-good design. On the other hand, if the system is not modelling humans, then the fact that its output looks like a good design is better evidence that it is in fact a good design. Intuitively, if we consider sampling possible outputs and condition on the output looking good (via knowledge of humans), the probability of it being good (via knowledge of the domain) is higher when the system's knowledge is more about what is good than what looks good.\nHere is a handle for this problem: a desire for an independent audit of the system's outputs. When a system uses human modelling, the mutual information between its outputs and the auditing process (human judgement) is higher. Thus, using human models reduces our ability to do independent audits.\nAvoiding human models does not avoid this problem altogether. There is still an \"outer-loop optimisation\" version of the problem. If the system produces a weird or flawed design in sandbox, and we identify this during an audit, we will probably reject the solution and attempt to debug the system that produced it. This introduces a bias on the overall process (involving multiple versions of the system over phases of auditing and debugging) towards outputs that fool our auditing procedure.\nHowever, outer-loop optimisation pressures are weaker, and therefore less worrying, than in-loop optimisation pressures. We would argue that the problem is much worse, i.e., the bias towards fooling is stronger, when one uses human modelling. This is because the relevant optimisation is in-loop instead and is encountered more often.\nAs one more analogy to illustrate this point, consider a classic Goodhart's law example of teaching to the test. If you study the material, then take a test, your test score reveals your knowledge of the material fairly well. 
If you instead study past tests, your test score reveals your ability to pass tests, which may be correlated with your knowledge of the material but is increasingly less likely to be so correlated as your score goes up. Here human modelling is analogous to past tests and actual preferences are analogous to the material. Taking the test is analogous to an audit, which we want to be independent from the study regimen.\nRisk from Bugs\nWe might implement our first AGI system incorrectly in a mundane sense. Specifically, even if we fully develop a theory of safe or aligned AGI, we might fail to implement that theory due to bugs or problems with our implementation techniques. In this case, we would be relatively better off if the mutual information between the AGI's knowledge and human preferences is low. We expect the system's behaviour to be dependent on its knowledge in some way, and we expect implementation errors to shift the nature of that dependence away from our intentions and expectations. Incorrect behaviour that depends on human preferences seems more dangerous than incorrect behaviour that does not.\nConsider the space of AGI system implementations, under a metric like similarity to an intended design (equivalently: severity of deviation from the design due to bugs). We want all the points near the first AGI system we build to be safe, because we may end up with a slightly different design than intended for reasons such as being confused about what we are doing or making implementation errors.\nThere are at least three ways in which the risk from bugs can manifest.\nIncorrectly Encoded Values: Supposing we intend the first use of AGI to be solving some bounded and well-specified task, but we misunderstand or badly implement it so much that what we end up with is actually unboundedly optimising some objective function. Then it seems better if that objective is something abstract like puzzle solving rather than something more directly connected to human preferences: consider, as a toy example, if the sign (positive/negative) around the objective were wrong.\nManipulation: The earlier arguments for independent audits do not just apply to the specific tasks we would plan to audit, but also to any activities an AGI system might carry out that humans might disapprove of. Examples include finding ways to hack into our supposedly secure systems, hiding its intentions and activity from us, or outright manipulating us. These tasks are much easier with access to a good psychological model of humans, which can be used to infer what mistakes we might make, or what loopholes we might overlook, or how we might respond to different behaviour from the system.\nHuman modelling is very close to human manipulation in design space. A system with accurate models of humans is close to a system which successfully uses those models to manipulate humans.\nThreats: Another risk from bugs comes not from the AGI system caring incorrectly about our values, but from having inadequate security. If our values are accurately encoded in an AGI system that cares about satisfying them, they become a target for threats from other actors who can gain from manipulating the first system. 
More examples and perspectives on this problem have been described here.\nThe increased risk from bugs of human modelling can be summarised as follows: whatever the risk that AGI systems produce catastrophic outcomes due to bugs, the very worst outcomes seem more likely if the system was trained using human modelling because these worst outcomes depend on the information in human models.\nLess independent audits and the risk from bugs can both be mitigated by preserving independence of the system from human model information, so the system cannot overfit to that information or use it perversely. The remaining two problems we consider, mind crime and unexpected agents, depend more heavily on the claim that modelling human preferences increases the chances of simulating something human-like.\nMind Crime\nMany computations may produce entities that are morally relevant because, for example, they constitute sentient beings that experience pain or pleasure. Bostrom calls improper treatment of such entities \"mind crime\". Modelling humans in some form seems more likely to result in such a computation than not modelling them, since humans are morally relevant and the system's models of humans may end up sharing whatever properties make humans morally relevant.\nUnexpected Agents\nSimilar to the mind crime point above, we expect AGI designs that use human modelling to be more at risk of producing subsystems that are agent-like, because humans are agent-like. For example, we note that trying to predict the output of consequentialist reasoners can reduce to an optimisation problem over a space of things that contains consequentialist reasoners. A system engineered to predict human preferences well seems strictly more likely to run into problems associated with misaligned sub-agents. (Nevertheless, we think the amount by which it is more likely is small.)\n \nSafe AGI Without Human Models is Neglected\nGiven the independent auditing concern, plus the additional points mentioned above, we would like to see more work done on practical approaches to developing safe AGI systems that do not depend on human modelling. At present, this is a neglected area in the AGI safety research landscape. Specifically, work of the form \"Here's a proposed approach, here are the next steps to try it out or investigate further\", which we might term engineering-focused research, is almost entirely done in a human-modelling context. Where we do see some safety work that eschews human modelling, it tends to be theory-focused research, for example, MIRI's work on agent foundations. This does not fill the gap of engineering-focused work on safety without human models.\nTo flesh out the claim of a gap, consider the usual formulations of each of the following efforts within safety research: iterated distillation and amplification, debate, recursive reward modelling, cooperative inverse reinforcement learning, and value learning. In each case, there is human modelling built into the basic setup for the approach. However, we note that the technical results in these areas may in some cases be transportable to a setup without human modelling, if the source of human feedback (etc.) 
is replaced with a purely algorithmic, independent system.\nSome existing work that does not rely on human modelling includes the formulation of safely interruptible agents, the formulation of impact measures (or side effects), approaches involving building AI systems with clear formal specifications (e.g., some versions of tool AIs), some versions of oracle AIs, and boxing/containment. Although they do not rely on human modelling, some of these approaches nevertheless make most sense in a context where human modelling is happening: for example, impact measures seem to make most sense for agents that will be operating directly in the real world, and such agents are likely to require human modelling. Nevertheless, we would like to see more work of all these kinds, as well as new techniques for building safe AGI that does not rely on human modelling.\n \nDifficulties in Avoiding Human Models\nA plausible reason why we do not yet see much research on how to build safe AGI without human modelling is that it is difficult. In this section, we describe some distinct ways in which it is difficult.\nUsefulness\nIt is not obvious how to put a system that does not do human modelling to good use. At least, it is not as obvious as for the systems that do human modelling, since they draw directly on sources (e.g., human preferences) of information about useful behaviour. In other words, it is unclear how to solve the specification problem—how to correctly specify desired (and only desired) behaviour in complex domains—without human modelling. The \"against human modelling\" stance calls for a solution to the specification problem wherein useful tasks are transformed into well-specified, human-independent tasks either solely by humans or by systems that do not model humans.\nTo illustrate, suppose we have solved some well-specified, complex but human-independent task like theorem proving or atomically precise manufacturing. Then how do we leverage this solution to produce a good (or better) future? Empowering everyone, or even a few people, with access to a superintelligent system that does not directly encode their values in some way does not obviously produce a future where those values are realised. (This seems related to Wei Dai's human-safety problem.)\nImplicit Human Models\nEven seemingly \"independent\" tasks leak at least a little information about their origins in human motivations. Consider again the mass transit system design problem. Since the problem itself concerns the design of a system for use by humans, it seems difficult to avoid modelling humans at all in specifying the task. More subtly, even highly abstract or generic tasks like puzzle solving contain information about the sources/designers of the puzzles, especially if they are tuned for encoding more obviously human-centred problems. (Work by Shah et al. looks at using the information about human preferences that is latent in the world.)\nSpecification Competitiveness / Do What I Mean\nExplicit specification of a task in the form of, say, an optimisation objective (of which a reinforcement learning problem would be a specific case) is known to be fragile: there are usually things we care about that get left out of explicit specifications. This is one of the motivations for seeking more and more high level and indirect specifications, leaving more of the work of figuring out what exactly is to be done to the machine. 
However, it is currently hard to see how to automate the process of turning tasks (vaguely defined) into correct specifications without modelling humans.\nPerformance Competitiveness of Human Models\nIt could be that modelling humans is the best way to achieve good performance on various tasks we want to apply AGI systems to for reasons that are not simply to do with understanding the problem specification well. For example, there may be aspects of human cognition that we want to more or less replicate in an AGI system, for competitiveness at automating those cognitive functions, and those aspects may carry a lot of information about human preferences with them in a hard-to-separate way.\n \nWhat to Do Without Human Models?\nWe have seen arguments for and against aspiring to solve AGI safety using human modelling. Looking back on these arguments, we note that to the extent that human modelling is a good idea, it is important to do it very well; to the extent that it is a bad idea, it is best to not do it at all. Thus, whether or not to do human modelling at all is a configuration bit that should probably be set early when conceiving of an approach to building safe AGI.\nIt should be noted that the arguments above are not intended to be decisive, and there may be countervailing considerations which mean we should promote the use of human models despite the risks outlined in this post. However, to the extent that AGI systems with human models are more dangerous than those without, there are two broad lines of intervention we might attempt. Firstly, it may be worthwhile to try to decrease the probability that advanced AI develops human models \"by default\", by promoting some lines of research over others. For example, an AI trained in a procedurally-generated virtual environment seems significantly less likely to develop human models than an AI trained on human-generated text and video data.\nSecondly, we can focus on safety research that does not require human models, so that if we eventually build AGI systems that are highly capable without using human models, we can make them safer without needing to teach them to model humans. Examples of such research, some of which we mentioned earlier, include developing human-independent methods to measure negative side effects, to prevent specification gaming, to build secure approaches to containment, and to extend the usefulness of task-focused systems.\n \nAcknowledgements: thanks to Daniel Kokotajlo, Rob Bensinger, Richard Ngo, Jan Leike, and Tim Genewein for helpful comments on drafts of this post.\n\nThe post Thoughts on Human Models appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Thoughts on Human Models", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=12", "id": "9ff500c9dd6987f7da1c11b3326bc393"} {"text": "Our 2018 Fundraiser Review\n\nOur 2018 Fundraiser ended on December 31 with the five-week campaign raising $951,817 from 348 donors to help advance MIRI's mission. We surpassed our Mainline Target ($500k) and made it more than halfway again to our Accelerated Growth Target ($1.2M). We're grateful to all of you who supported us. 
Thank you!\n \nFundraiser summary:\nTarget 1, $500,000 (Mainline target): completed. This target represents the difference between what we've raised so far this year, and our point estimate for business-as-usual spending next year.\nTarget 2, $1,200,000 (Accelerated growth target): in progress. This target represents what's needed for our funding streams to keep pace with our growth toward the upper end of our projections.\nFundraiser concluded; 348 donors contributed.\n \nWith cryptocurrency prices significantly lower than during our 2017 fundraiser, we received less of our funding (~6%) from holders of cryptocurrency this time around. Despite this, our fundraiser was a success, in significant part thanks to the leverage gained by MIRI supporters' participation in multiple matching campaigns during the fundraiser, including WeTrust Spring's Ethereum-matching campaign, Facebook's Giving Tuesday event, and professional poker player Dan Smith's Double Up Drive, expertly administered by Raising for Effective Giving.\n\nTogether with significant matching funds generated through donors' employer matching programs, matching donations accounted for ~37% of the total funds raised during the fundraiser.\n1. WeTrust Spring\nMIRI participated, along with 17 other non-profit organizations, in WeTrust Spring's innovative ETH-matching event, which ran through Giving Tuesday, November 27. The event was the first-ever implementation of Glen Weyl, Zoë Hitzig, and Vitalik Buterin's Liberal Radicalism (LR) model for non-profit funding matching. Unlike most matching campaigns, which match exclusively based on total amount donated, this campaign matched in a way that heavily factored in the number of unique donors when divvying out the matching pool, a feature WeTrust highlighted as \"Democratic Donation Matching\".\nDuring MIRI's week-long campaign leading up to Giving Tuesday, some supporters went deep into trying to determine exactly what instantiation of the model WeTrust had created — how exactly DO the large donations provide a 450% match rate for minimum donations of 0.1 ETH? (A rough sketch of the matching formula appears below.) Our supporters' excitement about this new matching model was also evident in the many donations that were made — as WeTrust reported in their blog post, \"MIRI, the Machine Intelligence Research Institute was the winner, clocking in 64 qualified donations totaling 147.751 ETH, then Lupus Foundation in second with 22 qualified donations and 23.851 total ETH.\" Thanks to our supporters' donations, MIRI received over 91% of the matching funds allotted by WeTrust and, all told, we received ETH worth more than $31,000 from the campaign. Thank you!\n2. Facebook Giving Tuesday Event\nSome of our hardiest supporters set their alarm clocks extra early to support us in Facebook's Giving Tuesday matching event, which kicked off at 5:00am EST on Giving Tuesday. Donations made before the $7M matching pool was exhausted were matched 1:1 by Facebook/PayPal, up to a maximum of $250,000 per organization, with limits of $20,000 per donor and $2,500 per donation.\nMIRI supporters, some with our tipsheet in hand, pointed their browsers — and credit cards — at MIRI's fundraiser Facebook Page (and another page set up by the folks behind the EA Giving Tuesday Donation Matching Initiative — thank you Avi and William!), and clicked early and often. During the previous year's event, it took only 86 seconds for the $2M matching pool to be exhausted.
This year saw a significantly larger $7M pool being exhausted dramatically quicker, sometime in the 16th second. Fortunately, before it ended, 11 MIRI donors had already made 20 donations totalling $40,072.\n\n\n\n \nOverall, 66% of the $61,023 donated to MIRI on Facebook on Giving Tuesday was matched by Facebook/PayPal, resulting in a total of $101,095. Thank you to everyone who participated, especially the early risers who so effectively leveraged matching funds on MIRI's behalf including Quinn Maurmann, Richard Schwall, Alan Chang, William Ehlhardt, Daniel Kokotajlo, John Davis, Herleik Holtan and others. You guys rock!\nYou can read more about the general EA community's fundraising performance on Giving Tuesday in Ari Norowitz's retrospective on the EA Forum.\n3. Double Up Drive Challenge\n\nPoker player Dan Smith and a number of his fellow professional players came together for another end-of-year Matching Challenge — once again administered by Raising for Effective Giving (REG), who have facilitated similar matching opportunities in years past. \nStarting on Giving Tuesday, November 27, $940,000 in matching funds was made available for eight charities focused on near-term causes (Malaria Consortium, GiveDirectly, Helen Keller International, GiveWell, Animal Charity Evaluators, Good Food Institute, StrongMinds and the Massachusetts Bail Fund); and, with the specific support of poker pro Aaron Merchak, $200,000 in matching funds was made available for two charities focused on improving the long-term future of our civilization, MIRI and REG.\nWith the addition of an anonymous sponsor to Dan's roster in early December, an extra $150,000 was added to the near-term causes pool and, then, a week later, after his win at the DraftKings World Championship, Tom Crowley followed through on his pledge to donate half of his total event winnings to the drive, adding significantly increased funding, $1.127M, to the drive's overall matching pool as well as 2 more organizations — Against Malaria Foundation and EA Funds' Long-Term Future Fund.\nThe last few days of the Drive saw a whirlwind of donations being made to all organizations, causing the pool of $2.417M to be exhausted 24 hours before the declared end of the drive (December 29) at which point Martin Crowley came in to match all donations made in the last day, thus increasing the matched donations to over $2.7M.\nIn total, MIRI donors had $229,000 matched during the event. We're very grateful to all these donors, to Dan Smith for instigating this phenomenally successful event, and to his fellow sponsors and especially Aaron Merchak and Martin Crowley for matching donations made to MIRI. Finally, a big shout-out to REG for facilitating and administering so effectively – thank you Stefan and Alfredo!\n4. Corporate Matching\nA number of MIRI supporters work at corporations that match contributions made by their employees to 501(c)(3) organizations like MIRI. During the duration of MIRI's fundraiser, over $62,000 in matching funds from various Employee Matching Programs was leveraged by our supporters, adding to the significant matching corporate funds already leveraged during 2018 by these and other MIRI supporters.\n \nWe're extremely grateful for all the support we received during this fundraiser, especially the effective leveraging of the numerous matching opportunities, and are excited about the opportunity it creates for us to continue to grow our research team. 
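A technical aside on the WeTrust matching mechanics described above: in the Liberal Radicalism model as published by Buterin, Hitzig, and Weyl, a project's ideal funding level is the square of the sum of the square roots of its individual donations, which is why many small donations draw far more of a fixed matching pool than a single large donation of the same total. WeTrust's exact instantiation may have differed, so the Haskell sketch below is illustrative rather than a reconstruction of their implementation; in particular, the pool-splitting step is just one simple way to handle a finite pool.

    module Main where

    -- Sketch of the Liberal Radicalism (quadratic funding) matching rule.
    -- A project's ideal funding level is the square of the sum of the
    -- square roots of its individual donations; the implied match is
    -- whatever that exceeds the amount donated directly.
    idealFunding :: [Double] -> Double
    idealFunding donations = sum (map sqrt donations) ^ (2 :: Int)

    impliedMatch :: [Double] -> Double
    impliedMatch donations = idealFunding donations - sum donations

    -- One simple way to respect a finite matching pool: scale each
    -- project's implied match proportionally.
    splitPool :: Double -> [[Double]] -> [Double]
    splitPool pool projects = [ pool * impliedMatch p / total | p <- projects ]
      where total = sum (map impliedMatch projects)

    main :: IO ()
    main = print (splitPool 100 [replicate 64 0.1, [6.4]])
    -- 64 donations of 0.1 ETH vs. one donation of 6.4 ETH: the first
    -- project is allocated essentially the entire pool.

Under this rule, a project with many unique donors is heavily favored, which is the behavior WeTrust highlighted above.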
\nIf you know of — or want to discuss — any giving/matching/support MIRI opportunities in 2019, please get in touch with me2 at . Thank you!\n\n\n\n\nThe exact total is still subject to change as we continue to process a small number of donations.\n\n\nColm Ó Riain is MIRI's Head of Growth. Colm coordinates MIRI's philanthropic and recruitment strategy to support MIRI's growth plans.\n\n\n\nThe post Our 2018 Fundraiser Review appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Our 2018 Fundraiser Review", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=12", "id": "b48054dba43cd0d44a1a3fd5a33a4ef9"} {"text": "January 2019 Newsletter\n\n\nOur December fundraiser was a success, with 348 donors contributing just over $950,000. Supporters leveraged a variety of matching opportunities, including employer matching programs, WeTrust Spring's Ethereum-matching campaign, Facebook's Giving Tuesday event, and professional poker players Dan Smith, Aaron Merchak, and Martin Crowley's Double Up Drive, expertly facilitated by Raising for Effective Giving.\nIn all, matching donations accounted for just over one third of the funds raised. Thank you to everyone who contributed!\n\nNews and links\n\nFrom NVIDIA's Tero Karras, Samuli Laine, and Timo Aila: \"A Style-Based Generator Architecture for Generative Adversarial Networks.\"\nHow AI Training Scales: OpenAI's Sam McCandlish, Jared Kaplan, and Dario Amodei introduce a method that \"predicts the parallelizability of neural network training on a wide range of tasks\".\nFrom Vox: The case for taking AI seriously as a threat to humanity. See also Gizmodo's article on AI risk.\n\n\nThe post January 2019 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "January 2019 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=13", "id": "7be9bb30e80e7f93465f06215852f247"} {"text": "December 2018 Newsletter\n\n\nEdward Kmett has joined the MIRI team! Edward is a prominent Haskell developer who popularized the use of lenses for functional programming, and currently maintains many of the libraries around the Haskell core libraries.\nI'm also happy to announce another new recruit: James Payor. James joins the MIRI research team after three years at Draftable, a software startup. He previously studied math and CS at MIT, and he holds a silver medal from the International Olympiad in Informatics, one of the most prestigious CS competitions in the world.\nIn other news, we've today released a new edition of Rationality: From AI to Zombies with a fair amount of textual revisions and (for the first time) a print edition!\nFinally, our 2018 fundraiser has passed the halfway mark on our first target! 
(And there's currently $136,000 available in dollar-for-dollar donor matching through the Double Up Drive!)\nOther updates\n\nA new paper from Stuart Armstrong and Sören Mindermann: \"Occam's Razor is Insufficient to Infer the Preferences of Irrational Agents.\"\nNew AI Alignment Forum posts: Kelly Bettors; Bounded Oracle Induction\nOpenAI's Jack Clark and Axios discuss research-sharing in AI, following up on our 2018 Update post.\nA throwback post from Eliezer Yudkowsky: Should Ethicists Be Inside or Outside a Profession?\n\nNews and links\n\nNew from the DeepMind safety team: Jan Leike's Scalable Agent Alignment via Reward Modeling (arXiv) and Victoria Krakovna's Discussion on the Machine Learning Approach to AI Safety.\nTwo recently released core Alignment Forum sequences: Rohin Shah's Value Learning and Paul Christiano's Iterated Amplification.\nOn the 80,000 Hours Podcast, Catherine Olsson and Daniel Ziegler discuss paths for ML engineers to get involved in AI safety.\nNick Bostrom has a new paper out: \"The Vulnerable World Hypothesis.\"\n\n\nThe post December 2018 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "December 2018 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=13", "id": "b5e2bb1a901ba6c60f719d707b29cee7"} {"text": "Announcing a new edition of \"Rationality: From AI to Zombies\"\n\nMIRI is putting out a new edition of Rationality: From AI to Zombies, including the first set of R:AZ print books! Map and Territory (volume 1) and How to Actually Change Your Mind (volume 2) are out today!\n \n\nMap and Territory is:\n\n$6.50 on Amazon, for the print version.\nPay-what-you-want on Gumroad, for PDF, EPUB, and MOBI versions.\n\n\n\nHow to Actually Change Your Mind is:\n\n$8 on Amazon, for the print version.\nPay-what-you-want on Gumroad, for PDF, EPUB, and MOBI versions (available within the next day).\n\n\n\n \nRationality: From AI to Zombies compiles Eliezer Yudkowsky's original Overcoming Bias and Less Wrong sequences, modified to form a more cohesive whole as books.\nMap and Territory is the canonical starting point, though we've tried to make How to Actually Change Your Mind a good jumping-on point too, since we expect different people to take interest in one book or the other.\nThe previous edition of Rationality: From AI to Zombies was digital-only, and took the form of a single sprawling ebook. The new version has been revised a fair amount, with larger changes including:\n \n\nThe first sequence in Map and Territory, \"Predictably Wrong,\" has been substantially reorganized and rewritten, with a goal of making it a much better experience for new readers.\n \n\nMore generally, the books are now more optimized for new readers and less focused on extreme fidelity to Eliezer's original blog posts, as this was one of the largest requests we got in response to the previous edition of Rationality: From AI to Zombies. Although the book as a whole is mostly unchanged, this represented an update about which option to pick in quite a few textual tradeoffs.\n \n\nA fair number of essays have been added, removed, or rearranged.
The \"Against Doublethink\" sequence in How to Actually Change Your Mind has been removed entirely, except for one essay (\"Singlethink\").\n \n\nImportant links and references are now written out rather than hidden behind Easter egg hyperlinks, so that they'll show up in print editions too.\n \nEaster egg links are kept around if they're interesting enough to be worth retaining, but not important enough to deserve a footnote; so there will still be some digital-only content, but the goal is for this to be pretty minor.\n \n\nA glossary has been added to the back of each book.\n\n \nOver the coming months, We'll be rolling out the other four volumes of Rationality: From AI to Zombies. To learn more, see the R:AZ landing page.\n\nThe post Announcing a new edition of \"Rationality: From AI to Zombies\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Announcing a new edition of “Rationality: From AI to Zombies”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=13", "id": "d81c6497613594f36ae542f1635a1783"} {"text": "2017 in review\n\n\nThis post reviews MIRI's activities in 2017, including research, recruiting, exposition, and fundraising activities.\n2017 was a big transitional year for MIRI, as we took on new research projects that have a much greater reliance on hands-on programming work and experimentation. We've continued these projects in 2018, and they're described more in our 2018 update. This meant a major focus on laying groundwork for much faster growth than we've had in the past, including setting up infrastructure and changing how we recruit to reach out to more people with engineering backgrounds.\n\nAt the same time, 2017 was our best year to date on fundraising, as we saw a significant increase in support both from the Open Philanthropy Project and from the cryptocurrency community, which responded to the crypto boom with a great deal of generosity toward us. This put us in an excellent position to move ahead with our plans with confidence, and to focus more of our effort on technical research and growth.\nThe review this year is coming out far later than usual, for which I apologize. One of the main reasons for this is that I felt that a catalogue of our 2017 activities would be much less informative if I couldn't cite our 2018 update, which explains a lot of the reasoning behind our new work and how the things we're doing relate to each other. I apologize for any inconvenience this might have caused people trying to track what MIRI's been up to. I plan to have our next annual review out much earlier, in the first quarter of 2019.\n \n2017 Research Progress\nAs described in our 2017 organizational update and elaborated on in much more detail in our recent 2018 update, 2017 saw a significant shift in where we're putting our research efforts. Although an expanded version of the Agent Foundations agenda continues to be a major focus at MIRI, we're also now tackling a new set of alignment research directions that lend themselves more to code experiments.\nSince early 2017, we've been increasingly adopting a policy of not disclosing many of our research results, which has meant that less of our new output is publicly available. 
Some of our work in 2017 (and 2018) has continued to be made public, however, including research on the AI Alignment Forum.\nIn 2017, Scott Garrabrant refactored our Agent Foundations agenda into four new categories: decision theory, embedded world-models, robust delegation, and subsystem alignment. Abram Demski and Scott have now co-written an introduction to these four problems, considered as different aspects of the larger problem of \"Embedded Agency.\"\nComparing our predictions (from March 20171) to our progress over 2017, and using a 1-5 scale where 1 means \"limited\" progress, 3 means \"modest\" progress, and 5 means \"sizable\" progress, we get the following retrospective take on our public-facing research progress:\nDecision theory\n\n2015 progress: 3. (Predicted: 3.)\n2016 progress: 3. (Predicted: 3.)\n2017 progress: 3. (Predicted: 3.)\n\nOur most significant 2017 results include posing and solving a version of the converse Lawvere problem;2 developing cooperative oracles; and improving our understanding of how causal decision theory relates to evidential decision theory (e.g., in Smoking Lesion Steelman).\nWe also released a number of introductory resources on decision theory, including \"Functional Decision Theory\" and Decisions Are For Making Bad Outcomes Inconsistent.\nEmbedded world-models\n\n2015 progress: 5. (Predicted: 3.)\n2016 progress: 5. (Predicted: 3.)\n2017 progress: 2. (Predicted: 2.)\n\nKey 2017 results in this area include the finding that logical inductors that can see each other dominate each other; and, as a corollary, logical inductor limits dominate each other.\nBeyond that, Scott Garrabrant reports that Hyperreal Brouwer shifted his thinking significantly with respect to probabilistic truth predicates, reflective oracles, and logical inductors. Additionally, Vanessa Kosoy's \"Forecasting Using Incomplete Models\" built on our previous work on logical inductors to create a cleaner (purely learning-theoretic) formalism for modeling complex environments, showing that the methods developed in \"Logical Induction\" are useful for applications in classical sequence prediction unrelated to logic.\nRobust delegation\n\n2015 progress: 3. (Predicted: 3.)\n2016 progress: 4. (Predicted: 3.)\n2017 progress: 4. (Predicted: 1.)\n\nWe made significant progress on the tiling problem, and also clarified our thinking about Goodhart's Law (see \"Goodhart Taxonomy\"). Other noteworthy work in this area includes Vanessa Kosoy's Delegative Inverse Reinforcement Learning framework, Abram Demski's articulation of \"stable pointers to value\" as a central desideratum for value loading, and Ryan Carey's \"Incorrigibility in the CIRL Framework.\"\nSubsystem alignment (new category)\nOne of the more significant research shifts at MIRI in 2017 was orienting toward the subsystem alignment problem at all, following discussion such as Eliezer Yudkowsky's Optimization Daemons, Paul Christiano's What Does the Universal Prior Actually Look Like?, and Jessica Taylor's Some Problems with Making Induction Benign. Our high-level thoughts about this problem can be found in Scott Garrabrant and Abram Demski's recent write-up.\n \n2017 also saw a reduction in our focus on the Alignment for Advanced Machine Learning Systems (AAMLS) agenda. 
Although we view these problems as highly important, and continue to revisit them regularly, we've found AAMLS work to be less obviously tractable than our other research agendas thus far.\nOn the whole, we continue (at year's end in 2018) to be very excited by the alignment avenues of attack that we started exploring in earnest in 2017, both with respect to embedded agency and with respect to our newer lines of research. \n \n2017 Research Support Activities\nAs discussed in our 2018 Update, the new lines of research we're tackling are much easier to hire for than has been the case for our Agent Foundations research:\n\nThis work seems to \"give out its own guideposts\" more than the Agent Foundations agenda does. While we used to require extremely close fit of our hires on research taste, we now think we have enough sense of the terrain that we can relax those requirements somewhat. We're still looking for hires who are scientifically innovative and who are fairly close on research taste, but our work is now much more scalable with the number of good mathematicians and engineers working at MIRI.\n\nFor that reason, one of our top priorities in 2017 (continuing into 2018) was to set MIRI up to be able to undergo major, sustained growth. We've been helped substantially in ramping up our recruitment by Blake Borgeson, a Nature-published computational biologist (and now a MIRI board member) who previously co-founded Recursion Pharmaceuticals and led its machine learning work as CTO.\nConcretely, in 2017 we:\n\nHired research staff including Sam Eisenstat, Abram Demski, Tsvi Benson-Tilsen, Jesse Liptrap, and Nick Tarleton.\nRan the AI Summer Fellows Program with CFAR.\nRan 3 research workshops on the Agent Foundations agenda, the AAMLS agenda, and Paul Christiano's research agenda. We also ran a large number of internal research retreats and other events.\nRan software engineer trials where participants spent the summer training up to become research engineers, resulting in a hire.\n\n \n2017 Conversations and Exposition\nOne of our 2017 priorities was to sync up and compare models more on the strategic landscape with other existential risk and AI safety groups. For snapshots of some of the discussions over the years, see Daniel Dewey's thoughts on HRAD and Nate's response; and more recently, Eliezer Yudkowsky and Paul Christiano's conversations about Paul's research proposals.\nWe also did a fair amount of public dialoguing, exposition, and outreach in 2017. On that front we:\n\nReleased Inadequate Equilibria, a book by Eliezer Yudkowsky on group- and system-level inefficiencies, and when individuals can hope to do better than the status quo.\nProduced research exposition: On Motivations for MIRI's Highly Reliable Agent Design Research; Ensuring Smarter-Than-Human Intelligence Has a Positive Outcome; Security Mindset and Ordinary Paranoia; Security Mindset and the Logistic Success Curve\nProduced strategy and forecasting exposition: Response to Cegłowski on Superintelligence; There's No Fire Alarm for Artificial General Intelligence; AlphaGo Zero and the Foom Debate; Why We Should Be Concerned About Artificial Superintelligence; A Reply to Francois Chollet on Intelligence Explosion\nReceived press coverage in The Huffington Post, Vanity Fair, Nautilus, and Wired. We were also interviewed for Mark O'Connell's To Be A Machine and Richard Clarke and R.P. 
Eddy's Warnings: Finding Cassandras to Stop Catastrophes.\nSpoke at the O'Reilly AI Conference, and on panels at the Beneficial AI conference and Effective Altruism Global (1, 2).\nPresented papers at TARK 2017, FEW 2017, and the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, and published the Agent Foundations agenda in The Technological Singularity: Managing the Journey.\nParticipated in other events, including the \"Envisioning and Addressing Adverse AI Outcomes\" workshop at Arizona State and the UCLA Colloquium on Catastrophic and Existential Risk.\n\n \n2017 Finances\nFundraising\n2017 was by far our best fundraising year to date. We raised a total of $5,849,500, more than 2.5× what we raised in 2016.3 During our annual fundraiser, we also raised double our highest target. We are very grateful for this incredible show of support. This unexpected fundraising success enabled us to move forward with our growth plans with a lot more confidence, and boosted our recruiting efforts in a variety of ways.4\nThe large increase in funding we saw in 2017 was significantly driven by:\n\nA large influx of cryptocurrency contributions, which made up ~42% of our total contributions in 2017. The largest of these were:\n\n$1.01M in ETH from an anonymous donor.\n$764,970 in ETH from Vitalik Buterin, the inventor and co-founder of Ethereum.\n$367,575 in BTC from Christian Calderon.\n$295,899 in BTC from professional poker players Dan Smith, Tom Crowley and Martin Crowley as part of their Matching Challenge in partnership with Raising for Effective Giving.\n\n\nOther contributions including:\n\nA $1.25M grant disbursement from the Open Philanthropy Project, significantly increased from the $500k grant they awarded MIRI in 2016.\n$200k in grants from the Berkeley Existential Risk Initiative.\n\n\n\nAs the graph below shows, although our fundraising has increased year over year since 2014, 2017 looks very much like an outlier year relative to our previous growth rate. This was largely driven by the large influx of cryptocurrency contributions, but even excluding those contributions, we raised ~$3.4M, which is 1.5× what we raised in 2016.5\n\n\n(In this chart and those that follow, \"Unlapsed\" indicates contributions from past supporters who did not donate in the previous year.)\nWhile the largest contributions drove the overall trend, we saw growth in both the number of contributors and amount contributed across all contributor sizes.\n\n\n\n\nIn 2017 we received contributions from 745 unique contributors, 38% more than in 2016 and nearly as many as in 2014 when we participated in SVGives.\nSupport from international contributors increased from 20% in 2016 to 42% in 2017. This increase was largely driven by the $1.01M ETH donation, but support still increased from 20% to 25% if we ignore this donation. Starting in late 2016, we've been working hard to find ways for our international supporters to be able to contribute in a tax-advantaged manner. I expect this percentage to substantially increase in 2018 due to those efforts.6\n\n\nSpending\nIn our 2016 fundraiser post, we projected that we'd spend $2–2.2M in 2017. Later in 2017, we revised our estimates to $2.1–2.5M (with a point estimate of $2.25M) along with a breakdown of our estimate across our major budget categories.\nOverall, our projections were fairly accurate. Total spending came in at just below $2.1M. 
The graph below compares our actual spending with our projections.7\n\n\nThe largest deviation from our projected spending came as a result of the researchers who had been working on our AAMLS agenda moving on (on good terms) to other projects.\n \nFor past annual reviews, see: 2016, 2015, 2014, and 2013; and for more recent information on what we've been up to following 2017, see our 2018 update and fundraiser posts.\n1. These predictions have been edited below to match Scott's terminology changes as described in 2018 research plans and predictions.\n2. Scott coined the name for this problem in his post: The Ubiquitous Converse Lawvere Problem.\n3. Note that amounts in this section may vary slightly from our audited financial statements, due to small differences between how we tracked donations internally, and how we are required to report them in our financial statements.\n4. See our 2018 fundraiser post for more information.\n5. This is similar to 2013, when 33% of our contributions that year came from a single Ripple donation from Jed McCaleb.\n6. A big thanks to Colm for all the work he's put into this; have a look at our Tax-Advantaged Donations page for more information.\n7. Our subsequent budget projections have used a simpler set of major budget categories. I've translated our 2017 budget projections into this categorization scheme, for our comparison with actual spending, in order to remain consistent with this new scheme.\nThe post 2017 in review appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "2017 in review", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=13", "id": "60ae53d7fd830048c86e90216d862b84"} {"text": "MIRI's newest recruit: Edward Kmett!\n\nProlific Haskell developer Edward Kmett has joined the MIRI team!\nEdward is perhaps best known for popularizing the use of lenses for functional programming. Lenses are a tool that provides a compositional vocabulary for accessing parts of larger structures and describing what you want to do with those parts. (A brief illustration appears below.)\nBeyond the lens library, Edward maintains a significant chunk of all libraries around the Haskell core libraries, covering everything from automatic differentiation (used heavily in deep learning, computer vision, and financial risk) to category theory (biased heavily towards organizing software) to graphics, SAT bindings, RCU schemes, tools for writing compilers, and more.\nInitial support for Edward joining MIRI is coming in the form of funding from long-time MIRI donor Jaan Tallinn. Increased donor enthusiasm has put MIRI in a great position to take on more engineers in general, and to consider highly competitive salaries for top-of-their-field engineers like Edward who are interested in working with us.\nAt MIRI, Edward is splitting his time between helping us grow our research team and diving in on a line of research he's been independently developing in the background for some time: building a new language and infrastructure to make it easier for people to write highly complex computer programs with known desirable properties. While we are big fans of his work, Edward's research is independent of the directions we described in our 2018 Update, and we don't consider it part of our core research focus.\nWe're hugely excited to have Edward at MIRI.
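For readers who have not run into lenses before, here is a minimal, hypothetical illustration of that compositional access pattern, using the lens library; it is not MIRI or Edward's code, just a sketch of the idiom:

    {-# LANGUAGE TemplateHaskell #-}
    module Main where

    import Control.Lens

    -- Two small nested records standing in for a larger structure.
    data Point  = Point  { _px :: Double, _py :: Double } deriving Show
    data Rocket = Rocket { _fuel :: Double, _pos :: Point } deriving Show
    makeLenses ''Point   -- generates lenses px and py
    makeLenses ''Rocket  -- generates lenses fuel and pos

    r0 :: Rocket
    r0 = Rocket 100 (Point 0 0)

    main :: IO ()
    main = do
      print (r0 ^. pos . py)       -- read a nested field: 0.0
      print (r0 & pos . py +~ 10)  -- return a copy with that field raised by 10

The point of the idiom is that pos . py is an ordinary first-class value: the same composed lens is used for reading and for updating, and it keeps composing as the data types grow.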
We expect to learn and gain a lot from our interactions, and we also hope that having Edward on the team will let him and other MIRI staff steal each other's best problem-solving heuristics and converge on research directions over time.\n\nAs described in our recent update, our new lines of research are heavy on the mix of theoretical rigor and hands-on engineering that Edward and the functional programming community are well-known for:\nIn common between all our new approaches is a focus on using high-level theoretical abstractions to enable coherent reasoning about the systems we build. A concrete implication of this is that we write lots of our code in Haskell, and are often thinking about our code through the lens of type theory.\nMIRI's nonprofit mission is to ensure that smarter-than-human AI systems, once developed, have a positive impact on the world. And we want to actually succeed in that goal, not just go through the motions of working on the problem.\nOur current model of the challenges involved says that the central sticking point for future engineers will likely be that the building blocks of AI just aren't sufficiently transparent. We think that someone, somewhere, needs to develop some new foundations and deep theory/insights, above and beyond what's likely to arise from refining or scaling up currently standard techniques.\nWe think that the skillset of functional programmers tends to be particularly well-suited to this kind of work, and we believe that our new research areas can absorb a large number of programmers and computer scientists. So we want this hiring announcement to double as a hiring pitch: consider joining our research effort!\nTo learn more about what it's like to work at MIRI and what kinds of candidates we're looking for, see our last big post, or shoot MIRI researcher Buck Shlegeris an email.\nThe post MIRI's newest recruit: Edward Kmett! appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "MIRI’s newest recruit: Edward Kmett!", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=13", "id": "ab41b6186321039a310c556291b1a88b"} {"text": "November 2018 Newsletter\n\n\nIn 2018 Update: Our New Research Directions, Nate Soares discusses MIRI's new research; our focus on \"deconfusion\"; some of the thinking behind our decision to default to nondisclosure on new results; and why more people than you might think should come join the MIRI team!\nAdditionally, our 2018 fundraiser begins today! To kick things off, we'll be participating in three separate matching campaigns, all focused around Giving Tuesday, Nov. 27; details in our fundraiser post.\nOther updates\n\nNew alignment posts: A Rationality Condition for CDT Is That It Equal EDT (1, 2); Standard ML Oracles vs. Counterfactual Ones; Addressing Three Problems with Counterfactual Corrigibility; When EDT=CDT, ADT Does Well. See also Paul Christiano's EDT vs. CDT (1, 2).\nEmbedded Agency: Scott Garrabrant and Abram Demski's full sequence is up! The posts serve as our new core introductory resource to MIRI's Agent Foundations research.\n\"Sometimes people ask me what math they should study in order to get into agent foundations. My first answer is that I have found the introductory class in every subfield to be helpful, but I have found the later classes to be much less helpful. 
My second answer is to learn enough math to understand all fixed point theorems….\" In Fixed Point Exercises, Scott provides exercises for getting into MIRI's Agent Foundations research.\nMIRI is seeking applicants for a new series of AI Risk for Computer Scientists workshops, aimed at technical people who want to think harder about AI alignment.\n\nNews and links\n\nVox unveils Future Perfect, a new section of their site focused on effective altruism.\n80,000 Hours interviews Paul Christiano, including discussion of MIRI/Paul disagreements and Paul's approach to AI alignment research.\n80,000 Hours surveys effective altruism orgs on their hiring needs.\nFrom Christiano, Buck Shlegeris, and Dario Amodei: Learning Complex Goals with Iterated Amplification (arXiv paper).\n\n\nThe post November 2018 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "November 2018 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=13", "id": "84ec652612df6e56cde46932c57a6076"} {"text": "MIRI's 2018 Fundraiser\n\nUpdate January 2019: MIRI's 2018 fundraiser is now concluded.\n \nFundraiser progress: $946,981 raised; 345 donors contributed; fundraiser concluded.\nTarget 1, $500,000 (Mainline target): completed. This target represents the difference between what we've raised so far this year, and our point estimate for business-as-usual spending next year.\nTarget 2, $1,200,000 (Accelerated growth target): in progress. This target represents what's needed for our funding streams to keep pace with our growth toward the upper end of our projections.\n\nMIRI is a math/CS research nonprofit with a mission of maximizing the potential humanitarian benefit of smarter-than-human artificial intelligence. You can learn more about the kind of work we do in \"Ensuring Smarter-Than-Human Intelligence Has A Positive Outcome\" and \"Embedded Agency.\"\nOur funding targets this year are based on a goal of raising enough in 2018 to match our \"business-as-usual\" budget next year. We view \"make enough each year to pay for the next year\" as a good heuristic for MIRI, given that we're a quickly growing nonprofit with a healthy level of reserves and a budget dominated by researcher salaries.\nWe focus on business-as-usual spending in order to factor out the (likely very large) cost of moving to new spaces in the next couple of years as we continue to grow, which introduces a high amount of variance to the model.1\nMy current model for our (business-as-usual, outlier-free) 2019 spending ranges from $4.4M to $5.5M, with a point estimate of $4.8M—up from $3.5M this year and $2.1M in 2017. The model takes as input estimated ranges for all our major spending categories, but the overwhelming majority of the variance comes from the number of staff we'll add to the team.2\nIn the mainline scenario, our 2019 budget breakdown looks roughly like this:\n\nIn this scenario, we currently have ~1.5 years of reserves on hand. Since we've raised ~$4.3M between Jan. 1 and Nov.
25,3 our two targets are:\n\nTarget 1 ($500k), representing the difference between what we've raised so far this year, and our point estimate for business-as-usual spending next year.\nTarget 2 ($1.2M), what's needed for our funding streams to keep pace with our growth toward the upper end of our projections.4\n\nBelow, we'll summarize what's new at MIRI and talk more about our room for more funding.\n \n1. Recent updates\nWe've released a string of new posts on our recent activities and strategy:\n\n2018 Update: Our New Research Directions discusses the new set of research directions we've been ramping up over the last two years, how they relate to our Agent Foundations research and our goal of \"deconfusion,\" and why we've adopted a \"nondisclosed-by-default\" policy for this research.\n\n\nEmbedded Agency describes our Agent Foundations research agenda as different angles of attack on a single central difficulty: we don't know how to characterize good reasoning and decision-making for agents embedded in their environment.\n\n\nSummer MIRI Updates discusses new hires, new donations and grants, and new programs we've been running to recruit researcher staff and grow the total pool of AI alignment researchers.\n\n\n\nAnd, added Nov. 28:\n\nMIRI's Newest Recruit: Edward Kmett announces our latest hire: noted Haskell developer Edward Kmett, who popularized the use of lenses for functional programming and maintains a large number of the libraries around the Haskell core libraries.\n\n\n2017 in Review recaps our activities and donors' support from last year.\n\nOur 2018 Update also discusses the much wider pool of engineers and computer scientists we're now trying to recruit, and the much larger total number of people we're trying to add to the team in the near future:\nWe're seeking anyone who can cause our \"become less confused about AI alignment\" work to go faster.\nIn practice, this means: people who natively think in math or code, who take seriously the problem of becoming less confused about AI alignment (quickly!), and who are generally capable. In particular, we're looking for high-end Google programmer levels of capability; you don't need a 1-in-a-million test score or a halo of destiny. You also don't need a PhD, explicit ML background, or even prior research experience.\nIf the above might be you, and the idea of working at MIRI appeals to you, I suggest sending in a job application or shooting your questions at Buck Shlegeris, a researcher at MIRI who's been helping with our recruiting.\nWe're also hiring for Agent Foundations roles, though at a much slower pace. For those roles, we recommend interacting with us and other people hammering on AI alignment problems on Less Wrong and the AI Alignment Forum, or at local MIRIx groups. We then generally hire Agent Foundations researchers from people we've gotten to know through the above channels, visits, and events like the AI Summer Fellows program.\nA great place to start developing intuitions for these problems is Scott Garrabrant's recently released fixed point exercises, or various resources on the MIRI Research Guide. Some examples of recent public work on Agent Foundations / embedded agency include:\n\nSam Eisenstat's untrollable prior, explained in illustrated form by Abram Demski, shows that there is a Bayesian solution to one of the basic problems which motivated the development of non-Bayesian logical uncertainty tools (culminating in logical induction). 
This informs our picture of what's possible, and may lead to further progress in the direction of Bayesian logical uncertainty.\nScott Garrabrant outlines a taxonomy of ways that Goodhart's law can manifest.\nSam's logical inductor tiling result solves a version of the tiling problem for logically uncertain agents.\nPrisoners' Dilemma with Costs to Modeling: A modified version of open-source prisoners' dilemmas in which agents must pay resources in order to model each other.\nLogical Inductors Converge to Correlated Equilibria (Kinda): A game-theoretic analysis of logical inductors.\nNew results in Asymptotic Decision Theory and When EDT=CDT, ADT Does Well represent incremental progress on understanding what's possible with respect to learning the right counterfactuals.\n\nThese results are relatively small, compared to Nate's forthcoming tiling agents paper or Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant's forthcoming \"The Inner Alignment Problem.\" However, they should give a good sense of some of the recent directions we've been pushing in with our Agent Foundations work.\n \n2. Room for more funding\nAs noted above, the biggest sources of uncertainty in our 2018 budget estimates are about how many research staff we hire, and how much we spend on moving to new offices.\nIn our 2017 fundraiser, we set a goal of hiring 10 new research staff in 2018–2019. So far, we're up two research staff, with enough promising candidates in the pipeline that I still consider 10 a doable (though ambitious) goal.\nFollowing the amazing show of support we received from donors last year (and continuing into 2018), we had significantly more funds than we anticipated, and we found more ways to usefully spend it than we expected. In particular, we've been able to translate the \"bonus\" support we received in 2017 into broadening the scope of our recruiting efforts. As a consequence, our 2018 spending, which will come in at around $3.5M, actually matches the point estimate I gave in 2017 for our 2019 budget, rather than my prediction for 2018—a large step up from what I predicted, and an even larger step from last year's budget of $2.1M.5\nOur two fundraiser goals, Target 1 ($500k) and Target 2 ($1.2M), correspond to the budgetary needs we can easily predict and account for. Our 2018 went much better as a result of donors' greatly increased support in 2017, and it's possible that we're in a similar situation today, though I'm not confident that this is the case.\nConcretely, some ways that our decision-making changed as a result of the amazing support we saw were:\n\nWe spent more on running all-expenses-paid AI Risk for Computer Scientists workshops. We ran the first such workshop in February, and saw a lot of value in it as a venue for people with relevant technical experience to start reasoning more about AI risk. Since then, we've run another seven events in 2018, with more planned for 2019.\nAs hoped, these workshops have also generated interest in joining MIRI and other AI safety research teams. This has resulted in one full-time MIRI research staff hire, and on the order of ten candidates with good prospects of joining full-time in 2019, including two recent interns.\n\n\nWe've been more consistently willing and able to pay competitive salaries for top technical talent. A special case of this is hiring relatively senior research staff like Edward Kmett.\n\n\nWe raised salaries for some existing staff members. 
We have a very committed staff, and some staff at MIRI had previously asked for fairly low salaries in order to help keep MIRI's organizational costs lower. The inconvenience this caused staff members doesn't make much sense at our current organizational size, both in terms of our team's productivity and in terms of their well-being.\n\n\nWe ran a summer research internship program, on a larger scale than we otherwise would have.\n\n\nAs we've considered options for new office space that can accommodate our expansion, we've been able to filter less on price relative to fit. We've also been able to spend more on renovations that we expect to produce a working environment where our researchers can do their work with fewer distractions or inconveniences.\n\n\n\n2018 brought a positive update about our ability to cost-effectively convert surprise funding increases into (what look from our perspective like) very high-value actions, and the above list hopefully helps clarify what that can look like in practice. We can't promise to be able to repeat that in 2019, given an overshoot in this fundraiser, but we have reason for optimism.\n \n1. That is, our business-as-usual model tries to remove one-time outlier costs so that it's easier to see what \"the new normal\" is in MIRI's spending and think about our long-term growth curve.\n2. This estimation is fairly rough and uncertain. One weakness of this model is that it treats the inputs as though they were independent, which is not always the case. I also didn't try to account for the fact that we're likely to spend more in worlds where we see more fundraising success.\nHowever, a sensitivity analysis on the final output showed that the overwhelming majority of the uncertainty in this estimate comes from how many research staff we hire, which matches my expectations and suggests that the model is doing a decent job of tracking the intuitively important variables. I also ended up with similar targets when I ran the numbers on our funding status in other ways and when I considered different funding scenarios.\n3. This excludes earmarked funding for AI Impacts, an independent research group that's institutionally housed at MIRI.\n4. We could also think in terms of a \"Target 0.5\" of $100k in order to hit the bottom of the range, $4.4M. However, I worried that a $100k target would be misleading given that we're thinking in terms of a $4.4–5.5M budget.\n5. Quoting our 2017 fundraiser post: \"If we succeed, our point estimate is that our 2018 budget will be $2.8M and our 2019 budget will be $3.5M, up from roughly $1.9M in 2017.\" The $1.9M figure was an estimate from before 2017 had ended. We've now revised this figure to $2.1M, which happens to bring it in line with our 2016 point estimate for how much we'd spend in 2017.\nThe post MIRI's 2018 Fundraiser appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "MIRI’s 2018 Fundraiser", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=14", "id": "245a482315637592936236ee2396b66a"} {"text": "2018 Update: Our New Research Directions\n\n\nFor many years, MIRI's goal has been to resolve enough fundamental confusions around alignment and intelligence to enable humanity to think clearly about technical AI safety risks—and to do this before this technology advances to the point of potential catastrophe.
This goal has always seemed to us to be difficult, but possible.1\nLast year, we said that we were beginning a new research program aimed at this goal.2 Here, we're going to provide background on how we're thinking about this new set of research directions, lay out some of the thinking behind our recent decision to do less default sharing of our research, and make the case for interested software engineers to join our team and help push our understanding forward.\n\n \nContents:\n\nOur research\nWhy deconfusion is so important to us\nNondisclosed-by-default research, and how this policy fits into our overall strategy\nJoining the MIRI team\n\n \n1. Our research\nIn 2014, MIRI published its first research agenda, \"Agent Foundations for Aligning Machine Intelligence with Human Interests.\" Since then, one of our main research priorities has been to develop a better conceptual understanding of embedded agency: formally characterizing reasoning systems that lack a crisp agent/environment boundary, are smaller than their environment, must reason about themselves, and risk having parts that are working at cross purposes. These research problems continue to be a major focus at MIRI, and are being studied in parallel with our new research directions (which I'll be focusing on more below).3\nFrom our perspective, the point of working on these kinds of problems isn't that solutions directly tell us how to build well-aligned AGI systems. Instead, the point is to resolve confusions we have around ideas like \"alignment\" and \"AGI,\" so that future AGI developers have an unobstructed view of the problem. Eliezer illustrates this idea in \"The Rocket Alignment Problem,\" which imagines a world where humanity tries to land on the Moon before it understands Newtonian mechanics or calculus.\nRecently, some MIRI researchers developed new research directions that seem to enable more scalable progress towards resolving these fundamental confusions. Specifically, the progress is more scalable in researcher hours—it's now the case that we believe excellent engineers coming from a variety of backgrounds can have their work efficiently converted into research progress at MIRI—where previously, we only knew how to speed our research progress with a (relatively atypical) breed of mathematician.\nAt the same time, we've seen some significant financial success over the past year—not so much that funding is no longer a constraint at all, but enough to pursue our research agenda from new and different directions, in addition to the old.\nFurthermore, our view implies that haste is essential. We see AGI as a likely cause of existential catastrophes, especially if it's developed with relatively brute-force-reliant, difficult-to-interpret techniques; and although we're quite uncertain about when humanity's collective deadline will come to pass, many of us are somewhat alarmed by the speed of recent machine learning progress.\nFor these reasons, we're eager to locate the right people quickly and offer them work on these new approaches; and with this kind of help, it strikes us as very possible that we can resolve enough fundamental confusion in time to port the understanding to those who will need it before AGI is built and deployed.\n \nComparing our new research directions and Agent Foundations\nOur new research directions involve building software systems that we can use to test our intuitions, and building infrastructure that allows us to rapidly iterate this process. 
Like the Agent Foundations agenda, our new research directions continue to focus on \"deconfusion,\" rather than on, e.g., trying to improve robustness metrics of current systems—our sense being that even if we make major strides on this kind of robustness work, an AGI system built on principles similar to today's systems would still be too opaque to align in practice.\n\nIn a sense, you can think of our new research as tackling the same sort of problem that we've always been attacking, but from new angles. In other words, if you aren't excited about logical inductors or functional decision theory, you probably wouldn't be excited by our new work either. Conversely, if you already have the sense that becoming less confused is a sane way to approach AI alignment, and you've been wanting to see those kinds of confusions attacked with software and experimentation in a manner that yields theoretical satisfaction, then you may well want to work at MIRI. (I'll have more to say about this below.)\nOur new research directions stem from some distinct ideas had by Benya Fallenstein, Eliezer Yudkowsky, and myself (Nate Soares). Some high-level themes of these new directions include:\n \n\nSeeking entirely new low-level foundations for optimization, designed for transparency and alignability from the get-go, as an alternative to gradient-descent-style machine learning foundations.\n \nNote that this does not entail trying to beat modern ML techniques on computational efficiency, speed of development, ease of deployment, or other such properties. However, it does mean developing new foundations for optimization that are broadly applicable in the same way, and for some of the same reasons, that gradient descent scales to be broadly applicable, while possessing significantly better alignment characteristics.\n \nWe're aware that there are many ways to attempt this that are shallow, foolish, or otherwise doomed; and in spite of this, we believe our own research avenues have a shot.\n \n\nEndeavoring to figure out parts of cognition that can be very transparent as cognition, without being GOFAI or completely disengaged from subsymbolic cognition.\n \n\nExperimenting with some specific alignment problems that are deeper than problems that have previously been put into computational environments.\n \n\n\n \nIn common between all our new approaches is a focus on using high-level theoretical abstractions to enable coherent reasoning about the systems we build. A concrete implication of this is that we write lots of our code in Haskell, and are often thinking about our code through the lens of type theory.\nWe aren't going to distribute the technical details of this work anytime soon, in keeping with the recent MIRI policy changes discussed below. However, we have a good deal to say about this research on the meta level.\nWe are excited about these research directions, both for their present properties and for the way they seem to be developing. When Benya began the predecessor of this work ~3 years ago, we didn't know whether her intuitions would pan out. Today, having watched the pattern by which research avenues in these spaces have opened up new exciting-feeling lines of inquiry, none of us expect this research to die soon, and some of us are hopeful that this work may eventually open pathways to attacking the entire list of basic alignment issues.4\nWe are similarly excited by the extent to which useful cross-connections have arisen between initially-unrelated-looking strands of our research. 
During a period where I was focusing primarily on new lines of research, for example, I stumbled across a solution to the original version of the tiling agents problem from the Agent Foundations agenda.5\nThis work seems to \"give out its own guideposts\" more than the Agent Foundations agenda does. While we used to require extremely close fit of our hires on research taste, we now think we have enough sense of the terrain that we can relax those requirements somewhat. We're still looking for hires who are scientifically innovative and who are fairly close on research taste, but our work is now much more scalable with the number of good mathematicians and engineers working at MIRI.\nWith all of that said, and despite how promising the last couple of years have seemed to us, this is still \"blue sky\" research in the sense that we'd guess most outside MIRI would still regard it as of academic interest but of no practical interest. The more principled/coherent/alignable optimization algorithms we are investigating are not going to sort cat pictures from non-cat pictures anytime soon.\nThe thing that generally excites us about research results is the extent to which they grant us \"deconfusion\" in the sense described in the next section, not the ML/engineering power they directly enable. This \"deconfusion\" that they allegedly reflect must, for the moment, be discerned mostly via abstract arguments supported only weakly by concrete \"look what this understanding lets us do\" demos. Many of us at MIRI regard our work as being of strong practical relevance nonetheless—but that is because we have long-term models of what sorts of short-term feats indicate progress, and because we view becoming less confused about alignment as having a strong practical relevance to humanity's future, for reasons that I'll sketch out next.\n \n2. Why deconfusion is so important to us\n \nWhat we mean by deconfusion\nQuoting Anna Salamon, the president of the Center for Applied Rationality and a MIRI board member:\n \n\nIf I didn't have the concept of deconfusion, MIRI's efforts would strike me as mostly inane. MIRI continues to regard its own work as significant for human survival, despite the fact that many larger and richer organizations are now talking about AI safety. It's a group that got all excited about Logical Induction (and tried paranoidly to make sure Logical Induction \"wasn't dangerous\" before releasing it)—even though Logical Induction had only a moderate amount of math and no practical engineering at all (and did something similar with Timeless Decision Theory, to pick an even more extreme example). It's a group that continues to stare mostly at basic concepts, sitting reclusively off by itself, while mostly leaving questions of politics, outreach, and how much influence the AI safety community has, to others.\nHowever, I do have the concept of deconfusion. 
And when I look at MIRI's activities through that lens, MIRI seems to me much more like \"oh, yes, good, someone is taking a straight shot at what looks like the critical thing\" and \"they seem to have a fighting chance\" and \"gosh, I hope they (or someone somehow) solve many many more confusions before the deadline, because without such progress, humanity sure seems kinda sunk.\"\n\n \nI agree that MIRI's perspective and strategy don't make much sense without the idea I'm calling \"deconfusion.\" As someone reading a MIRI strategy update, you probably already partly have this concept, but I've found that it's not trivial to transmit the full idea, so I ask your patience as I try to put it into words.\nBy deconfusion, I mean something like \"making it so that you can think about a given topic without continuously accidentally spouting nonsense.\"\nTo give a concrete example, my thoughts about infinity as a 10-year-old were made of rearranged confusion rather than of anything coherent, as were the thoughts of even the best mathematicians from 1700. \"How can 8 plus infinity still be infinity? What happens if we subtract infinity from both sides of the equation?\" But my thoughts about infinity as a 20-year-old were not similarly confused, because, by then, I'd been exposed to the more coherent concepts that later mathematicians labored to produce. I wasn't as smart or as good of a mathematician as Georg Cantor or the best mathematicians from 1700; but deconfusion can be transferred between people; and this transfer can spread the ability to think actually coherent thoughts.\nIn 1998, conversations about AI risk and technological singularity scenarios often went in circles in a funny sort of way. People who are serious thinkers about the topic today, including my colleagues Eliezer and Anna, said things that today sound confused. (When I say \"things that sound confused,\" I have in mind things like \"isn't intelligence an incoherent concept,\" \"but the economy's already superintelligent,\" \"if a superhuman AI is smart enough that it could kill us, it'll also be smart enough to see that that isn't what the good thing to do is, so we'll be fine,\" \"we're Turing-complete, so it's impossible to have something dangerously smarter than us, because Turing-complete computations can emulate anything,\" and \"anyhow, we could just unplug it.\") Today, these conversations are different. In between, folks worked to make themselves and others less fundamentally confused about these topics—so that today, a 14-year-old who wants to skip to the end of all that incoherence can just pick up a copy of Nick Bostrom's Superintelligence.6\nOf note is the fact that the \"take AI risk and technological singularities seriously\" meme started to spread to the larger population of ML scientists only after its main proponents attained sufficient deconfusion. If you were living in 1998 with a strong intuitive sense that AI risk and technological singularities should be taken seriously, but you still possessed a host of confusion that caused you to occasionally spout nonsense as you struggled to put things into words in the face of various confused objections, then evangelism would do you little good among serious thinkers—perhaps because the respectable scientists and engineers in the field can smell nonsense, and can tell (correctly!) that your concepts are still incoherent. 
It's by accumulating deconfusion until your concepts cohere and your arguments become well-formed that your ideas can become memetically fit and spread among scientists—and can serve as foundations for future work by those same scientists.\nInterestingly, the history of science is in fact full of instances in which individual researchers possessed a mostly-correct body of intuitions for a long time, and then eventually those intuitions were formalized, corrected, made precise, and transferred between people. Faraday discovered a wide array of electromagnetic phenomena, guided by an intuition that he wasn't able to formalize or transmit except through hundreds of pages of detailed laboratory notes and diagrams; Maxwell later invented the language to describe electromagnetism formally by reading Faraday's work, and expressed those hundreds of pages of intuitions in three lines.\nAn even more striking example is the case of Archimedes, who intuited his way to the ability to do useful work in both integral and differential calculus thousands of years before calculus became a simple formal thing that could be passed between people.\nIn both cases, it was the eventual formalization of those intuitions—and the linked ability of these intuitions to be passed accurately between many researchers—that allowed the fields to begin building properly and quickly.7\n \nWhy deconfusion (on our view) is highly relevant to AI accident risk\nIf human beings eventually build smarter-than-human AI, and if smarter-than-human AI is as powerful and hazardous as we currently expect it to be, then AI will one day bring enormous forces of optimization to bear.8 We believe that when this occurs, those enormous forces need to be brought to bear on real-world problems and subproblems deliberately, in a context where they're theoretically well-understood. The larger those forces are, the more precision is called for when researchers aim them at cognitive problems.\nWe suspect that today's concepts about things like \"optimization\" and \"aiming\" are incapable of supporting the necessary precision, even if wielded by researchers who care a lot about safety. Part of why I think this is that if you pushed me to explain what I mean by \"optimization\" and \"aiming,\" I'd need to be careful to avoid spouting nonsense—which indicates that I'm still confused somewhere around here.\nA worrying fact about this situation is that, as best I can tell, humanity doesn't need coherent versions of these concepts to hill-climb its way to AGI. Evolution hill-climbed that distance, and evolution had no model of what it was doing. But as evolution applied massive optimization pressure to genomes, those genomes started coding for brains that internally optimized for targets that merely correlated with genetic fitness. 
Humans find ever-smarter ways to satisfy our own goals (video games, ice cream, birth control…) even when this runs directly counter to the selection criterion that gave rise to us: \"propagate your genes into the next generation.\"\nIf we are to avoid a similar fate—one where we attain AGI via huge amounts of gradient descent and other optimization techniques, only to find that the resulting system has internal optimization targets that are very different from the targets we externally optimized it to be adept at pursuing—then we must be more careful.\nAs AI researchers explore the space of optimizers, what will it take to ensure that the first highly capable optimizers that researchers find are optimizers they know how to aim at chosen tasks? I'm not sure, because I'm still in some sense confused about the question. I can tell you vaguely how the problem relates to convergent instrumental incentives, and I can observe various reasons why we shouldn't expect the strategy \"train a large cognitive system to optimize for X\" to actually result in a system that internally optimizes for X, but there are still wide swaths of the question where I can't say much without saying nonsense.\nAs an example, AI systems like Deep Blue and AlphaGo cannot reasonably be said to be reasoning about the whole world. They're reasoning about some much simpler abstract platonic environment, such as a Go board. There's an intuitive sense in which we don't need to worry about these systems taking over the world, for this reason (among others), even in the world where those systems are run on implausibly large amounts of compute.\nVaguely speaking, there's a sense in which some alignment difficulties don't arise until an AI system is \"reasoning about the real world.\" But what does that mean? It doesn't seem to mean \"the space of possibilities that the system considers literally concretely includes reality itself.\" Ancient humans did perfectly good general reasoning even while utterly lacking the concept that the universe can be described by specific physical equations.\nIt looks like it must mean something more like \"the system is building internal models that, in some sense, are little representations of the whole of reality.\" But what counts as a \"little representation of reality,\" and why do a hunter-gatherer's confused thoughts about a spirit-riddled forest count while a chessboard doesn't? All these questions are likely confused; my goal here is not to name coherent questions, but to gesture in the direction of a confusion that prevents me from precisely naming a portion of the alignment problem.\nOr, to put it briefly: precisely naming a problem is half the battle, and we are currently confused about how to precisely name the alignment problem.\nFor an alternative attempt to name this concept, refer to Eliezer's rocket alignment analogy. For a further discussion of some of the reasons today's concepts seem inadequate for describing an aligned intelligence with sufficient precision, see Scott and Abram's recent write-up. (Or come discuss with us in person, at an \"AI Risk for Computer Scientists\" workshop.)\n \nWhy this research may be tractable here and now\nMany types of research become far easier at particular places and times. It seems to me that for the work of becoming less confused about AI alignment, MIRI in 2018 (and for a good number of years to come, I think) is one of those places and times.\nWhy? 
One point is that MIRI has some history of success at deconfusion-style research (according to me, at least), and MIRI's researchers are beneficiaries of the local research traditions that grew up in dialog with that work. Among the bits of conceptual progress that MIRI contributed to are:\n \n\ntoday's understanding that AI accident risk is important;\n \n\ntoday's understanding that an aligned AI is at least a theoretical possibility (the Gandhi argument that consequentialist preferences are reflectively stable by default, etc.), and that it's worth investing in basic research toward the possibility of such an AI in advance;\n \n\nearly statements of subproblems like corrigibility, the Löbian obstacle, and subsystem alignment, including descriptions of various problems in the Agent Foundations research agenda;\n \n\ntimeless decision theory and its successors (updateless decision theory and functional decision theory);\n \n\nlogical induction;\n \n\nreflective oracles; and\n \n\nmany smaller results in the vicinity of the Agent Foundations agenda, notably robust cooperation in the one-shot prisoner's dilemma, universal inductors, and model polymorphism, HOL-in-HOL, and more recent progress on Vingean reflection.\n\n \nLogical inductors, as an example, give us at least a clue about why we're apt to informally use words like \"probably\" in mathematical reasoning. It's not a full answer to \"how does probabilistic reasoning about mathematical facts work?\", but it does feel like an interesting hint—which is relevant to thinking about how \"real-world\" AI reasoning could possibly work, because AI systems might well also use probabilistic reasoning in mathematics.\nA second point is that, if there is something that unites most folks at MIRI besides a drive to increase the odds of human survival, it is probably a taste for getting our understanding of the foundations of the universe right. Many of us came in with this taste—for example, many of us have backgrounds in physics (and fundamental physics in particular), and those of us with a background in programming tend to have an interest in things like type theory, formal logic, and/or probability theory.\nA third point, as noted above, is that we are excited about our current bodies of research intuitions, and about how they seem increasingly transferable/cross-applicable/concretizable over time.\nFinally, I observe that the field of AI at large is currently highly vitalized, largely by the deep learning revolution and various other advances in machine learning. We are not particularly focused on deep neural networks ourselves, but being in contact with a vibrant and exciting practical field is the sort of thing that tends to spark ideas. 2018 really seems like an unusually easy time to be seeking a theoretical science of AI alignment, in dialog with practical AI methods that are beginning to work.\n \n3. 
Nondisclosed-by-default research, and how this policy fits into our overall strategy\nMIRI recently decided to make most of its research \"nondisclosed-by-default,\" by which we mean that going forward, most results discovered within MIRI will remain internal-only unless there is an explicit decision to release those results, based usually on a specific anticipated safety upside from their release.\nI'd like to try to share some sense of why we chose this policy—especially because this policy may prove disappointing or inconvenient for many people interested in AI safety as a research area.9 MIRI is a nonprofit, and there's a natural default assumption that our mechanism for good is to regularly publish new ideas and insights. But we don't think this is currently the right choice for serving our nonprofit mission.\nThe short version of why we chose this policy is:\n \n\nwe're in a hurry to decrease existential risk;\n \n\nin the same way that Faraday's journals aren't nearly as useful as Maxwell's equations, and in the same way that logical induction isn't all that useful to the average modern ML researcher, we don't think it would be that useful to try to share lots of half-confused thoughts with a wider set of people;\n \n\nwe believe we can have more of the critical insights faster if we stay focused on making new research progress rather than on exposition, and if we aren't feeling pressure to justify our intuitions to wide audiences;\n \n\nwe think it's not unreasonable to be anxious about whether deconfusion-style insights could lead to capabilities insights, and have empirically observed we can think more freely when we don't have to worry about this; and\n \n\neven when we conclude that those concerns were paranoid or silly upon reflection, we benefited from moving the cognitive work of evaluating those fears from \"before internally sharing insights\" to \"before broadly distributing those insights,\" which is enabled by this policy.\n\n \nThe somewhat longer version is below.\nI'll caveat that in what follows I'm attempting to convey what I believe, but not necessarily why—I am not trying to give an argument that would cause any rational person to take the same strategy in my position; I am shooting only for the more modest goal of conveying how I myself am thinking about the decision.\nI'll begin by saying a few words about how our research fits into our overall strategy, then discuss the pros and cons of this policy.\n \nWhen we say we're doing AI alignment research, we really genuinely don't mean outreach\nAt present, MIRI's aim is to make research progress on the alignment problem. Our focus isn't on shifting the field of ML toward taking AGI safety more seriously, nor on any other form of influence, persuasion, or field-building. We are simply and only aiming to directly make research progress on the core problems of alignment.\nThis choice may seem surprising to some readers—field-building and other forms of outreach can obviously have hugely beneficial effects, and throughout MIRI's history, we've been much more outreach-oriented than the typical math research group.\nOur impression is indeed that well-targeted outreach efforts can be highly valuable. However, attempts at outreach/influence/field-building seem to us to currently constitute a large majority of worldwide research activity that's motivated by AGI safety concerns,10 such that MIRI's time is better spent on taking a straight shot at the core research problems. 
Further, we think our own comparative advantage lies here, and not in outreach work.11\nMy beliefs here are connected to my beliefs about the mechanics of deconfusion described above. In particular, I believe that the alignment problem might start seeming significantly easier once it can be precisely named, and I believe that precisely naming this sort of problem is likely to be a serial challenge—in the sense that some deconfusions cannot be attained until other deconfusions have matured. Additionally, my read on history says that deconfusions regularly come from relatively small communities thinking the right kinds of thoughts (as in the case of Faraday and Maxwell), and that such deconfusions can spread rapidly as soon as the surrounding concepts become coherent (as exemplified by Bostrom's Superintelligence). I conclude from all this that trying to influence the wider field isn't the best place to spend our own efforts.\n \nIt is difficult to predict whether successful deconfusion work could spark capability advances\nWe think that most of MIRI's expected impact comes from worlds in which our deconfusion work eventually succeeds—that is, worlds where our research eventually leads to a principled understanding of alignable optimization that can be communicated to AI researchers, more akin to a modern understanding of calculus and differential equations than to Faraday's notebooks (with the caveat that most of us aren't expecting solutions to the alignment problem to compress nearly so well as calculus or Maxwell's equations, but I digress).\nOne pretty plausible way this could go is that our deconfusion work makes alignment possible, without much changing the set of available pathways to AGI.12 To pick a trivial analogy illustrating this sort of world, consider interval arithmetic as compared to the usual way of doing floating point operations. In interval arithmetic, an operation like sqrt takes two floating point numbers, a lower and an upper bound, and returns a lower and an upper bound on the result. Figuring out how to do interval arithmetic requires some careful thinking about the error of floating-point computations, and it certainly won't speed those computations up; the only reason to use it is to ensure that the error incurred in a floating point operation isn't larger than the user assumed. If you discover interval arithmetic, you're at no risk of speeding up modern matrix multiplications, despite the fact that you really have found a new way of doing arithmetic that has certain desirable properties that normal floating-point arithmetic lacks.\nIn worlds where deconfusing ourselves about alignment leads us primarily to insights similar (on this axis) to interval arithmetic, it would be best for MIRI to distribute its research as widely as possible, especially once it has reached a stage where it is comparatively easy to communicate, in order to encourage AI capabilities researchers to adopt and build upon it.\nHowever, it is also plausible to us that a successful theory of alignable optimization may itself spark new research directions in AI capabilities. For an analogy, consider the progression from classical probability theory and statistics to a modern deep neural net classifying images. 
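Before developing that second analogy, here is a quick concrete sketch of the interval-arithmetic picture (an illustrative toy in Haskell, not MIRI code; it also ignores rounding modes, whereas a sound implementation would round lower bounds down and upper bounds up):

    -- A value known only to lie somewhere in [lower, upper].
    data Interval = Interval { lower :: Double, upper :: Double }
      deriving Show

    -- sqrt is monotone on non-negative inputs, so the output bounds
    -- are just sqrt of the input bounds.
    isqrt :: Interval -> Maybe Interval
    isqrt (Interval a b)
      | a < 0     = Nothing  -- undefined below zero
      | otherwise = Just (Interval (sqrt a) (sqrt b))

    -- Worst-case bounds add componentwise.
    iadd :: Interval -> Interval -> Interval
    iadd (Interval a b) (Interval c d) = Interval (a + c) (b + d)

The interface has the same shape as ordinary arithmetic, it is strictly more work to run, and its only payoff is an explicit guarantee about error—which is exactly why discovering it poses no risk of speeding anything up. With that picture in hand, back to the probability-theory analogy: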
Probability theory alone does not let you classify cat pictures, and it is possible to understand and implement an image classification network without thinking much about probability theory; but probability theory and statistics were central to the way machine learning was actually discovered, and still underlie how modern deep learning researchers think about their algorithms.\nIn worlds where deconfusing ourselves about alignment leads to insights similar (on this axis) to probability theory, it is much less clear whether distributing our results widely would have a positive impact. It goes without saying that we want to have a positive impact (or, at the very least, a neutral impact), even in those sorts of worlds.\nThe latter scenario is relatively less important in worlds where AGI timelines are short. If current deep learning research is already on the brink of AGI, for example, then it becomes less plausible that the results of MIRI's deconfusion work could become a relevant influence on AI capabilities research, and most of the potential impact of our work would come from its direct applicability to deep-learning-based systems. While many of us at MIRI believe that short timelines are at least plausible, there is significant uncertainty and disagreement about timelines inside MIRI, and I would not feel comfortable committing to a course of action that is safe only in worlds where timelines are short.\nIn sum, if we continue to make progress on, and eventually substantially succeed at, figuring out the actual \"cleave nature at its joints\" concepts that let us think coherently about alignment, I find it quite plausible that those same concepts may also enable capabilities boosts (especially in worlds where there's a lot of time for those concepts to be pushed in capabilities-facing directions). There is certainly strong historical precedent for deep scientific insights yielding unexpected practical applications.\nBy the nature of deconfusion work, it seems very difficult to predict in advance which other ideas a given insight may unlock. These considerations seem to us to call for conservatism and delay on information releases—potentially very long delays, as it can take quite a bit of time to figure out where a given insight leads.\n \nWe need our researchers to not have walls within their own heads\nWe take our research seriously at MIRI. This means that, for many of us, we know in the back of our minds that deconfusion-style research could sometimes (often in an unpredictable fashion) open up pathways that can lead to capabilities insights in the manner discussed above. As a consequence, many MIRI researchers flinch away from having insights when they haven't spent a lot of time thinking about the potential capabilities implications of those insights down the line—and they usually haven't spent that time, because it requires a bunch of cognitive overhead. This effect has been evidenced in reports from researchers, myself included, and we've empirically observed that when we set up \"closed\" research retreats or research rooms,13 researchers report that they can think more freely, that their brainstorming sessions extend further and wider, and so on.\nThis sort of inhibition seems quite bad for research progress. 
It is not a small area that our researchers were (un- or semi-consciously) holding back from; it's a reasonably wide swath that may well include most of the deep ideas or insights we're looking for.\nAt the same time, this kind of caution is an unavoidable consequence of doing deconfusion research in public, since it's very hard to know what ideas may follow five or ten years after a given insight. AI alignment work and AI capabilities work are close enough neighbors that many insights in the vicinity of AI alignment are \"potentially capabilities-relevant until proven harmless,\" both for reasons discussed above and from the perspective of the conservative security mindset we try to encourage around here.\nIn short, if we request that our brains come up with alignment ideas that are fine to share with everybody—and this is what we're implicitly doing when we think of ourselves as \"researching publicly\"—then we're requesting that our brains cut off the massive portion of the search space that is only probably safe.\nIf our goal is to make research progress as quickly as possible, in hopes of having concepts coherent enough to allow rigorous safety engineering by the time AGI arrives, then it seems worth finding ways to allow our researchers to think without constraints, even when those ways are somewhat expensive.\n \nFocus seems unusually useful for this kind of work\nThere may be some additional speed-up effects from helping free up researchers' attention, though we don't consider this a major consideration on its own.\nHistorically, early-stage scientific work has often been done by people who were solitary or geographically isolated, perhaps because this makes it easier to slowly develop a new way to factor the phenomenon, instead of repeatedly translating ideas into the current language others are using. It's difficult to describe how much mental space and effort turns out to be taken up with thoughts of how your research will look to other people staring at you, until you try going into a closed room for an extended period of time with a promise to yourself that all the conversation within it really won't be shared at all anytime soon.\nOnce we realized this was going on, we realized that in retrospect, we may have been ignoring common practice, in a way. Many startup founders have reported finding stealth mode, and funding that isn't from VC outsiders, tremendously useful for focus. For this reason, we've also recently been encouraging researchers at MIRI to worry less about appealing to a wide audience when doing public-facing work. We want researchers to focus mainly on whatever research directions they find most compelling, make exposition and distillation a secondary priority, and not worry about optimizing ideas for persuasiveness or for being easier to defend.\n \nEarly deconfusion work just isn't that useful (yet)\nML researchers aren't running around using logical induction or functional decision theory. These theories don't have practical relevance to the researchers on the ground, and they're not supposed to; the point of these theories is just deconfusion.\nTo put it more precisely, the theories themselves aren't the interesting novelty; the novelty is that a few years ago, we couldn't write down any theory of how in principle to assign sane-seeming probabilities to mathematical facts, and today we can write down logical induction. In the journey from point A to point B, we became less confused. 
The logical induction paper is an artifact witnessing that deconfusion, and an artifact which granted its authors additional deconfusion as they went through the process of writing it; but the thing that excited me about logical induction was not any one particular algorithm or theorem in the paper, but rather the fact that we're a little bit less in-the-dark than we were about how a reasoner can reasonably assign probabilities to logical sentences. We're not fully out of the dark on this front, mind you, but we're a little less confused than we were before.14\nIf the rest of the world were talking about how confusing they find the AI alignment topics we're confused about, and were as concerned about their confusions as we are concerned about ours, then failing to share our research would feel a lot more costly to me. But as things stand, most people in the space look at us kind of funny when we say that we're excited about things like logical induction, and I repeatedly encounter deep misunderstandings when I talk to people who have read some of our papers and tried to infer our research motivations, from which I conclude that they weren't drawing a lot of benefit from my current ramblings anyway.\nAnd in a sense most of our current research is a form of rambling—in the same way, at best, that Faraday's journal was rambling. It's OK if most practical scientists avoid slogging through Faraday's journal and wait until Maxwell comes along and distills the thing down to three useful equations. And, if Faraday expects that physical theories eventually distill, he doesn't need to go around evangelizing his journal—he can just wait until it's been distilled, and then work to transmit some less-confused concepts.\nWe expect our understanding of alignment, which is currently far from complete, to eventually distill, and I, at least, am not very excited about attempting to push it on anyone until it's significantly more distilled. (Or, barring full distillation, until a project with a commitment to the common good, an adequate security mindset, and a large professed interest in deconfusion research comes knocking.)\nIn the interim, there are of course some researchers outside MIRI who care about the same problems we do, and who are also pursuing deconfusion. Our nondisclosed-by-default policy will negatively affect our ability to collaborate with these people on our other research directions, and this is a real cost and not worth dismissing. I don't have much more to say about this here beyond noting that if you're one of those people, you're very welcome to get in touch with us (and you may want to consider joining the team)!\n \nWe'll have a better picture of what to share or not share in the future\nIn the long run, if our research is going to be useful, our findings will need to go out into the world where they can impact how humanity builds AI systems. However, it doesn't follow from this need for eventual distribution (of some sort) that we might as well publish all of our research immediately. 
As discussed above, as best I can tell, our current research insights just aren't that practically useful, and sharing early-stage deconfusion research is time-intensive.\nOur nondisclosed-by-default policy also allows us to preserve options like:\n\ndeciding which research findings we think should be developed further, while thinking about differential technological development; and\ndeciding which group(s) to share each interesting finding with (e.g., the general public, other closed safety research groups, groups with strong commitment to security mindset and the common good, etc.).\n\nFuture versions of us will obviously be better placed to make these sorts of calls, though this needs to be weighed against considerations that push in the opposite direction—the later we decide what to release, the less time others have to build upon it, and the more likely it is to be found independently in the interim (thereby wasting time on duplicated efforts), and so on.\nNow that I've listed reasons in favor of our nondisclosed-by-default policy, I'll note some reasons against.\n \nConsiderations pulling against our nondisclosed-by-default policy\nThere are a host of pathways via which our work will be harder with this nondisclosed-by-default policy:\n \n\n(a) We will have a harder time attracting and evaluating new researchers; sharing less research means getting fewer chances to try out various research collaborations and notice which collaborations work well for both parties.\n \n\n(b) We lose some of the benefits of accelerating the progress of other researchers outside MIRI via sharing useful insights with them in real time as they are generated.\n \n\n(c) We will be less able to get useful scientific insights and feedback from visitors, remote scholars, and researchers elsewhere in the world, since we will be sharing less of our work with them.\n \n\n(d) We will have a harder time attracting funding and other indirect aid—with less of our work visible, it will be harder for prospective donors to know whether our work is worth supporting.\n \n\n(e) We will have to pay various costs associated with keeping research private, including social costs and logistical overhead.\n\n \nWe expect these costs to be substantial. We will be working hard to offset some of the losses from (a), as I'll discuss in the next section. For reasons discussed above, I'm not presently very worried about (b). The remaining costs will probably be paid in full.\nThese costs are why we didn't adopt this policy (for most of our research) years ago. With outreach feeling less like our comparative advantage than it did in the pre-Puerto-Rico days, and funding seeming like less of a bottleneck than it used to (though still something of a bottleneck), this approach now seems workable.\nWe've already found it helpful in practice to let researchers have insights first and sort out the safety or desirability of publishing later. On the whole, then, we expect this policy to cause a significant net speed-up to our research progress, while ensuring that we can responsibly investigate some of the most important technical questions on our radar.\n \n4. 
Joining the MIRI team\nI believe that MIRI is, and will be for at least the next several years, a focal point of one of those rare scientifically exciting points in history, where the conditions are just right for humanity to substantially deconfuse itself about an area of inquiry it's been pursuing for centuries—and one where the output is directly impactful in a way that is rare even among scientifically exciting places and times.\nWhat can we offer? On my view:\n \n\nWork that Eliezer, Benya, myself, and a number of researchers in AI safety view as having a significant chance of boosting humanity's survival odds.\n \n\nWork that, if it pans out, visibly has central relevance to the alignment problem—the kind of work that has a meaningful chance of shedding light on problems like \"is there a loophole-free way to upper-bound the amount of optimization occurring within an optimizer?\".\n \n\nProblems that, if your tastes match ours, feel closely related to fundamental questions about intelligence, agency, and the structure of reality; and the associated thrill of working on one of the great and wild frontiers of human knowledge, with large and important insights potentially close at hand.\n \n\nAn atmosphere in which people are taking their own and others' research progress seriously. For example, you can expect colleagues who come into work every day looking to actually make headway on the AI alignment problem, and looking to pull their thinking different kinds of sideways until progress occurs. I'm consistently impressed with MIRI staff's drive to get the job done—with their visible appreciation for the fact that their work really matters, and their enthusiasm for helping one another make forward strides.\n \n\nAs an increasing focus at MIRI, empirically grounded computer science work on the AI alignment problem, with clear feedback of the form \"did my code type-check?\" or \"do we have a proof?\".\n \n\nFinally, some good, old-fashioned fun—for a certain very specific brand of \"fun\" that includes the satisfaction that comes from making progress on important technical challenges, the enjoyment that comes from pursuing lines of research you find compelling without needing to worry about writing grant proposals or otherwise raising funds, and the thrill that follows when you finally manage to distill a nugget of truth from a thick cloud of confusion.\n\n \nWorking at MIRI also means working with other people who were drawn by the very same factors—people who seem to me to have an unusual degree of care and concern for human welfare and the welfare of sentient life as a whole, an unusual degree of creativity and persistence in working on major technical problems, an unusual degree of cognitive reflection and skill with perspective-taking, and an unusual level of efficacy and grit.\nMy own experience at MIRI has been that this is a group of people who really want to help Team Life get good outcomes from the large-scale events that are likely to dramatically shape our future; who can tackle big challenges head-on without appealing to false narratives about how likely a given approach is to succeed; and who are remarkably good at fluidly updating on new evidence, and at creating a really fun environment for collaboration.\n \nWho are we seeking?\nWe're seeking anyone who can cause our \"become less confused about AI alignment\" work to go faster.\nIn practice, this means: people who natively think in math or code, who take seriously the problem of becoming less confused about AI alignment 
(quickly!), and who are generally capable. In particular, we're looking for high-end Google programmer levels of capability; you don't need a 1-in-a-million test score or a halo of destiny. You also don't need a PhD, explicit ML background, or even prior research experience.\nEven if you're not pointed towards our research agenda, we intend to fund or help arrange funding for any deep, good, and truly new ideas in alignment. This might be as a hire, a fellowship grant, or whatever other arrangements may be needed.\n \nWhat to do if you think you might want to work here\nIf you'd like more information, there are several good options:\n \n\nChat with Buck Shlegeris, a MIRI computer scientist who helps out with our recruiting. In addition to answering any of your questions and running interviews, Buck can sometimes help skilled programmers take some time off to skill-build through our AI Safety Retraining Program.\n \n\nIf you already know someone else at MIRI and talking with them seems better, you might alternatively reach out to that person—especially Blake Borgeson (a new MIRI board member who helps us with technical recruiting) or Anna Salamon (a MIRI board member who is also the president of CFAR, and is helping run some MIRI recruiting events).\n \n\nCome to a 4.5-day AI Risk for Computer Scientists workshop, co-run by MIRI and CFAR. These workshops are open only to people who Buck arbitrarily deems \"probably above MIRI's technical hiring bar,\" though their scope is wider than simply hiring for MIRI—the basic idea is to get a bunch of highly capable computer scientists together to try to fathom AI risk (with a good bit of rationality content, and of trying to fathom the way we're failing to fathom AI risk, thrown in for good measure).\n \nThese are a great way to get a sense of MIRI's culture, and to pick up a number of thinking tools whether or not you are interested in working for MIRI. If you'd like to either apply to attend yourself or nominate a friend of yours, send us your info here.\n \n\nCome to next year's MIRI Summer Fellows program, or be a summer intern with us. This is a better option for mathy folks aiming at Agent Foundations than for computer sciencey folks aiming at our new research directions. This last summer we took 6 interns and 30 MIRI Summer Fellows (see Malo's Summer MIRI Updates post for more details). Also, note that \"summer internships\" need not occur during summer, if some other schedule is better for you. Contact Colm Ó Riain if you're interested.\n \n\nYou could just try applying for a job.\n\n \n \nSome final notes\nA quick note on \"inferential distance,\" or on what it sometimes takes to understand MIRI researchers' perspectives: To many, MIRI's take on things is really weird. Many people who bump into our writing somewhere find our basic outlook pointlessly weird/silly/wrong, and thus find us uncompelling forever. Even among those who do ultimately find MIRI compelling, many start off thinking it's weird/silly/wrong and then, after some months or years of MIRI's worldview slowly rubbing off on them, eventually find that our worldview makes a bunch of unexpected sense.\nIf you think that you may be in this latter category, and that such a change of viewpoint, should it occur, would be because MIRI's worldview is onto something and not because we all got tricked by false-but-compelling ideas… you might want to start exposing yourself to all this funny worldview stuff now, and see where it takes you. 
Good starting-points are Rationality: From AI to Zombies; Inadequate Equilibria; Harry Potter and the Methods of Rationality; the \"AI Risk for Computer Scientists\" workshops; ordinary CFAR workshops; or just hanging out with folks in or near MIRI.\nI suspect that I've failed to communicate some key things above, based on past failed attempts to communicate my perspective, and based on some readers of earlier drafts of this post missing key things I'd wanted to say. I've tried to clarify as many points as possible—hence this post's length!—but in the end, \"we're focusing on research and not exposition now\" holds for me too, and I need to get back to the work.15\n \nA note on the state of the field:  MIRI is one of the dedicated teams trying to solve technical problems in AI alignment, but we're not the only such team. There are currently three others: the Center for Human-Compatible AI at UC Berkeley, and the safety teams at OpenAI and at Google DeepMind. All three of these safety teams are highly capable, top-of-their-class research groups, and we recommend them too as potential places to join if you want to make a difference in this field.\nThere are also solid researchers based at many other institutions, like the Future of Humanity Institute, whose Governance of AI Program focuses on the important social/coordination problems associated with AGI development.\nTo learn more about AI alignment research at MIRI and other groups, I recommend the MIRI-produced Agent Foundations and Embedded Agency write-ups; Dario Amodei, Chris Olah, et al.'s Concrete Problems agenda; the AI Alignment Forum; and Paul Christiano and the DeepMind safety team's blogs.\n \nOn working here: Salaries here are more flexible than people usually suppose. I've had a number of conversations with folks who assumed that because we're a nonprofit, we wouldn't be able to pay them enough to maintain their desired standard of living, meet their financial goals, support their family well, or similar. This is false. If you bring the right skills, we're likely able to provide the compensation you need. We also place a high value on weekends and vacation time, on avoiding burnout, and in general on people here being happy and thriving.\nYou do need to be physically in Berkeley to work with us on the projects we think are most exciting, though we have pretty great relocation assistance and ops support for moving.\nDespite all of the great things about working at MIRI, I would consider working here a pretty terrible deal if all you wanted was a job. Reorienting to work on major global risks isn't likely to be the most hedonic or relaxing option available to most people.\nOn the other hand, if you like the idea of an epic calling with a group of people who somehow claim to take seriously a task that sounds more like it comes from a science fiction novel than from a Dilbert strip, while having a lot of scientific fun; or you just care about humanity's future, and want to help however you can… give us a call.\n \n\nThis post is an amalgam put together by a variety of MIRI staff. The byline saying \"Nate\" means that I (Nate) endorse the post, and that many of the concepts and themes come in large part from me, and I wrote a decent number of the words. However, I did not write all of the words, and the concepts and themes were built in collaboration with a bunch of other MIRI staff. (This is roughly what bylines have meant on the MIRI blog for a while now, and it's worth noting explicitly.) 
See our 2017 strategic update and fundraiser posts for more details.
In past fundraisers, we've said that with sufficient funding we would like to spin up alternative lines of attack on the alignment problem. Our new research directions can be seen as following this spirit, and indeed, at least one of our new research directions is heavily inspired by alternative approaches I was considering back in 2015. That said, unlike many of the ideas I had in mind when writing our 2015 fundraiser posts, our new work is quite contiguous with our Agent-Foundations-style research.
That is, the requisites for aligning AGI systems to perform limited tasks; not all of the requisites for aligning a full CEV-class autonomous AGI. Compare Paul Christiano's distinction between ambitious and narrow value learning (though note that Paul thinks narrow value learning is sufficient for strongly autonomous AGI).
This result is described more in a paper that will be out soon. Or, at least, eventually. I'm not putting a lot of time into writing papers these days, for reasons discussed below.
For more discussion of this concept, see "Personal Thoughts on Careers in AI Policy and Strategy" by Carrick Flynn.
Historical examples of deconfusion work that gave rise to a rich and healthy field include the distillation of Lagrangian and Hamiltonian mechanics from Newton's laws; Cauchy's overhaul of real analysis; the slow acceptance of the usefulness of complex numbers; and the development of formal foundations of mathematics.
I should emphasize that from my perspective, humanity never building AGI, never realizing our potential, and failing to make use of the cosmic endowment would be a tragedy comparable (on an astronomical scale) to AGI wiping us out. I say "hazardous", but we shouldn't lose sight of the upside of humanity getting the job done right.
My own feeling is that I and other senior staff at MIRI have never been particularly good at explaining what we're doing and why, so this inconvenience may not be a new thing. It's new, however, for us to not be making it a priority to attempt to explain where we're coming from.
In other words, many people are explicitly focusing only on outreach, and many others are selecting technical problems to work on with a stated goal of strengthening the field and drawing others into it.
This isn't meant to suggest that nobody else is taking a straight shot at the core problems. For example, OpenAI's Paul Christiano is a top-tier researcher who is doing exactly that. But we nonetheless want more of this on the present margin.
For example, perhaps the easiest path to unalignable AGI involves following descendants of today's gradient descent and deep learning techniques, and perhaps the same is true for alignable AGI.
In other words, retreats/rooms where it is common knowledge that all thoughts and ideas are not going to be shared, except perhaps after some lengthy and irritating bureaucratic process and with everyone's active support.
As an aside, perhaps my main discomfort with attempting to publish academic papers is that there appears to be no venue in AI where we can go to say, "Hey, check this out—we used to be confused about X, and now we can say Y, which means we're a little bit less confused!" I think there are a bunch of reasons behind this, not least the fact that the nature of confusion is such that Y usually sounds obviously true once stated, and so it's particularly difficult to make such a result sound like an impressive practical result.\nA side effect of this, unfortunately, is that all MIRI papers that I've ever written with the goal of academic publishing do a pretty bad job of saying what I was previously confused about, and how the "result" is indicative of me becoming less confused—for which I hereby apologize.
If you have more questions, I encourage you to shoot us an email.
The post 2018 Update: Our New Research Directions appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "2018 Update: Our New Research Directions", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=14", "id": "5cc2443cda3aeddd5006ea838f4f8562"} {"text": "Embedded Curiosities\n\n\nThis is the conclusion of the Embedded Agency series. Previous posts:\n \nEmbedded Agents  —  Decision Theory  —  Embedded World-Models\nRobust Delegation  —  Subsystem Alignment\n \n\n \nA final word on curiosity, and intellectual puzzles:\nI described an embedded agent, Emmy, and said that I don't understand how she evaluates her options, models the world, models herself, or decomposes and solves problems.\nIn the past, when researchers have talked about motivations for working on problems like these, they've generally focused on the motivation from AI risk. AI researchers want to build machines that can solve problems in the general-purpose fashion of a human, and dualism is not a realistic framework for thinking about such systems. In particular, it's an approximation that's especially prone to breaking down as AI systems get smarter. When people figure out how to build general AI systems, we want those researchers to be in a better position to understand their systems, analyze their internal properties, and be confident in their future behavior.\nThis is the motivation for most researchers today who are working on things like updateless decision theory and subsystem alignment. We care about basic conceptual puzzles which we think we need to figure out in order to achieve confidence in future AI systems, and not have to rely quite so much on brute-force search or trial and error.\nBut the arguments for why we may or may not need particular conceptual insights in AI are pretty long. I haven't tried to wade into the details of that debate here. 
Instead, I've been discussing a particular set of research directions as an intellectual puzzle, and not as an instrumental strategy.\nOne downside of discussing these problems as instrumental strategies is that it can lead to some misunderstandings about why we think this kind of work is so important. With the \"instrumental strategies\" lens, it's tempting to draw a direct line from a given research problem to a given safety concern. But it's not that I'm imagining real-world embedded systems being \"too Bayesian\" and this somehow causing problems, if we don't figure out what's wrong with current models of rational agency. It's certainly not that I'm imagining future AI systems being written in second-order logic! In most cases, I'm not trying at all to draw direct lines between research problems and specific AI failure modes.\nWhat I'm instead thinking about is this: We sure do seem to be working with the wrong basic concepts today when we try to think about what agency is, as seen by the fact that these concepts don't transfer well to the more realistic embedded framework.\nIf AI developers in the future are still working with these confused and incomplete basic concepts as they try to actually build powerful real-world optimizers, that seems like a bad position to be in. And it seems like the research community is unlikely to figure most of this out by default in the course of just trying to develop more capable systems. Evolution certainly figured out how to build human brains without \"understanding\" any of this, via brute-force search.\nEmbedded agency is my way of trying to point at what I think is a very important and central place where I feel confused, and where I think future researchers risk running into confusions too.\nThere's also a lot of excellent AI alignment research that's being done with an eye toward more direct applications; but I think of that safety research as having a different type signature than the puzzles I've talked about here.\n\nIntellectual curiosity isn't the ultimate reason we privilege these research directions. But there are some practical advantages to orienting toward research questions from a place of curiosity at times, as opposed to only applying the \"practical impact\" lens to how we think about the world.\nWhen we apply the curiosity lens to the world, we orient toward the sources of confusion preventing us from seeing clearly; the blank spots in our map, the flaws in our lens. It encourages re-checking assumptions and attending to blind spots, which is helpful as a psychological counterpoint to our \"instrumental strategy\" lens—the latter being more vulnerable to the urge to lean on whatever shaky premises we have on hand so we can get to more solidity and closure in our early thinking.\nEmbedded agency is an organizing theme behind most, if not all, of our big curiosities. It seems like a central mystery underlying many concrete difficulties.\n \n\nThe post Embedded Curiosities appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Embedded Curiosities", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=14", "id": "bf747582a72f997e07647377806d2dd1"} {"text": "Subsystem Alignment\n\n\n\n\n \nYou want to figure something out, but you don't know how to do that yet.\nYou have to somehow break up the task into sub-computations. 
There is no atomic act of \"thinking\"; intelligence must be built up of non-intelligent parts.\nThe agent being made of parts is part of what made counterfactuals hard, since the agent may have to reason about impossible configurations of those parts.\nBeing made of parts is what makes self-reasoning and self-modification even possible.\nWhat we're primarily going to discuss in this section, though, is another problem: when the agent is made of parts, there could be adversaries not just in the external environment, but inside the agent as well.\nThis cluster of problems is Subsystem Alignment: ensuring that subsystems are not working at cross purposes; avoiding subprocesses optimizing for unintended goals.\n \n\n\nbenign induction\nbenign optimization\ntransparency\nmesa-optimizers\n\n\n \n\n\nHere's a straw agent design:\n \n\n \nThe epistemic subsystem just wants accurate beliefs. The instrumental subsystem uses those beliefs to track how well it is doing. If the instrumental subsystem gets too capable relative to the epistemic subsystem, it may decide to try to fool the epistemic subsystem, as depicted.\nIf the epistemic subsystem gets too strong, that could also possibly yield bad outcomes.\nThis agent design treats the system's epistemic and instrumental subsystems as discrete agents with goals of their own, which is not particularly realistic. However, we saw in the section on wireheading that the problem of subsystems working at cross purposes is hard to avoid. And this is a harder problem if we didn't intentionally build the relevant subsystems.\n\nOne reason to avoid booting up sub-agents who want different things is that we want robustness to relative scale.\nAn approach is robust to scale if it still works, or fails gracefully, as you scale capabilities. There are three types: robustness to scaling up; robustness to scaling down; and robustness to relative scale.\n \n\nRobustness to scaling up means that your system doesn't stop behaving well if it gets better at optimizing. One way to check this is to think about what would happen if the function the AI optimizes were actually maximized. Think Goodhart's Law.\n \n\nRobustness to scaling down means that your system still works if made less powerful. Of course, it may stop being useful; but it should fail safely and without unnecessary costs.\n \nYour system might work if it can exactly maximize some function, but is it safe if you approximate? For example, maybe a system is safe if it can learn human values very precisely, but approximation makes it increasingly misaligned.\n \n\nRobustness to relative scale means that your design does not rely on the agent's subsystems being similarly powerful. For example, GAN (Generative Adversarial Network) training can fail if one sub-network gets too strong, because there's no longer any training signal.\n\n\nLack of robustness to scale isn't necessarily something which kills a proposal, but it is something to be aware of; lacking robustness to scale, you need strong reason to think you're at the right scale.\nRobustness to relative scale is particularly important for subsystem alignment. An agent with intelligent sub-parts should not rely on being able to outsmart them, unless we have a strong account of why this is always possible.\n\nThe big-picture moral: aim to have a unified system that doesn't work at cross purposes to itself.\nWhy would anyone make an agent with parts fighting against one another? 
There are three obvious reasons: subgoals, pointers, and search.\nSplitting up a task into subgoals may be the only way to efficiently find a solution. However, a subgoal computation shouldn't completely forget the big picture!\nAn agent designed to build houses should not boot up a sub-agent who cares only about building stairs.\nOne intuitive desideratum is that although subsystems need to have their own goals in order to decompose problems into parts, the subgoals need to \"point back\" robustly to the main goal.\nA house-building agent might spin up a subsystem that cares only about stairs, but only cares about stairs in the context of houses.\nHowever, you need to do this in a way that doesn't just amount to your house-building system having a second house-building system inside its head. This brings me to the next item:\n\nPointers: It may be difficult for subsystems to carry the whole-system goal around with them, since they need to be reducing the problem. However, this kind of indirection seems to encourage situations in which different subsystems' incentives are misaligned.\nAs we saw in the example of the epistemic and instrumental subsystems, as soon as we start optimizing some sort of expectation, rather than directly getting feedback about what we're doing on the metric that's actually important, we may create perverse incentives—that's Goodhart's Law.\nHow do we ask a subsystem to \"do X\" as opposed to \"convince the wider system that I'm doing X\", without passing along the entire overarching goal-system?\nThis is similar to the way we wanted successor agents to robustly point at values, since it is too hard to write values down. However, in this case, learning the values of the larger agent wouldn't make any sense either; subsystems and subgoals need to be smaller.\n\nIt might not be that difficult to solve subsystem alignment for subsystems which humans entirely design, or subgoals which an AI explicitly spins up. If you know how to avoid misalignment by design and robustly delegate your goals, both problems seem solvable.\nHowever, it doesn't seem possible to design all subsystems so explicitly. At some point, in solving a problem, you've split it up as much as you know how to and must rely on some trial and error.\nThis brings us to the third reason subsystems might be optimizing different things, search: solving a problem by looking through a rich space of possibilities, a space which may itself contain misaligned subsystems.\n \n\n \nML researchers are quite familiar with the phenomenon: it's easier to write a program which finds a high-performance machine translation system for you than to directly write one yourself.\nIn the long run, this process can go one step further. For a rich enough problem and an impressive enough search process, the solutions found via search might themselves be intelligently optimizing something.\nThis might happen by accident, or be purposefully engineered as a strategy for solving difficult problems. Either way, it stands a good chance of exacerbating Goodhart-type problems—you now effectively have two chances for misalignment, where you previously had one.\nThis problem is described in Hubinger, et al.'s \"Risks from Learned Optimization in Advanced Machine Learning Systems\".\nLet's call the original search process the base optimizer, and the search process found via search a mesa-optimizer.\n\"Mesa\" is the opposite of \"meta\". 
Whereas a \"meta-optimizer\" is an optimizer designed to produce a new optimizer, a \"mesa-optimizer\" is any optimizer generated by the original optimizer—whether or not the programmers wanted their base optimizer to be searching for new optimizers.\n\"Optimization\" and \"search\" are ambiguous terms. I'll think of them as any algorithm which can be naturally interpreted as doing significant computational work to \"find\" an object that scores highly on some objective function.\nThe objective function of the base optimizer is not necessarily the same as that of the mesa-optimizer. If the base optimizer wants to make pizza, the new optimizer may enjoy kneading dough, chopping ingredients, et cetera.\nThe new optimizer's objective function must be helpful for the base objective, at least in the examples the base optimizer is checking. Otherwise, the mesa-optimizer would not have been selected.\nHowever, the mesa-optimizer must reduce the problem somehow; there is no point to it running the exact same search all over again. So it seems like its objectives will tend to be like good heuristics; easier to optimize, but different from the base objective in general.\nWhy might a difference between base objectives and mesa-objectives be concerning, if the new optimizer is scoring highly on the base objective anyway? It's about the interplay with what's really wanted. Even if we get value specification exactly right, there will always be some distributional shift between the training set and deployment. (See Amodei, et al.'s \"Concrete Problems in AI Safety\".)\nDistributional shifts which would be small in ordinary cases may make a big difference to a capable mesa-optimizer, which may observe the slight difference and figure out how to capitalize on it for its own objective.\nActually, to even use the term \"distributional shift\" seems wrong in the context of embedded agency. The world is not i.i.d. The analog of \"no distributional shift\" would be to have an exact model of the whole future relevant to what you want to optimize, and the ability to run it over and over during training. So we need to deal with massive \"distributional shift\".\nWe may also want to optimize for things that aren't exactly what we want. The obvious way to avoid agents that pursue subgoals at the cost of the overall goal is to have the subsystems not be agentic. Just search over a bunch of ways to make stairs, don't make something that cares about stairs.\nThe problem is then that powerful mesa-optimizers are optimizing something the base system doesn't care about, and that the mesa-optimizers will have a convergent incentive to be agentic.\nAdditionally, there's the possibility that the mesa-optimizer becomes aware of the base optimizer, in which case it might start explicitly trying to do well on the base objective function purely in order to be kept around, while looking for any signs that it has left training and can stop pretending. This creates a version of Nick Bostrom's \"treacherous turn\".\nThis is the same story we saw in adversarial Goodhart: there is something agentic in the search space, which responds to our choice of proxy in a way which makes our proxy a bad one.\nYou might try to avoid treacherous turns by repeatedly simulating \"end of training, time for deployment\" during training, in a kind of nested-dream setup. 
The goal is to test enough layers of simulation that you can conclude there won't be a treacherous turn in the real deployment case.\n \n\nUnfortunately, convergence for this kind of learning is going to be poor. Ordinarily in machine learning, good performance means good average-case performance. But a treacherous turn is an \"error\" which can be carefully placed to do the most damage. We want to ensure this doesn't happen.\nThe problem is, in part, that some outputs are much more important than others. Deployment is more important than training, and certain critical or vulnerable moments during deployment will be especially important. We want to be particularly sure to get important things right, rather than just getting low average loss.\nBut we can't solve this by telling the system what's important. Indeed, it seems we hope it can't figure that out—we are banking on being able to generalize from performance on less-important cases to more-important cases. This is why research into ML techniques which avoid rare catastrophes (or \"traps\") is relevant to the problem of making sure mesa-optimizers are aligned with base optimizers.\nIt is difficult to trust arbitrary code—which is what models from rich model classes are—based only on empirical testing. Consider a highly simplified problem: we want to find a program which only ever outputs \\(1\\). \\(0\\) is a catastrophic failure.\nIf we could examine the code ourselves, this problem would be easy. But the output of machine learning is often difficult to analyze; so let's suppose that we can't understand code at all.\nNow, in some sense, we can trust simpler functions more. A short piece of code is less likely to contain a hard-coded exception. Let's quantify that.\nConsider the set of all programs of length \\(L\\). Some programs \\(p\\) will print \\(1\\) for a long time, but then print \\(0\\). We're trying to avoid that.\nCall the time-to-first-zero \\(W_{p}\\).  (\\(W_{p} = \\infty\\) if the program \\(p\\) is trustworthy, i.e., if it never outputs \\(0\\).)\nThe highest finite  \\(W_{p}\\)  out of all length-\\(L\\) programs is a form of the Busy Beaver function, so I will refer to it as \\(BB(L)\\). If we wanted to be completely sure that a random program of length \\(L\\) were trustworthy, we would need to observe \\(BB(L)\\) ones from that program.\nNow, a fact about the Busy Beaver function is that \\(BB(n)\\) grows faster than any computable function. So this kind of empirical trust-building takes uncomputably long to find the truth, in the worst case.\nWhat about the average case?\nIf we suppose all the other length-\\(L\\) programs are easy cases, there are exponentially many length-\\(L\\) programs, so the average is \\(BB(L)\\ / \\ \\mathrm{exp}(L)\\). But exponentials are computable. So \\(BB(L)\\ / \\ \\mathrm{exp}(L)\\) still grows faster than any computable function.\nSo while using short programs gives us some confidence in theory, the difficulty of forming generalized conclusions about behavior grows extremely quickly as a function of length.\nIf length restrictions aren't so practical, perhaps restricting computational complexity can help us? Intuitively, a mesa-optimizer needs time to think in order to successfully execute a treacherous turn. So a program which arrives at conclusions more quickly might be more trustworthy.\nHowever, restricting complexity class unfortunately doesn't get around Busy-Beaver-type behavior. 
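Here's a toy version of that empirical-trust problem, with "programs" replaced by simple counters of my own construction: each candidate prints 1 until some built-in defection time (possibly never), we accept any candidate that prints only 1s during a testing budget, and a sleeper that defects one step after the budget ends passes every test.

```python
# Toy model of trusting a black box by observation alone.
# A "program" here is just a defection time: it prints 1 until that
# step, then prints 0 forever. A trustworthy program never defects.

def make_program(defect_at=None):
    def run(step):
        # Output of the program at a given time step.
        if defect_at is not None and step >= defect_at:
            return 0
        return 1
    return run

def passes_testing(program, budget):
    # Empirical trust: watch the program for `budget` steps.
    return all(program(step) == 1 for step in range(budget))

budget = 1_000
trustworthy = make_program(defect_at=None)
sleeper = make_program(defect_at=budget + 1)  # defects just after testing ends

print("trustworthy passes testing:", passes_testing(trustworthy, budget))
print("sleeper passes testing:    ", passes_testing(sleeper, budget))
print("sleeper output at deployment step", budget + 1, "is", sleeper(budget + 1))
```

With arbitrary code in place of these simple counters, the worst-case defection time becomes the Busy-Beaver-style quantity above, so no pre-deployment testing budget you can compute is long enough to outlast it in the worst case.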
Strategies that wait a long time before outputting \\(0\\) can be slowed down even further with only slightly longer program length \\(L\\).\n\nIf all of these problems seem too hypothetical, consider the evolution of life on Earth. Evolution can be thought of as a reproductive fitness maximizer.\n(Evolution can actually be thought of as an optimizer for many things, or as no optimizer at all, but that doesn't matter. The point is that if an agent wanted to maximize reproductive fitness, it might use a system that looked like evolution.)\nIntelligent organisms are mesa-optimizers of evolution. Although the drives of intelligent organisms are certainly correlated with reproductive fitness, organisms want all sorts of things. There are even mesa-optimizers who have come to understand evolution, and even to manipulate it at times. Powerful and misaligned mesa-optimizers appear to be a real possibility, then, at least with enough processing power.\nProblems seem to arise because you try to solve a problem which you don't yet know how to solve by searching over a large space and hoping \"someone\" can solve it.\nIf the source of the issue is the solution of problems by massive search, perhaps we should look for different ways to solve problems. Perhaps we should solve problems by figuring things out. But how do you solve problems which you don't yet know how to solve other than by trying things?\n\nLet's take a step back.\nEmbedded world-models is about how to think at all, as an embedded agent; decision theory is about how to act. Robust delegation is about building trustworthy successors and helpers. Subsystem alignment is about building one agent out of trustworthy parts.\n \n\n \nThe problem is that:\n\nWe don't know how to think about environments when we're smaller.\nTo the extent we can do that, we don't know how to think about consequences of actions in those environments.\nEven when we can do that, we don't know how to think about what we want.\nEven when we have none of these problems, we don't know how to reliably output actions which get us what we want!\n\n\nThis is the penultimate post in Scott Garrabrant and Abram Demski's Embedded Agency sequence. Conclusion: embedded curiosities.\n\nThe post Subsystem Alignment appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Subsystem Alignment", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=14", "id": "5eeb67681fc8521ff3c4637a4be1f98c"} {"text": "Robust Delegation\n\n\n\n\nBecause the world is big, the agent as it is may be inadequate to accomplish its goals, including in its ability to think.\nBecause the agent is made of parts, it can improve itself and become more capable.\nImprovements can take many forms: The agent can make tools, the agent can make successor agents, or the agent can just learn and grow over time. However, the successors or tools need to be more capable for this to be worthwhile. \nThis gives rise to a special type of principal/agent problem:\nYou have an initial agent, and a successor agent. The initial agent gets to decide exactly what the successor agent looks like. The successor agent, however, is much more intelligent and powerful than the initial agent. 
We want to know how to have the successor agent robustly optimize the initial agent's goals.\nHere are three examples of forms this principal/agent problem can take:\n \n\n \nIn the AI alignment problem, a human is trying to build an AI system which can be trusted to help with the human's goals.\nIn the tiling agents problem, an agent is trying to make sure it can trust its future selves to help with its own goals.\nOr we can consider a harder version of the tiling problem—stable self-improvement—where an AI system has to build a successor which is more intelligent than itself, while still being trustworthy and helpful.\nFor a human analogy which involves no AI, you can think about the problem of succession in royalty, or more generally the problem of setting up organizations to achieve desired goals without losing sight of their purpose over time.\nThe difficulty seems to be twofold:\nFirst, a human or AI agent may not fully understand itself and its own goals. If an agent can't write out what it wants in exact detail, that makes it hard for it to guarantee that its successor will robustly help with the goal.\nSecond, the idea behind delegating work is that you not have to do all the work yourself. You want the successor to be able to act with some degree of autonomy, including learning new things that you don't know, and wielding new skills and capabilities.\nIn the limit, a really good formal account of robust delegation should be able to handle arbitrarily capable successors without throwing up any errors—like a human or AI building an unbelievably smart AI, or like an agent that just keeps learning and growing for so many years that it ends up much smarter than its past self.\nThe problem is not (just) that the successor agent might be malicious. The problem is that we don't even know what it means not to be.\nThis problem seems hard from both points of view.\n\n \n\n \n\nThe initial agent needs to figure out how reliable and trustworthy something more powerful than it is, which seems very hard. But the successor agent has to figure out what to do in situations that the initial agent can't even understand, and try to respect the goals of something that the successor can see is inconsistent, which also seems very hard.\nAt first, this may look like a less fundamental problem than \"make decisions\" or \"have models\". But the view on which there are multiple forms of the \"build a successor\" problem is itself a dualistic view.\nTo an embedded agent, the future self is not privileged; it is just another part of the environment. There isn't a deep difference between building a successor that shares your goals, and just making sure your own goals stay the same over time.\nSo, although I talk about \"initial\" and \"successor\" agents, remember that this isn't just about the narrow problem humans currently face of aiming a successor. This is about the fundamental problem of being an agent that persists and learns over time.\nWe call this cluster of problems Robust Delegation. Examples include:\n \n\n\nVingean reflection\nthe tiling problem\naverting Goodhart's law\nvalue loading\ncorrigibility\ninformed oversight\n\n\n \n\n\nImagine you are playing the CIRL game with a toddler.\nCIRL means Cooperative Inverse Reinforcement Learning. The idea behind CIRL is to define what it means for a robot to collaborate with a human. 
The robot tries to pick helpful actions, while simultaneously trying to figure out what the human wants.\n \n\nA lot of current work on robust delegation comes from the goal of aligning AI systems with what humans want. So usually, we think about this from the point of view of the human.\nBut now consider the problem faced by a smart robot, where they're trying to help someone who is very confused about the universe. Imagine trying to help a toddler optimize their goals.\n\nFrom your standpoint, the toddler may be too irrational to be seen as optimizing anything.\nThe toddler may have an ontology in which it is optimizing something, but you can see that ontology doesn't make sense.\nMaybe you notice that if you set up questions in the right way, you can make the toddler seem to want almost anything.\n\nPart of the problem is that the \"helping\" agent has to be bigger in some sense in order to be more capable; but this seems to imply that the \"helped\" agent can't be a very good supervisor for the \"helper\".\n\n\n\nFor example, updateless decision theory eliminates dynamic inconsistencies in decision theory by, rather than maximizing expected utility of your action given what you know, maximizing expected utility of reactions to observations, from a state of ignorance.\nAppealing as this may be as a way to achieve reflective consistency, it creates a strange situation in terms of computational complexity: If actions are type \\(A\\), and observations are type \\(O\\), reactions to observations are type  \\(O \to A\\)—a much larger space to optimize over than \\(A\\) alone. And we're expecting our smaller self to be able to do that!\nThis seems bad.\nOne way to more crisply state the problem is: We should be able to trust that our future self is applying its intelligence to the pursuit of our goals without being able to predict precisely what our future self will do. This criterion is called Vingean reflection.\nFor example, you might plan your driving route before visiting a new city, but you do not plan your steps. You plan to some level of detail, and trust that your future self can figure out the rest.\nVingean reflection is difficult to examine via classical Bayesian decision theory because Bayesian decision theory assumes logical omniscience. Given logical omniscience, the assumption \"the agent knows its future actions are rational\" is synonymous with the assumption \"the agent knows its future self will act according to one particular optimal policy which the agent can predict in advance\".\nWe have some limited models of Vingean reflection (see \"Tiling Agents for Self-Modifying AI, and the Löbian Obstacle\" by Yudkowsky and Herreshoff). A successful approach must walk the narrow line between two problems:\n\nThe Löbian Obstacle:   Agents who trust their future self because they trust the output of their own reasoning are inconsistent.\nThe Procrastination Paradox:   Agents who trust their future selves without reason tend to be consistent but unsound and untrustworthy, and will put off tasks forever because they can do them later.\n\nThe Vingean reflection results so far apply only to limited sorts of decision procedures, such as satisficers aiming for a threshold of acceptability. 
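For concreteness, here's a minimal sketch of the kind of limited decision procedure those results cover: a satisficer that takes the first option it can verify clears a threshold, next to a maximizer. The options, utilities, and the lookup table standing in for "verification" are all invented for illustration; in the tiling setting, verification means finding a proof.

```python
# Toy contrast between a satisficer and a maximizer.
# `verified_utility` stands in for whatever the agent can establish
# about an option (in the tiling literature this would be a proof).

verified_utility = {"do_nothing": 0.0, "option_a": 0.6, "option_b": 0.9}

def satisficer(options, threshold):
    # Take the first option known to be acceptable; otherwise do nothing.
    for option in options:
        if verified_utility[option] >= threshold:
            return option
    return "do_nothing"

def maximizer(options):
    # Take the option with the highest verified utility.
    return max(options, key=lambda option: verified_utility[option])

options = ["option_a", "option_b"]
print("satisficer (threshold 0.5):", satisficer(options, 0.5))  # option_a
print("maximizer:                 ", maximizer(options))        # option_b
```

The appeal for tiling is that, roughly, a satisficer only needs to establish that its successor will also clear the threshold, which is a much weaker property to verify than "will pick the best available action".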
So there is plenty of room for improvement, getting tiling results for more useful decision procedures and under weaker assumptions.\nHowever, there is more to the robust delegation problem than just tiling and Vingean reflection.\nWhen you construct another agent, rather than delegating to your future self, you more directly face a problem of value loading.\nThe main problems here:\n\nWe don't know what we want.\nOptimization amplifies slight differences between what we say we want and what we really want.\n\nThe misspecification-amplifying effect is known as Goodhart's Law, named for Charles Goodhart's observation: \"Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.\"\nWhen we specify a target for optimization, it is reasonable to expect it to be correlated with what we want—highly correlated, in some cases. Unfortunately, however, this does not mean that optimizing it will get us closer to what we want—especially at high levels of optimization.\n\nThere are (at least) four types of Goodhart: regressional, extremal, causal, and adversarial.\n \n\n \nRegressional Goodhart happens when there is a less than perfect correlation between the proxy and the goal. It is more commonly known as the optimizer's curse, and it is related to regression to the mean.\nAn example of regressional Goodhart is that you might draft players for a basketball team based on height alone. This isn't a perfect heuristic, but there is a correlation between height and basketball ability, which you can make use of in making your choices.\nIt turns out that, in a certain sense, you will be predictably disappointed if you expect the general trend to hold up as strongly for your selected team.\n\nStated in statistical terms: an unbiased estimate of \\(y\\) given \\(x\\) is not an unbiased estimate of \\(y\\) when we select for the best \\(x\\). In that sense, we can expect to be disappointed when we use \\(x\\) as a proxy for \\(y\\) for optimization purposes.\n\n\n\n(The graphs in this section are hand-drawn to help illustrate the relevant concepts.)\nUsing a Bayes estimate instead of an unbiased estimate, we can eliminate this sort of predictable disappointment. The Bayes estimate accounts for the noise in \\(x\\), bending toward typical \\(y\\) values.\n \n\nThis doesn't necessarily allow us to get a better \\(y\\) value, since we still only have the information content of \\(x\\) to work with. However, it sometimes may. If \\(y\\) is normally distributed with variance \\(1\\), and \\(x\\) is \\(y \\pm 10\\) with even odds of \\(+\\) or \\(-\\), a Bayes estimate will give better optimization results by almost entirely removing the noise.\n\nRegressional Goodhart seems like the easiest form of Goodhart to beat: just use Bayes!\nHowever, there are two big problems with this solution:\n\nBayesian estimators are very often intractable in cases of interest.\nIt only makes sense to trust the Bayes estimate under a realizability assumption.\n\nA case where both of these problems become critical is computational learning theory.\n\nIt often isn't computationally feasible to calculate the Bayesian expected generalization error of a hypothesis. 
And even if you could, you would still need to wonder whether your chosen prior reflected the world well enough.\n \n\n \nIn extremal Goodhart, optimization pushes you outside the range where the correlation exists, into portions of the distribution which behave very differently.\nThis is especially scary because it tends to involves optimizers behaving in sharply different ways in different contexts, often with little or no warning. You might not be able to observe the proxy breaking down at all when you have weak optimization, but once the optimization becomes strong enough, you can enter a very different domain.\nThe difference between extremal Goodhart and regressional Goodhart is related to the classical interpolation/extrapolation distinction.\n \n\nBecause extremal Goodhart involves a sharp change in behavior as the system is scaled up, it's harder to anticipate than regressional Goodhart.\n \n\n \nAs in the regressional case, a Bayesian solution addresses this concern in principle, if you trust a probability distribution to reflect the possible risks sufficiently well. However, the realizability concern seems even more prominent here.\nCan a prior be trusted to anticipate problems with proposals, when those proposals have been highly optimized to look good to that specific prior? Certainly a human's judgment couldn't be trusted under such conditions—an observation which suggests that this problem will remain even if a system's judgments about values perfectly reflect a human's.\nWe might say that the problem is this: \"typical\" outputs avoid extremal Goodhart, but \"optimizing too hard\" takes you out of the realm of the typical.\nBut how can we formalize \"optimizing too hard\" in decision-theoretic terms?\nQuantilization offers a formalization of \"optimize this some, but don't optimize too much\".\nImagine a proxy \\(V(x)\\) as a \"corrupted\" version of the function we really want, \\(U(x)\\). There might be different regions where the corruption is better or worse.\nSuppose that we can additionally specify a \"trusted\" probability distribution \\(P(x)\\), for which we are confident that the average error is below some threshold \\(c\\).\n\nBy stipulating \\(P\\) and \\(c\\), we give information about where to find low-error points, without needing to have any estimates of \\(U\\) or of the actual error at any one point.\n \n\nWhen we select actions from \\(P\\) at random, we can be sure regardless that there's a low probability of high error.\n\nSo, how do we use this to optimize? A quantilizer selects from \\(P\\), but discarding all but the top fraction \\(f\\); for example, the top 1%. In this visualization, I've judiciously chosen a fraction that still has most of the probability concentrated on the \"typical\" options, rather than on outliers:\n\nBy quantilizing, we can guarantee that if we overestimate how good something is, we're overestimating by at most \\(\frac{c}{f}\\) in expectation. This is because in the worst case, all of the overestimation was of the \\(f\\) best options.\n\nWe can therefore choose an acceptable risk level, \\(r = \frac{c}{f}\\), and set the parameter \\(f\\) as \\(\frac{c}{r}\\).\nQuantilization is in some ways very appealing, since it allows us to specify safe classes of actions without trusting every individual action in the class—or without trusting any individual action in the class.\nIf you have a sufficiently large heap of apples, and there's only one rotten apple in the heap, choosing randomly is still very likely safe. 
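Here's a minimal sketch of a quantilizer. Everything concrete in it is invented for illustration: a proxy V that just says "higher is better", a true utility U that is catastrophically different from V on a tiny sliver of extreme actions, a trusted base distribution P (uniform here), and an agent that samples from the top fraction f of P rather than taking the argmax of V.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: actions are points in [0, 1].
def proxy(x):
    # V: the objective we can actually evaluate.
    return x

def true_utility(x):
    # U: what we really want. The proxy is badly corrupted on a tiny sliver
    # of extreme actions, where "doing even better" is actually catastrophic.
    return np.where(x > 0.999, -10.0, x)

base = rng.uniform(0, 1, size=200_000)   # samples from the trusted distribution P

def quantilizer_sample(samples, f, n_draws):
    # Keep the top fraction f of P as ranked by the proxy, then sample uniformly.
    cutoff = np.quantile(proxy(samples), 1.0 - f)
    top = samples[proxy(samples) >= cutoff]
    return rng.choice(top, size=n_draws)

argmax_action = base[np.argmax(proxy(base))]
quantile_actions = quantilizer_sample(base, f=0.05, n_draws=5_000)

print(f"argmax of proxy:    true utility = {true_utility(argmax_action):.2f}")
print(f"5% quantilizer:     mean true utility = {true_utility(quantile_actions).mean():.2f}")
print(f"random draw from P: mean true utility = {true_utility(base).mean():.2f}")
```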
By \"optimizing less hard\" and picking a random good-enough action, we make the really extreme options low-probability. In contrast, if we had optimized as hard as possible, we might have ended up selecting from only bad apples.\nHowever, this approach also leaves a lot to be desired. Where do \"trusted\" distributions come from? How do you estimate the expected error \\(c\\), or select the acceptable risk level \\(r\\)? Quantilization is a risky approach because \\(r\\) gives you a knob to turn that will seemingly improve performance, while increasing risk, until (possibly sudden) failure.\nAdditionally, quantilization doesn't seem likely to tile. That is, a quantilizing agent has no special reason to preserve the quantilization algorithm when it makes improvements to itself or builds new agents.\nSo there seems to be room for improvement in how we handle extremal Goodhart.\n \n\n \nAnother way optimization can go wrong is when the act of selecting for a proxy breaks the connection to what we care about. Causal Goodhart happens when you observe a correlation between proxy and goal, but when you intervene to increase the proxy, you fail to increase the goal because the observed correlation was not causal in the right way.\nAn example of causal Goodhart is that you might try to make it rain by carrying an umbrella around. The only way to avoid this sort of mistake is to get counterfactuals right.\nThis might seem like punting to decision theory, but the connection here enriches robust delegation and decision theory alike.\nCounterfactuals have to address concerns of trust due to tiling concerns—the need for decision-makers to reason about their own future decisions. At the same time, trust has to address counterfactual concerns because of causal Goodhart.\nOnce again, one of the big challenges here is realizability. As we noted in our discussion of embedded world-models, even if you have the right theory of how counterfactuals work in general, Bayesian learning doesn't provide much of a guarantee that you'll learn to select actions well, unless we assume realizability.\n \n\n \nFinally, there is adversarial Goodhart, in which agents actively make our proxy worse by intelligently manipulating it.\nThis category is what people most often have in mind when they interpret Goodhart's remark. And at first glance, it may not seem as relevant to our concerns here. We want to understand in formal terms how agents can trust their future selves, or trust helpers they built from scratch. What does that have to do with adversaries?\nThe short answer is: when searching in a large space which is sufficiently rich, there are bound to be some elements of that space which implement adversarial strategies. Understanding optimization in general requires us to understand how sufficiently smart optimizers can avoid adversarial Goodhart. (We'll come back to this point in our discussion of subsystem alignment.)\nThe adversarial variant of Goodhart's law is even harder to observe at low levels of optimization, both because the adversaries won't want to start manipulating until after test time is over, and because adversaries that come from the system's own optimization won't show up until the optimization is powerful enough.\nThese four forms of Goodhart's law work in very different ways—and roughly speaking, they tend to start appearing at successively higher levels of optimization power, beginning with regressional Goodhart and proceeding to causal, then extremal, then adversarial. 
So be careful not to think you've conquered Goodhart's law because you've solved some of them.\n\nBesides anti-Goodhart measures, it would obviously help to be able to specify what we want precisely. Remember that none of these problems would come up if a system were optimizing what we wanted directly, rather than optimizing a proxy.\nUnfortunately, this is hard. So can the AI system we're building help us with this?\nMore generally, can a successor agent help its predecessor solve this? Maybe it can use its intellectual advantages to figure out what we want?\nAIXI learns what to do through a reward signal which it gets from the environment. We can imagine humans have a button which they press when AIXI does something they like.\nThe problem with this is that AIXI will apply its intelligence to the problem of taking control of the reward button. This is the problem of wireheading.\nThis kind of behavior is potentially very difficult to anticipate; the system may deceptively behave as intended during training, planning to take control after deployment. This is called a \"treacherous turn\".\nMaybe we build the reward button into the agent, as a black box which issues rewards based on what is going on. The box could be an intelligent sub-agent in its own right, which figures out what rewards humans would want to give. The box could even defend itself by issuing punishments for actions aimed at modifying the box.\nIn the end, though, if the agent understands the situation, it will be motivated to take control anyway.\nIf the agent is told to get high output from \"the button\" or \"the box\", then it will be motivated to hack those things. However, if you run the expected outcomes of plans through the actual reward-issuing box, then plans to hack the box are evaluated by the box itself, which won't find the idea appealing.\nDaniel Dewey calls the second sort of agent an observation-utility maximizer. (Others have included observation-utility agents within a more general notion of reinforcement learning.)\nI find it very interesting how you can try all sorts of things to stop an RL agent from wireheading, but the agent keeps working against it. Then, you make the shift to observation-utility agents and the problem vanishes.\nHowever, we still have the problem of specifying  \\(U\\). Daniel Dewey points out that observation-utility agents can still use learning to approximate  \\(U\\) over time; we just can't treat  \\(U\\) as a black box. An RL agent tries to learn to predict the reward function, whereas an observation-utility agent uses estimated utility functions from a human-specified value-learning prior.\nHowever, it's still difficult to specify a learning process which doesn't lead to other problems. For example, if you're trying to learn what humans want, how do you robustly identify \"humans\" in the world? Merely statistically decent object recognition could lead back to wireheading.\nEven if you successfully solve that problem, the agent might correctly locate value in the human, but might still be motivated to change human values to be easier to satisfy. For example, suppose there is a drug which modifies human preferences to only care about using the drug. An observation-utility agent could be motivated to give humans that drug in order to make its job easier. This is called the human manipulation problem.\nAnything marked as the true repository of value gets hacked. 
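Here's a heavily simplified sketch of that contrast between reward maximization and observation-utility maximization. The three plans and their numbers are invented, and real agents obviously don't come with this table precomputed; the point is only where each agent looks when scoring a plan.

```python
# Toy plan evaluation. Each plan comes with the reward signal the agent
# predicts the (possibly hacked) button will emit, and the utility the
# agent's current utility function assigns to the plan's predicted outcome.
plans = {
    "help the humans":  {"predicted_reward": 0.8, "utility_of_outcome": 0.8},
    "do nothing":       {"predicted_reward": 0.1, "utility_of_outcome": 0.1},
    "seize the button": {"predicted_reward": 1.0, "utility_of_outcome": 0.0},
}

def reward_maximizer_choice(plans):
    # Picks whatever it predicts will make the reward signal largest,
    # including plans that work by corrupting the signal itself.
    return max(plans, key=lambda p: plans[p]["predicted_reward"])

def observation_utility_choice(plans):
    # Runs each plan's predicted outcome through its current evaluation
    # of outcomes, so corrupting the evaluator isn't appealing.
    return max(plans, key=lambda p: plans[p]["utility_of_outcome"])

print("reward maximizer picks:         ", reward_maximizer_choice(plans))
print("observation-utility agent picks:", observation_utility_choice(plans))
```

The toy hard-codes the difference, of course; the substantive claim is that running predicted outcomes through the agent's current evaluation removes the incentive to corrupt the signal, not that the evaluation itself is easy to specify.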
Whether this is one of the four types of Goodharting, or a fifth, or something all its own, it seems like a theme.\n \n\n\n\n \nThe challenge, then, is to create stable pointers to what we value: an indirect reference to values not directly available to be optimized, which doesn't thereby encourage hacking the repository of value.\nOne important point is made by Tom Everitt et al. in \"Reinforcement Learning with a Corrupted Reward Channel\": the way you set up the feedback loop makes a huge difference.\nThey draw the following picture:\n \n\n\nIn Standard RL, the feedback about the value of a state comes from the state itself, so corrupt states can be \"self-aggrandizing\".\nIn Decoupled RL, the feedback about the quality of a state comes from some other state, making it possible to learn correct values even when some feedback is corrupt.\n\nIn some sense, the challenge is to put the original, small agent in the feedback loop in the right way. However, the problems with updateless reasoning mentioned earlier make this hard; the original agent doesn't know enough.\nOne way to try to address this is through intelligence amplification: try to turn the original agent into a more capable one with the same values, rather than creating a successor agent from scratch and trying to get value loading right.\nFor example, Paul Christiano proposes an approach in which the small agent is simulated many times in a large tree, which can perform complex computations by splitting problems into parts.\nHowever, this is still fairly demanding for the small agent: it doesn't just need to know how to break problems down into more tractable pieces; it also needs to know how to do so without giving rise to malign subcomputations.\nFor example, since the small agent can use the copies of itself to get a lot of computational power, it could easily try to use a brute-force search for solutions that ends up running afoul of Goodhart's Law.\nThis issue is the subject of the next section: subsystem alignment.\n\nThis is part of Abram Demski and Scott Garrabrant's Embedded Agency sequence. Continue to the next part.\n\nThe post Robust Delegation appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Robust Delegation", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=14", "id": "c0a05c8abedfd61b50b3ce02530350cc"} {"text": "Embedded World-Models\n\n\n \n\nAn agent which is larger than its environment can:\n \n\nHold an exact model of the environment in its head.\nThink through the consequences of every potential course of action.\nIf it doesn't know the environment perfectly, hold every possible way the environment could be in its head, as is the case with Bayesian uncertainty.\n\n \nAll of these are typical of notions of rational agency.\nAn embedded agent can't do any of those things, at least not in any straightforward way.\n \n\n \nOne difficulty is that, since the agent is part of the environment, modeling the environment in every detail would require the agent to model itself in every detail, which would require the agent's self-model to be as \"big\" as the whole agent. An agent can't fit inside its own head.\nThe lack of a crisp agent/environment boundary forces us to grapple with paradoxes of self-reference. As if representing the rest of the world weren't already hard enough.\nEmbedded World-Models have to represent the world in a way more appropriate for embedded agents. 
Problems in this cluster include:\n \n\n\nthe \"realizability\" / \"grain of truth\" problem: the real world isn't in the agent's hypothesis space\nlogical uncertainty\nhigh-level models\nmulti-level models\nontological crises\nnaturalized induction, the problem that the agent must incorporate its model of itself into its world-model\nanthropic reasoning, the problem of reasoning with how many copies of yourself exist\n\n\n \n\n\nIn a Bayesian setting, where an agent's uncertainty is quantified by a probability distribution over possible worlds, a common assumption is \"realizability\": the true underlying environment which is generating the observations is assumed to have at least some probability in the prior.\nIn game theory, this same property is described by saying a prior has a \"grain of truth\". It should be noted, though, that there are additional barriers to getting this property in a game-theoretic setting; so, in their common usage cases, \"grain of truth\" is technically demanding while \"realizability\" is a technical convenience.\nRealizability is not totally necessary in order for Bayesian reasoning to make sense. If you think of a set of hypotheses as \"experts\", and the current posterior probability as how much you \"trust\" each expert, then learning according to Bayes' Law, \\(P(h|e) = \frac{P(e|h) \\cdot P(h)}{P(e)}\\), ensures a relative bounded loss property.\nSpecifically, if you use a prior \\(\\pi\\), the amount worse you are in comparison to each expert \\(h\\) is at most  \\(\\log \\pi(h)\\), since you assign at least probability \\(\\pi(h) \\cdot h(e)\\) to seeing a sequence of evidence \\(e\\). Intuitively, \\(\\pi(h)\\) is your initial trust in expert \\(h\\), and in each case where it is even a little bit more correct than you, you increase your trust accordingly. The way you do this ensures you assign an expert probability 1 and hence copy it precisely before you lose more than \\(\\log \\pi(h)\\) compared to it.\nThe prior AIXI is based on is the Solomonoff prior. It is defined as the output of a universal Turing machine (UTM) whose inputs are coin-flips. \nIn other words, feed a UTM a random program. Normally, you'd think of a UTM as only being able to simulate deterministic machines. Here, however, the initial inputs can instruct the UTM to use the rest of the infinite input tape as a source of randomness to simulate a stochastic Turing machine.\nCombining this with the previous idea about viewing Bayesian learning as a way of allocating \"trust\" to \"experts\" which meets a bounded loss condition, we can see the Solomonoff prior as a kind of ideal machine learning algorithm which can learn to act like any algorithm you might come up with, no matter how clever.\nFor this reason, we shouldn't necessarily think of AIXI as \"assuming the world is computable\", even though it reasons via a prior over computations. It's getting bounded loss on its predictive accuracy as compared with any computable predictor. We should rather say that AIXI assumes all possible algorithms are computable, not that the world is.\nHowever, lacking realizability can cause trouble if you are looking for anything more than bounded-loss predictive accuracy:\n\nthe posterior can oscillate forever;\nprobabilities may not be calibrated;\nestimates of statistics such as the mean may be arbitrarily bad;\nestimates of latent variables may be bad;\nand the identification of causal structure may not work.\n\nSo does AIXI perform well without a realizability assumption? We don't know. 
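The bounded-loss property is easy to check numerically. In this sketch (experts, prior weights, and data all invented for illustration), a Bayesian mixture over three constant-probability forecasters of a binary sequence never does worse than the best of them by more than the log of one over that expert's prior weight, even though none of the experts matches the true process, so realizability fails.

```python
import numpy as np

rng = np.random.default_rng(2)

# Data: a biased coin that none of the experts exactly models.
true_p = 0.7
data = rng.random(2_000) < true_p

# Experts: each predicts heads with a fixed probability.
experts = np.array([0.2, 0.5, 0.9])
prior = np.array([0.1, 0.3, 0.6])

def log_loss(p_heads, outcomes):
    # Cumulative negative log likelihood of the sequence under one forecaster.
    probs = np.where(outcomes, p_heads, 1.0 - p_heads)
    return -np.sum(np.log(probs))

# The Bayesian mixture predicts, suffers its loss, then updates its trust.
weights = prior.copy()
mixture_loss = 0.0
for outcome in data:
    p_heads = np.sum(weights * experts)
    p_outcome = p_heads if outcome else 1.0 - p_heads
    mixture_loss += -np.log(p_outcome)
    likelihoods = np.where(outcome, experts, 1.0 - experts)
    weights = weights * likelihoods
    weights /= weights.sum()

expert_losses = np.array([log_loss(e, data) for e in experts])
best = np.argmin(expert_losses)
print(f"best expert's log loss: {expert_losses[best]:.1f}")
print(f"mixture's log loss:     {mixture_loss:.1f}")
print(f"bound (best expert's loss + log(1/prior)): "
      f"{expert_losses[best] - np.log(prior[best]):.1f}")
```

The regret bound holds no matter what sequence shows up; what you don't get without realizability is any promise that the mixture's probabilities settle down to something sensible.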
Despite getting bounded loss for predictions without realizability, existing optimality results for its actions require an added realizability assumption.\nFirst, if the environment really is sampled from the Solomonoff distribution, AIXI gets the maximum expected reward. But this is fairly trivial; it is essentially the definition of AIXI.\nSecond, if we modify AIXI to take somewhat randomized actions—Thompson sampling—there is an asymptotic optimality result for environments which act like any stochastic Turing machine.\nSo, either way, realizability was assumed in order to prove anything. (See Jan Leike, Nonparametric General Reinforcement Learning.)\nBut the concern I'm pointing at is not \"the world might be uncomputable, so we don't know if AIXI will do well\"; this is more of an illustrative case. The concern is that AIXI is only able to define intelligence or rationality by constructing an agent much, much bigger than the environment which it has to learn about and act within.\n \n\n \nLaurent Orseau provides a way of thinking about this in \"Space-Time Embedded Intelligence\". However, his approach defines the intelligence of an agent in terms of a sort of super-intelligent designer who thinks about reality from outside, selecting an agent to place into the environment.\nEmbedded agents don't have the luxury of stepping outside of the universe to think about how to think. What we would like would be a theory of rational belief for situated agents which provides foundations that are similarly as strong as the foundations Bayesianism provides for dualistic agents.\nImagine a computer science theory person who is having a disagreement with a programmer. The theory person is making use of an abstract model. The programmer is complaining that the abstract model isn't something you would ever run, because it is computationally intractable. The theory person responds that the point isn't to ever run it. Rather, the point is to understand some phenomenon which will also be relevant to more tractable things which you would want to run.\nI bring this up in order to emphasize that my perspective is a lot more like the theory person's. I'm not talking about AIXI to say \"AIXI is an idealization you can't run\". The answers to the puzzles I'm pointing at don't need to run. I just want to understand some phenomena.\nHowever, sometimes a thing that makes some theoretical models less tractable also makes that model too different from the phenomenon we're interested in.\nThe way AIXI wins games is by assuming we can do true Bayesian updating over a hypothesis space, assuming the world is in our hypothesis space, etc. So it can tell us something about the aspect of realistic agency that's approximately doing Bayesian updating over an approximately-good-enough hypothesis space. But embedded agents don't just need approximate solutions to that problem; they need to solve several problems that are different in kind from that problem.\n\n\nOne major obstacle a theory of embedded agency must deal with is self-reference.\nParadoxes of self-reference such as the liar paradox make it not just wildly impractical, but in a certain sense impossible for an agent's world-model to accurately reflect the world.\nThe liar paradox concerns the status of the sentence \"This sentence is not true\". 
If it were true, it must be false; and if not true, it must be true.\nThe difficulty comes in part from trying to draw a map of a territory which includes the map itself.\n \n\n \nThis is fine if the world \"holds still\" for us; but because the map is in the world, different maps create different worlds.\nSuppose our goal is to make an accurate map of the final route of a road which is currently under construction. Suppose we also know that the construction team will get to see our map, and that construction will proceed so as to disprove whatever map we make. This puts us in a liar-paradox-like situation.\n \n\n \nProblems of this kind become relevant for decision-making in the theory of games. A simple game of rock-paper-scissors can introduce a liar paradox if the players try to win, and can predict each other better than chance.\nGame theory solves this type of problem with game-theoretic equilibria. But the problem ends up coming back in a different way.\nI mentioned that the problem of realizability takes on a different character in the context of game theory. In an ML setting, realizability is a potentially unrealistic assumption, but can usually be assumed consistently nonetheless.\nIn game theory, on the other hand, the assumption itself may be inconsistent. This is because games commonly yield paradoxes of self-reference.\n\n \n\n \n\nBecause there are so many agents, it is no longer possible in game theory to conveniently make an \"agent\" a thing which is larger than a world. So game theorists are forced to investigate notions of rational agency which can handle a large world.\nUnfortunately, this is done by splitting up the world into \"agent\" parts and \"non-agent\" parts, and handling the agents in a special way. This is almost as bad as dualistic models of agency.\nIn rock-paper-scissors, the liar paradox is resolved by stipulating that each player play each move with \\(1/3\\) probability. If one player plays this way, then the other loses nothing by doing so. This way of introducing probabilistic play to resolve would-be paradoxes of game theory is called a Nash equilibrium.\nWe can use Nash equilibria to prevent the assumption that the agents correctly understand the world they're in from being inconsistent. However, that works just by telling the agents what the world looks like. What if we want to model agents who learn about the world, more like AIXI?\nThe grain of truth problem is the problem of formulating a reasonably bound prior probability distribution which would allow agents playing games to place some positive probability on each other's true (probabilistic) behavior, without knowing it precisely from the start.\nUntil recently, known solutions to the problem were quite limited. Benja Fallenstein, Jessica Taylor, and Paul Christiano's \"Reflective Oracles: A Foundation for Classical Game Theory\" provides a very general solution. For details, see \"A Formal Solution to the Grain of Truth Problem\" by Jan Leike, Jessica Taylor, and Benja Fallenstein.\nYou might think that stochastic Turing machines can represent Nash equilibria just fine.\n \n\n \nBut if you're trying to produce Nash equilibria as a result of reasoning about other agents, you'll run into trouble. If each agent models the other's computation and tries to run it to see what the other agent does, you've just got an infinite loop.\nThere are some questions Turing machines just can't answer—in particular, questions about the behavior of Turing machines. 
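The infinite-loop problem is easy to reproduce directly. In this toy of my own construction, each rock-paper-scissors player is a function that tries to win by simulating its opponent; calling either one just recurses until Python gives up.

```python
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
COUNTER = {loser: winner for winner, loser in BEATS.items()}  # what beats each move

def player_a():
    # "Simulate B, then play the move that beats whatever B plays."
    return COUNTER[player_b()]

def player_b():
    # B reasons about A in exactly the same way.
    return COUNTER[player_a()]

try:
    player_a()
except RecursionError:
    print("mutual simulation never bottoms out: RecursionError")
```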
The halting problem is the classic example.\nTuring studied \"oracle machines\" to examine what would happen if we could answer such questions. An oracle is like a book containing some answers to questions which we were unable to answer before.\nBut ordinarily, we get a hierarchy. Type B machines can answer questions about whether type A machines halt, type C machines have the answers about types A and B, and so on, but no machines have answers about their own type.\n \n\n \nReflective oracles work by twisting the ordinary Turing universe back on itself, so that rather than an infinite hierarchy of ever-stronger oracles, you define an oracle that serves as its own oracle machine.\n \n\n \nThis would normally introduce contradictions, but reflective oracles avoid this by randomizing their output in cases where they would run into paradoxes. So reflective oracle machines are stochastic, but they're more powerful than regular stochastic Turing machines.\nThat's how reflective oracles address the problems we mentioned earlier of a map that's itself part of the territory: randomize.\n \n\n \nReflective oracles also solve the problem with game-theoretic notions of rationality I mentioned earlier. It allows agents to be reasoned about in the same manner as other parts of the environment, rather than treating them as a fundamentally special case. They're all just computations-with-oracle-access.\nHowever, models of rational agents based on reflective oracles still have several major limitations. One of these is that agents are required to have unlimited processing power, just like AIXI, and so are assumed to know all of the consequences of their own beliefs.\nIn fact, knowing all the consequences of your beliefs—a property known as logical omniscience—turns out to be rather core to classical Bayesian rationality.\n\nSo far, I've been talking in a fairly naive way about the agent having beliefs about hypotheses, and the real world being or not being in the hypothesis space.\nIt isn't really clear what any of that means.\nDepending on how we define things, it may actually be quite possible for an agent to be smaller than the world and yet contain the right world-model—it might know the true physics and initial conditions, but only be capable of inferring their consequences very approximately.\nHumans are certainly used to living with shorthands and approximations. But realistic as this scenario may be, it is not in line with what it usually means for a Bayesian to know something. A Bayesian knows the consequences of all of its beliefs.\nUncertainty about the consequences of your beliefs is logical uncertainty. In this case, the agent might be empirically certain of a unique mathematical description pinpointing which universe she's in, while being logically uncertain of most consequences of that description.\nModeling logical uncertainty requires us to have a combined theory of logic (reasoning about implications) and probability (degrees of belief).\nLogic and probability theory are two great triumphs in the codification of rational thought. Logic provides the best tools for thinking about self-reference, while probability provides the best tools for thinking about decision-making. However, the two don't work together as well as one might think.\n \n\n \nThey may seem superficially compatible, since probability theory is an extension of Boolean logic. 
However, Gödel's first incompleteness theorem shows that any sufficiently rich logical system is incomplete: not only does it fail to decide every sentence as true or false, but it also has no computable extension which manages to do so.\n(See the post \"An Untrollable Mathematician Illustrated\" for more illustration of how this messes with probability theory.)\nThis also applies to probability distributions: no computable distribution can assign probabilities in a way that's consistent with a sufficiently rich theory. This forces us to choose between using an uncomputable distribution, or using a distribution which is inconsistent.\nSounds like an easy choice, right? The inconsistent theory is at least computable, and we are after all trying to develop a theory of logical non-omniscience. We can just continue to update on facts which we prove, bringing us closer and closer to consistency.\nUnfortunately, this doesn't work out so well, for reasons which connect back to realizability. Remember that there are no computable probability distributions consistent with all consequences of sound theories. So our non-omniscient prior doesn't even contain a single correct hypothesis.\nThis causes pathological behavior as we condition on more and more true mathematical beliefs. Beliefs wildly oscillate rather than approaching reasonable estimates.\nTaking a Bayesian prior on mathematics, and updating on whatever we prove, does not seem to capture mathematical intuition and heuristic conjecture very well—unless we restrict the domain and craft a sensible prior.\nProbability is like a scale, with worlds as weights. An observation eliminates some of the possible worlds, removing weights and shifting the balance of beliefs.\nLogic is like a tree, growing from the seed of axioms according to inference rules. For real-world agents, the process of growth is never complete; you never know all the consequences of each belief.\n\nWithout knowing how to combine the two, we can't characterize reasoning probabilistically about math. But the \"scale versus tree\" problem also means that we don't know how ordinary empirical reasoning works.\nBayesian hypothesis testing requires each hypothesis to clearly declare which probabilities it assigns to which observations. That way, you know how much to rescale the odds when you make an observation. If we don't know the consequences of a belief, we don't know how much credit to give it for making predictions.\nThis is like not knowing where to place the weights on the scales of probability. We could try putting weights on both sides until a proof rules one out, but then the beliefs just oscillate forever rather than doing anything useful.\nThis forces us to grapple directly with the problem of a world that's larger than the agent. We want some notion of boundedly rational beliefs about uncertain consequences; but any computable beliefs about logic must have left out something, since the tree of logical implications will grow larger than any container.\nFor a Bayesian, the scales of probability are balanced in precisely such a way that no Dutch book can be made against them—no sequence of bets that are a sure loss. But you can only account for all Dutch books if you know all the consequences of your beliefs. Absent that, someone who has explored other parts of the tree can Dutch-book you.\nBut human mathematicians don't seem to run into any special difficulty in reasoning about mathematical uncertainty, any more than we do with empirical uncertainty. 
So what characterizes good reasoning under mathematical uncertainty, if not immunity to making bad bets?\nOne answer is to weaken the notion of Dutch books so that we only allow bets based on quickly computable parts of the tree. This is one of the ideas behind Garrabrant et al.'s \"Logical Induction\", an early attempt at defining something like \"Solomonoff induction, but for reasoning that incorporates mathematical uncertainty\".\n\nAnother consequence of the fact that the world is bigger than you is that you need to be able to use high-level world models: models which involve things like tables and chairs.\nThis is related to the classical symbol grounding problem; but since we want a formal analysis which increases our trust in some system, the kind of model which interests us is somewhat different. This also relates to transparency and informed oversight: world-models should be made out of understandable parts.\nA related question is how high-level reasoning and low-level reasoning relate to each other and to intermediate levels: multi-level world models.\nStandard probabilistic reasoning doesn't provide a very good account of this sort of thing. It's as though you have different Bayes nets which describe the world at different levels of accuracy, and processing power limitations force you to mostly use the less accurate ones, so you have to decide how to jump to the more accurate as needed.\nAdditionally, the models at different levels don't line up perfectly, so you have a problem of translating between them; and the models may have serious contradictions between them. This might be fine, since high-level models are understood to be approximations anyway, or it could signal a serious problem in the higher- or lower-level models, requiring their revision.\nThis is especially interesting in the case of ontological crises, in which objects we value turn out not to be a part of \"better\" models of the world.\nIt seems fair to say that everything humans value exists in high-level models only, which from a reductionistic perspective is \"less real\" than atoms and quarks. However, because our values aren't defined on the low level, we are able to keep our values even when our knowledge of the low level radically shifts. (We would also like to be able to say something about what happens to values if the high level radically shifts.)\nAnother critical aspect of embedded world models is that the agent itself must be in the model, since the agent seeks to understand the world, and the world cannot be fully separated from oneself. This opens the door to difficult problems of self-reference and anthropic decision theory.\nNaturalized induction is the problem of learning world-models which include yourself in the environment. This is challenging because (as Caspar Oesterheld has put it) there is a type mismatch between \"mental stuff\" and \"physics stuff\".\nAIXI conceives of the environment as if it were made with a slot which the agent fits into. We might intuitively reason in this way, but we can also understand a physical perspective from which this looks like a bad model. We might imagine instead that the agent separately represents: self-knowledge available to introspection; hypotheses about what the universe is like; and a \"bridging hypothesis\" connecting the two.\nThere are interesting questions of how this could work. There's also the question of whether this is the right structure at all. 
It's certainly not how I imagine babies learning.\nThomas Nagel would say that this way of approaching the problem involves \"views from nowhere\"; each hypothesis posits a world as if seen from outside. This is perhaps a strange thing to do.\n\nA special case of agents needing to reason about themselves is agents needing to reason about their future self.\nTo make long-term plans, agents need to be able to model how they'll act in the future, and have a certain kind of trust in their future goals and reasoning abilities. This includes trusting future selves that have learned and grown a great deal.\nIn a traditional Bayesian framework, \"learning\" means Bayesian updating. But as we noted, Bayesian updating requires that the agent start out large enough to consider a bunch of ways the world can be, and learn by ruling some of these out.\nEmbedded agents need resource-limited, logically uncertain updates, which don't work like this.\nUnfortunately, Bayesian updating is the main way we know how to think about an agent progressing through time as one unified agent. The Dutch book justification for Bayesian reasoning is basically saying this kind of updating is the only way to not have the agent's actions on Monday work at cross purposes, at least a little, to the agent's actions on Tuesday.\nEmbedded agents are non-Bayesian. And non-Bayesian agents tend to get into wars with their future selves.\nWhich brings us to our next set of problems: robust delegation.\n\nThis is part of Abram Demski and Scott Garrabrant's Embedded Agency sequence. Next part: Robust Delegation.\n\nThe post Embedded World-Models appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Embedded World-Models", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=14", "id": "78c71eea7a647509754ec31eb81d9e8b"} {"text": "Decision Theory\n\n\n \n\nDecision theory and artificial intelligence typically try to compute something resembling\n$$\\underset{a \\ \\in \\ Actions}{\\mathrm{argmax}} \\ \\ f(a).$$\nI.e., maximize some function of the action. This tends to assume that we can detangle things enough to see outcomes as a function of actions.\nFor example, AIXI represents the agent and the environment as separate units which interact over time through clearly defined i/o channels, so that it can then choose actions maximizing reward.\n \n\n \nWhen the agent model is a part of the environment model, it can be significantly less clear how to consider taking alternative actions.\n \n\n \nFor example, because the agent is smaller than the environment, there can be other copies of the agent, or things very similar to the agent. This leads to contentious decision-theory problems such as the Twin Prisoner's Dilemma and Newcomb's problem.\nIf Emmy Model 1 and Emmy Model 2 have had the same experiences and are running the same source code, should Emmy Model 1 act like her decisions are steering both robots at once? 
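One way to make the question concrete is to score each move in a toy Twin Prisoner's Dilemma under the two readings of "my decision". The payoff numbers are the conventional textbook ones, and the two valuation functions are just bookkeeping for the two readings, not a proposed decision theory.

```python
# Prisoner's dilemma payoffs for the row player.
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def value_if_i_control_both(my_move):
    # The twin runs my source code, so whatever I choose, it chooses too.
    return PAYOFF[(my_move, my_move)]

def value_if_twin_is_fixed(my_move, twins_move):
    # The twin is just part of the environment; its move doesn't depend on mine.
    return PAYOFF[(my_move, twins_move)]

for move in ("cooperate", "defect"):
    print(f"{move:9s} | controls both: {value_if_i_control_both(move)}"
          f" | twin fixed at cooperate: {value_if_twin_is_fixed(move, 'cooperate')}"
          f" | twin fixed at defect: {value_if_twin_is_fixed(move, 'defect')}")
```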
Depending on how you draw the boundary around \"yourself\", you might think you control the action of both copies, or only your own.\nThis is an instance of the problem of counterfactual reasoning: how do we evaluate hypotheticals like \"What if the sun suddenly went out\"?\nProblems of adapting decision theory to embedded agents include:\n \n\n\ncounterfactuals\nNewcomblike reasoning, in which the agent interacts with copies of itself\nreasoning about other agents more broadly\nextortion problems\ncoordination problems\nlogical counterfactuals\nlogical updatelessness\n\n\n \n\n\nThe most central example of why agents need to think about counterfactuals comes from counterfactuals about their own actions.\nThe difficulty with action counterfactuals can be illustrated by the five-and-ten problem. Suppose we have the option of taking a five dollar bill or a ten dollar bill, and all we care about in the situation is how much money we get. Obviously, we should take the $10.\nHowever, it is not so easy as it seems to reliably take the $10.\nIf you reason about yourself as just another part of the environment, then you can know your own behavior. If you can know your own behavior, then it becomes difficult to reason about what would happen if you behaved differently.\nThis throws a monkey wrench into many common reasoning methods. How do we formalize the idea \"Taking the $10 would lead to good consequences, while taking the $5 would lead to bad consequences,\" when sufficiently rich self-knowledge would reveal one of those scenarios as inconsistent?\nAnd if we can't formalize any idea like that, how do real-world agents figure out to take the $10 anyway?\nIf we try to calculate the expected utility of our actions by Bayesian conditioning, as is common, knowing our own behavior leads to a divide-by-zero error when we try to calculate the expected utility of actions we know we don't take: \\(\\lnot A\\) implies \\(P(A)=0\\), which implies \\(P(B \\& A)=0\\), which implies\n$$P(B|A) = \\frac{P(B \\& A)}{P(A)} = \\frac{0}{0}.$$\nBecause the agent doesn't know how to separate itself from the environment, it gets gnashing internal gears when it tries to imagine taking different actions.\nBut the biggest complication comes from Löb's Theorem, which can make otherwise reasonable-looking agents take the $5 because \"If I take the $10, I get $0\"! And in a stable way—the problem can't be solved by the agent learning or thinking about the problem more.\nThis might be hard to believe; so let's look at a detailed example. The phenomenon can be illustrated by the behavior of simple logic-based agents reasoning about the five-and-ten problem.\nConsider this example:\n \n\n \nWe have the source code for an agent and the universe. They can refer to each other through the use of quining. The universe is simple; the universe just outputs whatever the agent outputs.\nThe agent spends a long time searching for proofs about what happens if it takes various actions. If for some \\(x\\) and \\(y\\) equal to \\(0\\), \\(5\\), or \\(10\\), it finds a proof that taking the \\(5\\) leads to \\(x\\) utility, that taking the \\(10\\) leads to \\(y\\) utility, and that \\(x>y\\), it will naturally take the \\(5\\). We expect that it won't find such a proof, and will instead pick the default action of taking the \\(10\\).\nIt seems easy when you just imagine an agent trying to reason about the universe. 
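In pseudocode, the setup just described looks roughly like the sketch below. This is only an illustration of the structure: the `provable` function is a stand-in for a bounded proof search in a theory that can talk about `A()` and `U()` via quining, and it is stubbed out here so the snippet runs.

```python
def provable(statement: str) -> bool:
    # Stand-in for a bounded proof search over a formal theory that can refer
    # to A() and U() by quining. Stubbed out here, so this sketch only shows
    # the structure of the agent, not real proof search.
    return False

def A() -> int:
    """The agent: searches for proofs about the consequences of its actions."""
    for x in (0, 5, 10):
        for y in (0, 5, 10):
            if x > y and provable(
                f"A() = 5 implies U() = {x}, and A() = 10 implies U() = {y}"
            ):
                return 5
    return 10  # default action if no such proof is found

def U() -> int:
    """The universe: it just outputs whatever the agent outputs."""
    return A()

print(U())  # with the stubbed prover, the agent takes the $10 as we'd hope
```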
Yet it turns out that if the amount of time spent searching for proofs is enough, the agent will always choose \\(5\\)!\nThe proof that this is so is by Löb's theorem. Löb's theorem says that, for any proposition \\(P\\), if you can prove that a proof of \\(P\\) would imply the truth of \\(P\\), then you can prove \\(P\\). In symbols, with\"\\(□X\\)\" meaning \"\\(X\\) is provable\":\n$$□(□P \\to P) \\to □P.$$\nIn the version of the five-and-ten problem I gave, \"\\(P\\)\" is the proposition \"if the agent outputs \\(5\\) the universe outputs \\(5\\), and if the agent outputs \\(10\\) the universe outputs \\(0\\)\".\nSupposing it is provable, the agent will eventually find the proof, and return \\(5\\) in fact. This makes the sentence true, since the agent outputs \\(5\\) and the universe outputs \\(5\\), and since it's false that the agent outputs \\(10\\). This is because false propositions like \"the agent outputs \\(10\\)\" imply everything, including the universe outputting \\(5\\).\nThe agent can (given enough time) prove all of this, in which case the agent in fact proves the proposition \"if the agent outputs \\(5\\) the universe outputs \\(5\\), and if the agent outputs \\(10\\) the universe outputs \\(0\\)\". And as a result, the agent takes the $5.\nWe call this a \"spurious proof\": the agent takes the $5 because it can prove that if it takes the $10 it has low value, because it takes the $5. It sounds circular, but sadly, is logically correct. More generally, when working in less proof-based settings, we refer to this as a problem of spurious counterfactuals.\nThe general pattern is: counterfactuals may spuriously mark an action as not being very good. This makes the AI not take the action. Depending on how the counterfactuals work, this may remove any feedback which would \"correct\" the problematic counterfactual; or, as we saw with proof-based reasoning, it may actively help the spurious counterfactual be \"true\".\nNote that because the proof-based examples are of significant interest to us, \"counterfactuals\" actually have to be counterlogicals; we sometimes need to reason about logically impossible \"possibilities\". This rules out most existing accounts of counterfactual reasoning.\nYou may have noticed that I slightly cheated. The only thing that broke the symmetry and caused the agent to take the $5 was the fact that \"\\(5\\)\" was the action that was taken when a proof was found, and \"\\(10\\)\" was the default. We could instead consider an agent that looks for any proof at all about what actions lead to what utilities, and then takes the action that is better. This way, which action is taken is dependent on what order we search for proofs.\nLet's assume we search for short proofs first. In this case, we will take the $10, since it is very easy to show that \\(A()=5\\) leads to \\(U()=5\\) and \\(A()=10\\) leads to \\(U()=10\\).\nThe problem is that spurious proofs can be short too, and don't get much longer when the universe gets harder to predict. If we replace the universe with one that is provably functionally the same, but is harder to predict, the shortest proof will short-circuit the complicated universe and be spurious.\n\nPeople often try to solve the problem of counterfactuals by suggesting that there will always be some uncertainty. An AI may know its source code perfectly, but it can't perfectly know the hardware it is running on.\nDoes adding a little uncertainty solve the problem? 
Often not:\n\nThe proof of the spurious counterfactual often still goes through; if you think you are in a five-and-ten problem with a 95% certainty, you can have the usual problem within that 95%.\nAdding uncertainty to make counterfactuals well-defined doesn't get you any guarantee that the counterfactuals will be reasonable. Hardware failures aren't often what you want to expect when considering alternate actions.\n\nConsider this scenario: You are confident that you almost always take the left path. However, it is possible (though unlikely) for a cosmic ray to damage your circuits, in which case you could go right—but you would then be insane, which would have many other bad consequences.\nIf this reasoning in itself is why you always go left, you've gone wrong.\nSimply ensuring that the agent has some uncertainty about its actions doesn't ensure that the agent will have remotely reasonable counterfactual expectations. However, one thing we can try instead is to ensure the agent actually takes each action with some probability. This strategy is called ε-exploration.\nε-exploration ensures that if an agent plays similar games on enough occasions, it can eventually learn realistic counterfactuals (modulo a concern of realizability which we will get to later).\nε-exploration only works if it ensures that the agent itself can't predict whether it is about to ε-explore. In fact, a good way to implement ε-exploration is via the rule \"if the agent is too sure about its action, it takes a different one\".\nFrom a logical perspective, the unpredictability of ε-exploration is what prevents the problems we've been discussing. From a learning-theoretic perspective, if the agent could know it wasn't about to explore, then it could treat that as a different case—failing to generalize lessons from its exploration. This gets us back to a situation where we have no guarantee that the agent will learn better counterfactuals. Exploration may be the only source of data for some actions, so we need to force the agent to take that data into account, or it may not learn.\nHowever, even ε-exploration doesn't seem to get things exactly right. Observing the result of ε-exploration shows you what happens if you take an action unpredictably; the consequences of taking that action as part of business-as-usual may be different.\nSuppose you're an ε-explorer who lives in a world of ε-explorers. You're applying for a job as a security guard, and you need to convince the interviewer that you're not the kind of person who would run off with the stuff you're guarding. They want to hire someone who has too much integrity to lie and steal, even if the person thought they could get away with it.\n \n\n \nSuppose the interviewer is an amazing judge of character—or just has read access to your source code.\n \n\n \nIn this situation, stealing might be a great option as an ε-exploration action, because the interviewer may not be able to predict your theft, or may not think punishment makes sense for a one-off anomaly.\n \n\n \nBut stealing is clearly a bad idea as a normal action, because you'll be seen as much less reliable and trustworthy.\n \n\n \nIf we don't learn counterfactuals from ε-exploration, then, it seems we have no guarantee of learning realistic counterfactuals at all. 
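As a brief aside, the rule mentioned earlier—"if the agent is too sure about its action, it takes a different one"—can be sketched in a few lines. This is purely illustrative; the function name and threshold are made up:

```python
import random

def epsilon_explore(planned_action, own_action_prediction, actions, epsilon=0.05):
    """own_action_prediction: the agent's probability estimates of its own action."""
    if own_action_prediction[planned_action] > 1 - epsilon:
        # Too predictable to itself: force exploration by taking another action.
        alternatives = [a for a in actions if a != planned_action]
        return random.choice(alternatives)
    return planned_action

# Example: an agent almost certain it will go left is forced to go right instead.
print(epsilon_explore("left", {"left": 0.999, "right": 0.001}, ["left", "right"]))
```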
But if we do learn from ε-exploration, it appears we still get things wrong in some cases.\nSwitching to a probabilistic setting doesn't cause the agent to reliably make \"reasonable\" choices, and neither does forced exploration.\nBut writing down examples of \"correct\" counterfactual reasoning doesn't seem hard from the outside!\nMaybe that's because from \"outside\" we always have a dualistic perspective. We are in fact sitting outside of the problem, and we've defined it as a function of an agent.\n \n\nHowever, an agent can't solve the problem in the same way from inside. From its perspective, its functional relationship with the environment isn't an observable fact. This is why counterfactuals are called \"counterfactuals\", after all.\n \n\n \nWhen I told you about the 5 and 10 problem, I first told you about the problem, and then gave you an agent. When one agent doesn't work well, we could consider a different agent.\nFinding a way to succeed at a decision problem involves finding an agent that when plugged into the problem takes the right action. The fact that we can even consider putting in different agents means that we have already carved the universe into an \"agent\" part, plus the rest of the universe with a hole for the agent—which is most of the work!\n\nAre we just fooling ourselves due to the way we set up decision problems, then? Are there no \"correct\" counterfactuals?\nWell, maybe we are fooling ourselves. But there is still something we are confused about! \"Counterfactuals are subjective, invented by the agent\" doesn't dissolve the mystery. There is something intelligent agents do, in the real world, to make decisions.\nSo I'm not talking about agents who know their own actions because I think there's going to be a big problem with intelligent machines inferring their own actions in the future. Rather, the possibility of knowing your own actions illustrates something confusing about determining the consequences of your actions—a confusion which shows up even in the very simple case where everything about the world is known and you just need to choose the larger pile of money.\nFor all that, humans don't seem to run into any trouble taking the $10.\nCan we take any inspiration from how humans make decisions?\nWell, suppose you're actually asked to choose between $10 and $5. You know that you'll take the $10. How do you reason about what would happen if you took the $5 instead?\nIt seems easy if you can separate yourself from the world, so that you only think of external consequences (getting $5).\n\nIf you think about yourself as well, the counterfactual starts seeming a bit more strange or contradictory. Maybe you have some absurd prediction about what the world would be like if you took the $5—like, \"I'd have to be blind!\"\nThat's alright, though. In the end you still see that taking the $5 would lead to bad consequences, and you still take the $10, so you're doing fine.\n \n\nThe challenge for formal agents is that an agent can be in a similar position, except it is taking the $5, knows it is taking the $5, and can't figure out that it should be taking the $10 instead, because of the absurd predictions it makes about what happens when it takes the $10.\nIt seems hard for a human to end up in a situation like that; yet when we try to write down a formal reasoner, we keep running into this kind of problem. 
So it indeed seems like human decision-making is doing something here that we don't yet understand.\n\nIf you're an embedded agent, then you should be able to think about yourself, just like you think about other objects in the environment. And other reasoners in your environment should be able to think about you too.\n \n\n \nIn the five-and-ten problem, we saw how messy things can get when an agent knows its own action before it acts. But this is hard to avoid for an embedded agent.\nIt's especially hard not to know your own action in standard Bayesian settings, which assume logical omniscience. A probability distribution assigns probability 1 to any fact which is logically true. So if a Bayesian agent knows its own source code, then it should know its own action.\nHowever, realistic agents who are not logically omniscient may run into the same problem. Logical omniscience forces the issue, but rejecting logical omniscience doesn't eliminate the issue.\nε-exploration does seem to solve that problem in many cases, by ensuring that agents have uncertainty about their choices and that the things they expect are based on experience.\n \n\n \nHowever, as we saw in the security guard example, even ε-exploration seems to steer us wrong when the results of exploring randomly differ from the results of acting reliably.\nExamples which go wrong in this way seem to involve another part of the environment that behaves like you—such as another agent very similar to yourself, or a sufficiently good model or simulation of you. These are called Newcomblike problems; an example is the Twin Prisoner's Dilemma mentioned above.\n \n\n \nIf the five-and-ten problem is about cutting a you-shaped piece out of the world so that the world can be treated as a function of your action, Newcomblike problems are about what to do when there are several approximately you-shaped pieces in the world.\nOne idea is that exact copies should be treated as 100% under your \"logical control\". For approximate models of you, or merely similar agents, control should drop off sharply as logical correlation decreases. But how does this work?\n\nNewcomblike problems are difficult for almost the same reason as the self-reference issues discussed so far: prediction. With strategies such as ε-exploration, we tried to limit the self-knowledge of the agent in an attempt to avoid trouble. But the presence of powerful predictors in the environment reintroduces the trouble. By choosing what information to share, predictors can manipulate the agent and choose their actions for them.\nIf there is something which can predict you, it might tell you its prediction, or related information, in which case it matters what you do in response to various things you could find out.\nSuppose you decide to do the opposite of whatever you're told. Then it isn't possible for the scenario to be set up in the first place. Either the predictor isn't accurate after all, or alternatively, the predictor doesn't share their prediction with you.\nOn the other hand, suppose there's some situation where you do act as predicted. Then the predictor can control how you'll behave, by controlling what prediction they tell you.\nSo, on the one hand, a powerful predictor can control you by selecting between the consistent possibilities. On the other hand, you are the one who chooses your pattern of responses in the first place. 
This means that you can set them up to your best advantage.\n\nSo far, we've been discussing action counterfactuals—how to anticipate consequences of different actions. This discussion of controlling your responses introduces the observation counterfactual—imagining what the world would be like if different facts had been observed.\nEven if there is no one telling you a prediction about your future behavior, observation counterfactuals can still play a role in making the right decision. Consider the following game:\nAlice receives a card at random which is either High or Low. She may reveal the card if she wishes. Bob then gives his probability \\(p\\) that Alice has a high card. Alice always loses \\(p^2\\) dollars. Bob loses \\(p^2\\) if the card is low, and \\((1-p)^2\\) if the card is high.\nBob has a proper scoring rule, so does best by giving his true belief. Alice just wants Bob's belief to be as much toward \"low\" as possible.\nSuppose Alice will play only this one time. She sees a low card. Bob is good at reasoning about Alice, but is in the next room and so can't read any tells. Should Alice reveal her card?\nSince Alice's card is low, if she shows it to Bob, she will lose no money, which is the best possible outcome. However, this means that in the counterfactual world where Alice sees a high card, she wouldn't be able to keep the secret—she might as well show her card in that case too, since her reluctance to show it would be as reliable a sign of \"high\".\nOn the other hand, if Alice doesn't show her card, she loses 25¢—but then she can use the same strategy in the other world, rather than losing $1. So, before playing the game, Alice would want to visibly commit to not reveal; this makes expected loss 25¢, whereas the other strategy has expected loss 50¢. By taking observation counterfactuals into account, Alice is able to keep secrets—without them, Bob could perfectly infer her card from her actions.\nThis game is equivalent to the decision problem called counterfactual mugging.\nUpdateless decision theory (UDT) is a proposed decision theory which can keep secrets in the high/low card game. UDT does this by recommending that the agent do whatever would have seemed wisest before—whatever your earlier self would have committed to do.\nAs it happens, UDT also performs well in Newcomblike problems.\nCould something like UDT be related to what humans are doing, if only implicitly, to get good results on decision problems? Or, if it's not, could it still be a good model for thinking about decision-making?\nUnfortunately, there are still some pretty deep difficulties here. UDT is an elegant solution to a fairly broad class of decision problems, but it only makes sense if the earlier self can foresee all possible situations.\nThis works fine in a Bayesian setting where the prior already contains all possibilities within itself. However, there may be no way to do this in a realistic embedded setting. An agent has to be able to think of new possibilities—meaning that its earlier self doesn't know enough to make all the decisions.\nAnd with that, we find ourselves squarely facing the problem of embedded world-models.\n\nThis is part of Abram Demski and Scott Garrabrant's Embedded Agency sequence. 
Continued here!\n\nThe post Decision Theory appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Decision Theory", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=15", "id": "e596cf56639cd82705abeb49eb11ef93"} {"text": "October 2018 Newsletter\n\n\n\nThe AI Alignment Forum has left beta!\n\n\nDovetailing with the launch, MIRI researchers Scott Garrabrant and Abram Demski will be releasing a new sequence introducing our research over the coming week, beginning here: Embedded Agents. (Shorter illustrated version here.)\n\nOther updates\n\nNew posts to the forum: Cooperative Oracles; When Wishful Thinking Works; (A → B) → A; Towards a New Impact Measure; In Logical Time, All Games are Iterated Games; EDT Solves 5-and-10 With Conditional Oracles\nThe Rocket Alignment Problem: Eliezer Yudkowsky considers a hypothetical world without knowledge of calculus and celestial mechanics, to illustrate MIRI's research and what we take to be the world's current level of understanding of AI alignment. (Also on LessWrong.)\nMore on MIRI's AI safety angle of attack: a comment on decision theory.\n\nNews and links\n\nDeepMind's safety team launches their own blog, with an inaugural post on specification, robustness, and assurance.\nWill MacAskill discusses moral uncertainty on FLI's AI safety podcast.\nGoogle Brain announces the Unrestricted Adversarial Examples Challenge.\nThe 80,000 Hours job board has many new postings, including head of operations for FHI, COO for BERI, and programme manager for CSER. Also taking applicants: summer internships at CHAI, and a scholarships program from FHI.\n\n\nThe post October 2018 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "October 2018 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=15", "id": "0e39d2ee7d86c061646d462fe9d4da9e"} {"text": "Announcing the new AI Alignment Forum\n\n\nThis is a guest post by Oliver Habryka, lead developer for LessWrong. Our gratitude to the LessWrong team for the hard work they've put into developing this resource, and our congratulations on today's launch!\n\nI am happy to announce that after two months of open beta, the AI Alignment Forum is launching today. The AI Alignment Forum is a new website built by the team behind LessWrong 2.0, to help create a new hub for technical AI alignment research and discussion. \nOne of our core goals when we designed the forum was to make it easier for new people to get started on doing technical AI alignment research. This effort was split into two major parts:\n\n1. Better introductory content\nWe have been coordinating with AI alignment researchers to create three new sequences of posts that we hope can serve as introductions to some of the most important core ideas in AI Alignment. The three new sequences will be: \n\n\nEmbedded Agency, written by Scott Garrabrant and Abram Demski of MIRI\nIterated Amplification, written and compiled by Paul Christiano of OpenAI\nValue Learning, written and compiled by Rohin Shah of CHAI\n\n\nOver the next few weeks, we will be releasing about one post per day from these sequences, starting with the first post in the Embedded Agency sequence today.\nIf you are interested in learning about AI alignment, I encourage you to ask questions and discuss the content in the comment sections. 
And if you are already familiar with a lot of the core ideas, then we would greatly appreciate feedback on the sequences as we publish them. We hope that these sequences can be a major part of how new people get involved in AI alignment research, and so we care a lot about their quality and clarity. \n2. Easier ways to join the discussion\nMost scientific fields have to balance the need for high-context discussion with other specialists against the need for public discussion, which allows the broader dissemination of new ideas, the onboarding of new members, and the opportunity for new potential researchers to prove themselves. We tried to design a system that still allows newcomers to participate and learn, while giving established researchers the space to have high-level discussions with other researchers.\nTo do that, we integrated the new AI Alignment Forum closely with the existing LessWrong platform, as follows: \n\n\nAny new post or comment on the new AI Alignment Forum is automatically cross-posted to LessWrong.com. Accounts are also shared between the two platforms.\nAny comment or post on LessWrong can be promoted by members of the Alignment Forum from LessWrong to the AI Alignment Forum. \nThe reputation systems for LessWrong and the AI Alignment Forum are separate, and for every user, post, and comment, you can see two reputation scores on LessWrong.com: a primary karma score combining karma from both sites, and a secondary karma score specific to AI Alignment Forum members.\nAny member whose content gets promoted on a frequent basis, and who garners a significant amount of karma from AI Alignment Forum members, will be automatically recommended to the AI Alignment Forum moderators as a candidate addition to the Alignment Forum.\n\n\nWe hope that this will result in a system in which cutting-edge research and discussion can happen, while new good ideas and participants can get noticed and rewarded for their contributions.\nIf you've been interested in doing alignment research, then I think the best way to get started right now is to comment on AI Alignment Forum posts on LessWrong, and check out the new content we'll be rolling out. \n\nIn an effort to centralize the existing discussion on technical AI alignment, this new forum is also going to replace the Intelligent Agent Foundations Forum, which MIRI built and maintained for the past two years. We are planning to shut down IAFF over the coming weeks, and have collaborated with MIRI to import all the content from the forum, as well as to ensure that all old URLs are properly forwarded to their respective addresses on the new site. If you contributed there, you should have received an email about the details of importing your content. 
(If you didn't, send us a message in the Intercom chat at the bottom right corner at AlignmentForum.org.)\nThanks to MIRI for helping us build this project, and I am looking forward to seeing a lot of you participate in discussion of the AI alignment problem on LessWrong and the new forum.\n\nThe post Announcing the new AI Alignment Forum appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Announcing the new AI Alignment Forum", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=15", "id": "9cb8065481368e8d1175f9e33767cd1e"} {"text": "Embedded Agents\n\n\n \n\nSuppose you want to build a robot to achieve some real-world goal for you—a goal that requires the robot to learn for itself and figure out a lot of things that you don't already know.1\nThere's a complicated engineering problem here. But there's also a problem of figuring out what it even means to build a learning agent like that. What is it to optimize realistic goals in physical environments? In broad terms, how does it work?\nIn this series of posts, I'll point to four ways we don't currently know how it works, and four areas of active research aimed at figuring it out.\n \n \nThis is Alexei, and Alexei is playing a video game.\n \n\n \nLike most games, this game has clear input and output channels. Alexei only observes the game through the computer screen, and only manipulates the game through the controller.\nThe game can be thought of as a function which takes in a sequence of button presses and outputs a sequence of pixels on the screen.\nAlexei is also very smart, and capable of holding the entire video game inside his mind. If Alexei has any uncertainty, it is only over empirical facts like what game he is playing, and not over logical facts like which inputs (for a given deterministic game) will yield which outputs. This means that Alexei must also store inside his mind every possible game he could be playing.\nAlexei does not, however, have to think about himself. He is only optimizing the game he is playing, and not optimizing the brain he is using to think about the game. He may still choose actions based off of value of information, but this is only to help him rule out possible games he is playing, and not to change the way in which he thinks.\nIn fact, Alexei can treat himself as an unchanging indivisible atom. Since he doesn't exist in the environment he's thinking about, Alexei doesn't worry about whether he'll change over time, or about any subroutines he might have to run.\nNotice that all the properties I talked about are partially made possible by the fact that Alexei is cleanly separated from the environment that he is optimizing.\n\n \nThis is Emmy. Emmy is playing real life.\n \n\n \nReal life is not like a video game. The differences largely come from the fact that Emmy is within the environment that she is trying to optimize.\nAlexei sees the universe as a function, and he optimizes by choosing inputs to that function that lead to greater reward than any of the other possible inputs he might choose. Emmy, on the other hand, doesn't have a function. She just has an environment, and this environment contains her.\nEmmy wants to choose the best possible action, but which action Emmy chooses to take is just another fact about the environment. 
Emmy can reason about the part of the environment that is her decision, but since there's only one action that Emmy ends up actually taking, it's not clear what it even means for Emmy to \"choose\" an action that is better than the rest.\nAlexei can poke the universe and see what happens. Emmy is the universe poking itself. In Emmy's case, how do we formalize the idea of \"choosing\" at all?\nTo make matters worse, since Emmy is contained within the environment, Emmy must also be smaller than the environment. This means that Emmy is incapable of storing accurate detailed models of the environment within her mind.\nThis causes a problem: Bayesian reasoning works by starting with a large collection of possible environments, and as you observe facts that are inconsistent with some of those environments, you rule them out. What does reasoning look like when you're not even capable of storing a single valid hypothesis for the way the world works? Emmy is going to have to use a different type of reasoning, and make updates that don't fit into the standard Bayesian framework.\nSince Emmy is within the environment that she is manipulating, she is also going to be capable of self-improvement. But how can Emmy be sure that as she learns more and finds more and more ways to improve herself, she only changes herself in ways that are actually helpful? How can she be sure that she won't modify her original goals in undesirable ways?\nFinally, since Emmy is contained within the environment, she can't treat herself like an atom. She is made out of the same pieces that the rest of the environment is made out of, which is what causes her to be able to think about herself.\nIn addition to hazards in her external environment, Emmy is going to have to worry about threats coming from within. While optimizing, Emmy might spin up other optimizers as subroutines, either intentionally or unintentionally. These subsystems can cause problems if they get too powerful and are unaligned with Emmy's goals. Emmy must figure out how to reason without spinning up intelligent subsystems, or otherwise figure out how to keep them weak, contained, or aligned fully with her goals.\n \n \nEmmy is confusing, so let's go back to Alexei. Marcus Hutter's AIXI framework gives a good theoretical model for how agents like Alexei work:\n \n$$\na_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \ldots \max_{a_m} \sum_{o_m r_m} [r_k + \ldots + r_m] \sum_{q \,:\, U(q, a_1 .. a_m) = o_1 r_1 .. o_m r_m} 2^{-\ell(q)}\n$$\n \nThe model has an agent and an environment that interact using actions, observations, and rewards. The agent sends out an action \(a\), and then the environment sends out both an observation \(o\) and a reward \(r\). This process repeats at each time \(k…m\).\nEach action is a function of all the previous action-observation-reward triples. And each observation and reward is similarly a function of these triples and the immediately preceding action.\nYou can imagine an agent in this framework that has full knowledge of the environment that it's interacting with. However, AIXI is used to model optimization under uncertainty about the environment. AIXI has a distribution over all possible computable environments \(q\), and chooses actions that lead to a high expected reward under this distribution. 
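A drastically simplified, finite toy version of this action rule might look like the sketch below. This is only my illustration: real AIXI ranges over all computable environments, conditions on the observed history, and is uncomputable; every environment, description length, and number here is made up.

```python
from itertools import product

# Toy finite stand-in for the AIXI action rule above: a handful of candidate
# deterministic environments q, weighted by 2^(-length), and a brute-force
# search over short action sequences for the best weighted total reward.

HORIZON = 2
ACTIONS = ["a0", "a1"]

def env_q1(action_sequence):  # rewards "a1" on every step
    return [1 if a == "a1" else 0 for a in action_sequence]

def env_q2(action_sequence):  # rewards alternating "a0", "a1", "a0", ...
    return [1 if (i % 2 == 0 and a == "a0") or (i % 2 == 1 and a == "a1") else 0
            for i, a in enumerate(action_sequence)]

# (environment program, description length ℓ(q)) — both invented for illustration
ENVIRONMENTS = [(env_q1, 3), (env_q2, 5)]

def choose_action():
    best_sequence, best_value = None, float("-inf")
    for seq in product(ACTIONS, repeat=HORIZON):
        # Total reward under each environment, weighted by 2^(-ℓ(q)).
        value = sum(2 ** (-length) * sum(env(seq))
                    for env, length in ENVIRONMENTS)
        if value > best_value:
            best_sequence, best_value = seq, value
    return best_sequence[0]  # act on the first step of the best plan

print(choose_action())
```

The weighting by \(2^{-\ell(q)}\) is what makes shorter environment programs count for more, mirroring the sum in the formula above.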
Since it also cares about future reward, this may lead to exploring for value of information.\nUnder some assumptions, we can show that AIXI does reasonably well in all computable environments, in spite of its uncertainty. However, while the environments that AIXI is interacting with are computable, AIXI itself is uncomputable. The agent is made out of a different sort of stuff, a more powerful sort of stuff, than the environment.\nWe will call agents like AIXI and Alexei \"dualistic.\" They exist outside of their environment, with only set interactions between agent-stuff and environment-stuff. They require the agent to be larger than the environment, and don't tend to model self-referential reasoning, because the agent is made of different stuff than what the agent reasons about.\nAIXI is not alone. These dualistic assumptions show up all over our current best theories of rational agency.\nI set up AIXI as a bit of a foil, but AIXI can also be used as inspiration. When I look at AIXI, I feel like I really understand how Alexei works. This is the kind of understanding that I want to also have for Emmy.\nUnfortunately, Emmy is confusing. When I talk about wanting to have a theory of \"embedded agency,\" I mean I want to be able to understand theoretically how agents like Emmy work. That is, agents that are embedded within their environment and thus:\n\ndo not have well-defined i/o channels;\nare smaller than their environment;\nare able to reason about themselves and self-improve;\nand are made of parts similar to the environment.\n\nYou shouldn't think of these four complications as a partition. They are very entangled with each other.\nFor example, the reason the agent is able to self-improve is because it is made of parts. And any time the environment is sufficiently larger than the agent, it might contain other copies of the agent, and thus destroy any well-defined i/o channels.\n\n \n\n \n\nHowever, I will use these four complications to inspire a split of the topic of embedded agency into four subproblems. These are: decision theory, embedded world-models, robust delegation, and subsystem alignment.\n \n \nDecision theory is all about embedded optimization.\nThe simplest model of dualistic optimization is \\(\\mathrm{argmax}\\). \\(\\mathrm{argmax}\\) takes in a function from actions to rewards, and returns the action which leads to the highest reward under this function. Most optimization can be thought of as some variant on this. You have some space; you have a function from this space to some score, like a reward or utility; and you want to choose an input that scores highly under this function.\nBut we just said that a large part of what it means to be an embedded agent is that you don't have a functional environment. So now what do we do? 
Optimization is clearly an important part of agency, but we can't currently say what it is even in theory without making major type errors.\nSome major open problems in decision theory include:\n\nlogical counterfactuals: how do you reason about what would happen if you take action B, given that you can prove that you will instead take action A?\nenvironments that include multiple copies of the agent, or trustworthy predictions of the agent.\nlogical updatelessness, which is about how to combine the very nice but very Bayesian world of Wei Dai's updateless decision theory, with the much less Bayesian world of logical uncertainty.\n\n \n \nEmbedded world-models is about how you can make good models of the world that are able to fit within an agent that is much smaller than the world.\nThis has proven to be very difficult—first, because it means that the true universe is not in your hypothesis space, which ruins a lot of theoretical guarantees; and second, because it means we're going to have to make non-Bayesian updates as we learn, which also ruins a bunch of theoretical guarantees.\nIt is also about how to make world-models from the point of view of an observer on the inside, and resulting problems such as anthropics. Some major open problems in embedded world-models include:\n\nlogical uncertainty, which is about how to combine the world of logic with the world of probability.\nmulti-level modeling, which is about how to have multiple models of the same world at different levels of description, and transition nicely between them.\nontological crises, which is what to do when you realize that your model, or even your goal, was specified using a different ontology than the real world.\n\n \n \nRobust delegation is all about a special type of principal-agent problem. You have an initial agent that wants to make a more intelligent successor agent to help it optimize its goals. The initial agent has all of the power, because it gets to decide exactly what successor agent to make. But in another sense, the successor agent has all of the power, because it is much, much more intelligent.\nFrom the point of view of the initial agent, the question is about creating a successor that will robustly not use its intelligence against you. 
From the point of view of the successor agent, the question is about, \"How do you robustly learn or respect the goals of something that is stupid, manipulable, and not even using the right ontology?\"\nThere are extra problems coming from the Löbian obstacle making it impossible to consistently trust things that are more powerful than you.\nYou can think about these problems in the context of an agent that's just learning over time, or in the context of an agent making a significant self-improvement, or in the context of an agent that's just trying to make a powerful tool.\nThe major open problems in robust delegation include:\n\nVingean reflection, which is about how to reason about and trust agents that are much smarter than you, in spite of the Löbian obstacle to trust.\nvalue learning, which is how the successor agent can learn the goals of the initial agent in spite of that agent's stupidity and inconsistencies.\ncorrigibility, which is about how an initial agent can get a successor agent to allow (or even help with) modifications, in spite of an instrumental incentive not to.\n\n \n \nSubsystem alignment is about how to be one unified agent that doesn't have subsystems that are fighting against either you or each other.\nWhen an agent has a goal, like \"saving the world,\" it might end up spending a large amount of its time thinking about a subgoal, like \"making money.\" If the agent spins up a sub-agent that is only trying to make money, there are now two agents that have different goals, and this leads to a conflict. The sub-agent might suggest plans that look like they only make money, but actually destroy the world in order to make even more money.\nThe problem is: you don't just have to worry about sub-agents that you intentionally spin up. You also have to worry about spinning up sub-agents by accident. Any time you perform a search or an optimization over a sufficiently rich space that's able to contain agents, you have to worry about the space itself doing optimization. This optimization may not be exactly in line with the optimization the outer system was trying to do, but it will have an instrumental incentive to look like it's aligned.\nA lot of optimization in practice uses this kind of passing the buck. You don't just find a solution; you find a thing that is able to itself search for a solution.\nIn theory, I don't understand how to do optimization at all—other than methods that look like finding a bunch of stuff that I don't understand, and seeing if it accomplishes my goal. But this is exactly the kind of thing that's most prone to spinning up adversarial subsystems.\nThe big open problem in subsystem alignment is about how to have a base-level optimizer that doesn't spin up adversarial optimizers. You can break this problem up further by considering cases where the resultant optimizers are either intentional or unintentional, and considering restricted subclasses of optimization, like induction.\nBut remember: decision theory, embedded world-models, robust delegation, and subsystem alignment are not four separate problems. 
They're all different subproblems of the same unified concept that is embedded agency.\n\nPart 2 of this post will be coming in the next couple of days: Decision Theory.\n\nThis is part 1 of the Embedded Agency series, by Abram Demski and Scott Garrabrant.The post Embedded Agents appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Embedded Agents", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=15", "id": "8ead5adaab99f7b6f20d7b3c9ba0531e"} {"text": "The Rocket Alignment Problem\n\nThe following is a fictional dialogue building off of AI Alignment: Why It's Hard, and Where to Start.\n\n\n \n(Somewhere in a not-very-near neighboring world, where science took a very different course…)\n \nALFONSO:  Hello, Beth. I've noticed a lot of speculations lately about \"spaceplanes\" being used to attack cities, or possibly becoming infused with malevolent spirits that inhabit the celestial realms so that they turn on their own engineers.\nI'm rather skeptical of these speculations. Indeed, I'm a bit skeptical that airplanes will be able to even rise as high as stratospheric weather balloons anytime in the next century. But I understand that your institute wants to address the potential problem of malevolent or dangerous spaceplanes, and that you think this is an important present-day cause.\nBETH:  That's… really not how we at the Mathematics of Intentional Rocketry Institute would phrase things.\nThe problem of malevolent celestial spirits is what all the news articles are focusing on, but we think the real problem is something entirely different. We're worried that there's a difficult, theoretically challenging problem which modern-day rocket punditry is mostly overlooking. We're worried that if you aim a rocket at where the Moon is in the sky, and press the launch button, the rocket may not actually end up at the Moon.\nALFONSO:  I understand that it's very important to design fins that can stabilize a spaceplane's flight in heavy winds. That's important spaceplane safety research and someone needs to do it.\nBut if you were working on that sort of safety research, I'd expect you to be collaborating tightly with modern airplane engineers to test out your fin designs, to demonstrate that they are actually useful.\nBETH:  Aerodynamic designs are important features of any safe rocket, and we're quite glad that rocket scientists are working on these problems and taking safety seriously. That's not the sort of problem that we at MIRI focus on, though.\nALFONSO:  What's the concern, then? Do you fear that spaceplanes may be developed by ill-intentioned people?\nBETH:  That's not the failure mode we're worried about right now. We're more worried that right now, nobody can tell you how to point your rocket's nose such that it goes to the moon, nor indeed any prespecified celestial destination. Whether Google or the US Government or North Korea is the one to launch the rocket won't make a pragmatic difference to the probability of a successful Moon landing from our perspective, because right now nobody knows how to aim any kind of rocket anywhere.\n\nALFONSO:  I'm not sure I understand.\nBETH:  We're worried that even if you aim a rocket at the Moon, such that the nose of the rocket is clearly lined up with the Moon in the sky, the rocket won't go to the Moon. 
We're not sure what a realistic path from the Earth to the moon looks like, but we suspect it might not be a very straight path, and it may not involve pointing the nose of the rocket at the moon at all. We think the most important thing to do next is to advance our understanding of rocket trajectories until we have a better, deeper understanding of what we've started calling the \"rocket alignment problem\". There are other safety problems, but this rocket alignment problem will probably take the most total time to work on, so it's the most urgent.\nALFONSO:  Hmm, that sounds like a bold claim to me. Do you have a reason to think that there are invisible barriers between here and the moon that the spaceplane might hit? Are you saying that it might get very very windy between here and the moon, more so than on Earth? Both eventualities could be worth preparing for, I suppose, but neither seem likely.\nBETH:  We don't think it's particularly likely that there are invisible barriers, no. And we don't think it's going to be especially windy in the celestial reaches — quite the opposite, in fact. The problem is just that we don't yet know how to plot any trajectory that a vehicle could realistically take to get from Earth to the moon.\nALFONSO:  Of course we can't plot an actual trajectory; wind and weather are too unpredictable. But your claim still seems too strong to me. Just aim the spaceplane at the moon, go up, and have the pilot adjust as necessary. Why wouldn't that work? Can you prove that a spaceplane aimed at the moon won't go there?\nBETH:  We don't think we can prove anything of that sort, no. Part of the problem is that realistic calculations are extremely hard to do in this area, after you take into account all the atmospheric friction and the movements of other celestial bodies and such. We've been trying to solve some drastically simplified problems in this area, on the order of assuming that there is no atmosphere and that all rockets move in perfectly straight lines. Even those unrealistic calculations strongly suggest that, in the much more complicated real world, just pointing your rocket's nose at the Moon also won't make your rocket end up at the Moon. I mean, the fact that the real world is more complicated doesn't exactly make it any easier to get to the Moon.\nALFONSO:  Okay, let me take a look at this \"understanding\" work you say you're doing…\nHuh. Based on what I've read about the math you're trying to do, I can't say I understand what it has to do with the Moon. Shouldn't helping spaceplane pilots exactly target the Moon involve looking through lunar telescopes and studying exactly what the Moon looks like, so that the spaceplane pilots can identify particular features of the landscape to land on?\nBETH:  We think our present stage of understanding is much too crude for a detailed Moon map to be our next research target. We haven't yet advanced to the point of targeting one crater or another for our landing. We can't target anything at this point. It's more along the lines of \"figure out how to talk mathematically about curved rocket trajectories, instead of rockets that move in straight lines\". Not even realistically curved trajectories, right now, we're just trying to get past straight lines at all –\nALFONSO:  But planes on Earth move in curved lines all the time, because the Earth itself is curved. It seems reasonable to expect that future spaceplanes will also have the capability to move in curved lines. 
If your worry is that spaceplanes will only move in straight lines and miss the Moon, and you want to advise rocket engineers to build rockets that move in curved lines, well, that doesn't seem to me like a great use of anyone's time.\nBETH:  You're trying to draw much too direct of a line between the math we're working on right now, and actual rocket designs that might exist in the future. It's not that current rocket ideas are almost right, and we just need to solve one or two more problems to make them work. The conceptual distance that separates anyone from solving the rocket alignment problem is much greater than that.\nRight now everyone is confused about rocket trajectories, and we're trying to become less confused. That's what we need to do next, not run out and advise rocket engineers to build their rockets the way that our current math papers are talking about. Not until we stop being confused about extremely basic questions like why the Earth doesn't fall into the Sun.\nALFONSO:  I don't think the Earth is going to collide with the Sun anytime soon. The Sun has been steadily circling the Earth for a long time now.\nBETH:  I'm not saying that our goal is to address the risk of the Earth falling into the Sun. What I'm trying to say is that if humanity's present knowledge can't answer questions like \"Why doesn't the Earth fall into the Sun?\" then we don't know very much about celestial mechanics and we won't be able to aim a rocket through the celestial reaches in a way that lands softly on the Moon.\nAs an example of work we're presently doing that's aimed at improving our understanding, there's what we call the \"tiling positions\" problem. The tiling positions problem is how to fire a cannonball from a cannon in such a way that the cannonball circumnavigates the earth over and over again, \"tiling\" its initial coordinates like repeating tiles on a tessellated floor –\nALFONSO:  I read a little bit about your work on that topic. I have to say, it's hard for me to see what firing things from cannons has to do with getting to the Moon. Frankly, it sounds an awful lot like Good Old-Fashioned Space Travel, which everyone knows doesn't work. Maybe Jules Verne thought it was possible to travel around the earth by firing capsules out of cannons, but the modern study of high-altitude planes has completely abandoned the notion of firing things out of cannons. The fact that you go around talking about firing things out of cannons suggests to me that you haven't kept up with all the innovations in airplane design over the last century, and that your spaceplane designs will be completely unrealistic.\nBETH:  We know that rockets will not actually be fired out of cannons. We really, really know that. We're intimately familiar with the reasons why nothing fired out of a modern cannon is ever going to reach escape velocity. I've previously written several sequences of articles in which I describe why cannon-based space travel doesn't work.\nALFONSO:  But your current work is all about firing something out a cannon in such a way that it circles the earth over and over. 
What could that have to do with any realistic advice that you could give to a spaceplane pilot about how to travel to the Moon?\nBETH:  Again, you're trying to draw much too straight a line between the math we're doing right now, and direct advice to future rocket engineers.\nWe think that if we could find an angle and firing speed such that an ideal cannon, firing an ideal cannonball at that speed, on a perfectly spherical Earth with no atmosphere, would lead to that cannonball entering what we would call a \"stable orbit\" without hitting the ground, then… we might have understood something really fundamental and important about celestial mechanics.\nOr maybe not! It's hard to know in advance which questions are important and which research avenues will pan out. All you can do is figure out the next tractable-looking problem that confuses you, and try to come up with a solution, and hope that you'll be less confused after that.\nALFONSO:  You're talking about the cannonball hitting the ground as a problem, and how you want to avoid that and just have the cannonball keep going forever, right? But real spaceplanes aren't going to be aimed at the ground in the first place, and lots of regular airplanes manage to not hit the ground. It seems to me that this \"being fired out of a cannon and hitting the ground\" scenario that you're trying to avoid in this \"tiling positions problem\" of yours just isn't a failure mode that real spaceplane designers would need to worry about.\nBETH:  We are not worried about real rockets being fired out of cannons and hitting the ground. That is not why we're working on the tiling positions problem. In a way, you're being far too optimistic about how much of rocket alignment theory is already solved! We're not so close to understanding how to aim rockets that the kind of designs people are talking about now would work if only we solved a particular set of remaining difficulties like not firing the rocket into the ground. You need to go more meta on understanding the kind of progress we're trying to make.\nWe're working on the tiling positions problem because we think that being able to fire a cannonball at a certain instantaneous velocity such that it enters a stable orbit… is the sort of problem that somebody who could really actually launch a rocket through space and have it move in a particular curve that really actually ended with softly landing on the Moon would be able to solve easily. So the fact that we can't solve it is alarming. If we can figure out how to solve this much simpler, much more crisply stated \"tiling positions problem\" with imaginary cannonballs on a perfectly spherical earth with no atmosphere, which is a lot easier to analyze than a Moon launch, we might thereby take one more incremental step towards eventually becoming the sort of people who could plot out a Moon launch.\nALFONSO:  If you don't think that Jules-Verne-style space cannons are the wave of the future, I don't understand why you keep talking about cannons in particular.\nBETH:  Because there's a lot of sophisticated mathematical machinery already developed for aiming cannons. People have been aiming cannons and plotting cannonball trajectories since the sixteenth century. We can take advantage of that existing mathematics to say exactly how, if we fired an ideal cannonball in a certain direction, it would plow into the ground. 
If we tried talking about rockets with realistically varying acceleration, we can't even manage to prove that a rocket like that won't travel around the Earth in a perfect square, because with all that realistically varying acceleration and realistic air friction it's impossible to make any sort of definite statement one way or another. Our present understanding isn't up to it.\nALFONSO:  Okay, another question in the same vein. Why is MIRI sponsoring work on adding up lots of tiny vectors? I don't even see what that has to do with rockets in the first place; it seems like this weird side problem in abstract math.\nBETH:  It's more like… at several points in our investigation so far, we've run into the problem of going from a function about time-varying accelerations to a function about time-varying positions. We kept running into this problem as a blocking point in our math, in several places, so we branched off and started trying to analyze it explicitly. Since it's about the pure mathematics of points that don't move in discrete intervals, we call it the \"logical undiscreteness\" problem. Some of the ways of investigating this problem involve trying to add up lots of tiny, varying vectors to get a big vector. Then we talk about how that sum seems to change more and more slowly, approaching a limit, as the vectors get tinier and tinier and we add up more and more of them… or at least that's one avenue of approach.\nALFONSO:  I just find it hard to imagine people in future spaceplane rockets staring out their viewports and going, \"Oh, no, we don't have tiny enough vectors with which to correct our course! If only there was some way of adding up even more vectors that are even smaller!\" I'd expect future calculating machines to do a pretty good job of that already.\nBETH:  Again, you're trying to draw much too straight a line between the work we're doing now, and the implications for future rocket designs. It's not like we think a rocket design will almost work, but the pilot won't be able to add up lots of tiny vectors fast enough, so we just need a faster algorithm and then the rocket will get to the Moon. This is foundational mathematical work that we think might play a role in multiple basic concepts for understanding celestial trajectories. When we try to plot out a trajectory that goes all the way to a soft landing on a moving Moon, we feel confused and blocked. We think part of the confusion comes from not being able to go from acceleration functions to position functions, so we're trying to resolve our confusion.\nALFONSO:  This sounds suspiciously like a philosophy-of-mathematics problem, and I don't think that it's possible to progress on spaceplane design by doing philosophical research. The field of philosophy is a stagnant quagmire. Some philosophers still believe that going to the moon is impossible; they say that the celestial plane is fundamentally separate from the earthly plane and therefore inaccessible, which is clearly silly. Spaceplane design is an engineering problem, and progress will be made by engineers.\nBETH:  I agree that rocket design will be carried out by engineers rather than philosophers. I also share some of your frustration with philosophy in general. 
For that reason, we stick to well-defined mathematical questions that are likely to have actual answers, such as questions about how to fire a cannonball on a perfectly spherical planet with no atmosphere such that it winds up in a stable orbit.\nThis often requires developing new mathematical frameworks. For example, in the case of the logical undiscreteness problem, we're developing methods for translating between time-varying accelerations and time-varying positions. You can call the development of new mathematical frameworks \"philosophical\" if you'd like — but if you do, remember that it's a very different kind of philosophy than the \"speculate about the heavenly and earthly planes\" sort, and that we're always pushing to develop new mathematical frameworks or tools.\nALFONSO:  So from the perspective of the public good, what's a good thing that might happen if you solved this logical undiscreteness problem?\nBETH:  Mainly, we'd be less confused and our research wouldn't be blocked and humanity could actually land on the Moon someday. To try and make it more concrete – though it's hard to do that without actually knowing the concrete solution – we might be able to talk about incrementally more realistic rocket trajectories, because our mathematics would no longer break down as soon as we stopped assuming that rockets moved in straight lines. Our math would be able to talk about exact curves, instead of a series of straight lines that approximate the curve.\nALFONSO:  An exact curve that a rocket follows? This gets me into the main problem I have with your project in general. I just don't believe that any future rocket design will be the sort of thing that can be analyzed with absolute, perfect precision so that you can get the rocket to the Moon based on an absolutely plotted trajectory with no need to steer. That seems to me like a bunch of mathematicians who have no clue how things work in the real world, wanting everything to be perfectly calculated. Look at the way Venus moves in the sky; usually it travels in one direction, but sometimes it goes retrograde in the other direction. We'll just have to steer as we go.\nBETH:  That's not what I meant by talking about exact curves… Look, even if we can invent logical undiscreteness, I agree that it's futile to try to predict, in advance, the precise trajectories of all of the winds that will strike a rocket on its way off the ground. Though I'll mention parenthetically that things might actually become calmer and easier to predict, once a rocket gets sufficiently high up –\nALFONSO:  Why?\nBETH:  Let's just leave that aside for now, since we both agree that rocket positions are hard to predict exactly during the atmospheric part of the trajectory, due to winds and such. And yes, if you can't exactly predict the initial trajectory, you can't exactly predict the later trajectory. So, indeed, the proposal is definitely not to have a rocket design so perfect that you can fire it at exactly the right angle and then walk away without the pilot doing any further steering. The point of doing rocket math isn't that you want to predict the rocket's exact position at every microsecond, in advance.\nALFONSO:  Then why obsess over pure math that's too simple to describe the rich, complicated real universe where sometimes it rains?\nBETH:  It's true that a real rocket isn't a simple equation on a board. 
It's true that there are all sorts of aspects of a real rocket's shape and internal plumbing that aren't going to have a mathematically compact characterization. What MIRI is doing isn't the right degree of mathematization for all rocket engineers for all time; it's the mathematics for us to be using right now (or so we hope).\nTo build up the field's understanding incrementally, we need to talk about ideas whose consequences can be pinpointed precisely enough that people can analyze scenarios in a shared framework. We need enough precision that someone can say, \"I think in scenario X, design Y does Z\", and someone else can say, \"No, in scenario X, Y actually does W\", and the first person responds, \"Darn, you're right. Well, is there some way to change Y so that it would do Z?\"\nIf you try to make things realistically complicated at this stage of research, all you're left with is verbal fantasies. When we try to talk to someone with an enormous flowchart of all the gears and steering rudders they think should go into a rocket design, and we try to explain why a rocket pointed at the Moon doesn't necessarily end up at the Moon, they just reply, \"Oh, my rocket won't do that.\" Their ideas have enough vagueness and flex and underspecification that they've achieved the safety of nobody being able to prove to them that they're wrong. It's impossible to incrementally build up a body of collective knowledge that way.\nThe goal is to start building up a library of tools and ideas we can use to discuss trajectories formally. Some of the key tools for formalizing and analyzing intuitively plausible-seeming trajectories haven't yet been expressed using math, and we can live with that for now. We still try to find ways to represent the key ideas in mathematically crisp ways whenever we can. That's not because math is so neat or so prestigious; it's part of an ongoing project to have arguments about rocketry that go beyond \"Does not!\" vs. \"Does so!\"\nALFONSO:  I still get the impression that you're reaching for the warm, comforting blanket of mathematical reassurance in a realm where mathematical reassurance doesn't apply. We can't obtain a mathematical certainty of our spaceplanes being absolutely sure to reach the Moon with nothing going wrong. That being the case, there's no point in trying to pretend that we can use mathematics to get absolute guarantees about spaceplanes.\nBETH:  Trust me, I am not going to feel \"reassured\" about rocketry no matter what math MIRI comes up with. But, yes, of course you can't obtain a mathematical assurance of any physical proposition, nor assign probability 1 to any empirical statement.\nALFONSO:  Yet you talk about proving theorems – proving that a cannonball will go in circles around the earth indefinitely, for example.\nBETH:  Proving a theorem about a rocket's trajectory won't ever let us feel comfortingly certain about where the rocket is actually going to end up. But if you can prove a theorem which says that your rocket would go to the Moon if it launched in a perfect vacuum, maybe you can attach some steering jets to the rocket and then have it actually go to the Moon in real life. Not with 100% probability, but with probability greater than zero.\nThe point of our work isn't to take current ideas about rocket aiming from a 99% probability of success to a 100% chance of success. It's to get past an approximately 0% chance of success, which is where we are now.\nALFONSO:  Zero percent?!\nBETH:  Modulo Cromwell's Rule, yes, zero percent. 
If you point a rocket's nose at the Moon and launch it, it does not go to the Moon.\nALFONSO:  I don't think future spaceplane engineers will actually be that silly, if direct Moon-aiming isn't a method that works. They'll lead the Moon's current motion in the sky, and aim at the part of the sky where the Moon will appear on the day the spaceplane is a Moon's distance away. I'm a bit worried that you've been talking about this problem so long without considering such an obvious idea.\nBETH:  We considered that idea very early on, and we're pretty sure that it still doesn't get us to the Moon.\nALFONSO:  What if I add steering fins so that the rocket moves in a more curved trajectory? Can you prove that no version of that class of rocket designs will go to the Moon, no matter what I try?\nBETH:  Can you sketch out the trajectory that you think your rocket will follow?\nALFONSO:  It goes from the Earth to the Moon.\nBETH:  In a bit more detail, maybe?\nALFONSO:  No, because in the real world there are always variable wind speeds, we don't have infinite fuel, and our spaceplanes don't move in perfectly straight lines.\nBETH:  Can you sketch out a trajectory that you think a simplified version of your rocket will follow, so we can examine the assumptions your idea requires?\nALFONSO:  I just don't believe in the general methodology you're proposing for spaceplane designs. We'll put on some steering fins, turn the wheel as we go, and keep the Moon in our viewports. If we're off course, we'll steer back.\nBETH:  … We're actually a bit concerned that standard steering fins may stop working once the rocket gets high enough, so you won't actually find yourself able to correct course by much once you're in the celestial reaches – like, if you're already on a good course, you can correct it, but if you screwed up, you won't just be able to turn around like you could turn around an airplane –\nALFONSO:  Why not?\nBETH:  We can go into that topic too; but even given a simplified model of a rocket that you could steer, a walkthrough of the steps along the path that simplified rocket would take to the Moon would be an important step in moving this discussion forward. Celestial rocketry is a domain that we expect to be unusually difficult – even compared to building rockets on Earth, which is already a famously hard problem because they usually just explode. It's not that everything has to be neat and mathematical. But the overall difficulty is such that, in a proposal like \"lead the Moon in the sky,\" if the core ideas don't have a certain amount of solidity about them, it would be equivalent to firing your rocket randomly into the void.\nIf it feels like you don't know for sure whether your idea works, but that it might work; if your idea has many plausible-sounding elements, and to you it feels like nobody has been able to convincingly explain to you how it would fail; then, in real life, that proposal has a roughly 0% chance of steering a rocket to the Moon.\nIf it seems like an idea is extremely solid and clearly well-understood, if it feels like this proposal should definitely take a rocket to the Moon without fail in good conditions, then maybe under the best-case conditions we should assign an 85% subjective credence in success, or something in that vicinity.\nALFONSO:  So uncertainty automatically means failure?
This is starting to sound a bit paranoid, honestly.\nBETH:  The idea I'm trying to communicate is something along the lines of, \"If you can reason rigorously about why a rocket should definitely work in principle, it might work in real life, but if you have anything less than that, then it definitely won't work in real life.\"\nI'm not asking you to give me an absolute mathematical proof of empirical success. I'm asking you to give me something more like a sketch for how a simplified version of your rocket could move, that's sufficiently determined in its meaning that you can't just come back and say \"Oh, I didn't mean that\" every time someone tries to figure out what it actually does or pinpoint a failure mode.\nThis isn't an unreasonable demand that I'm imposing to make it impossible for any ideas to pass my filters. It's the primary bar all of us have to pass to contribute to collective progress in this field. And a rocket design which can't even pass that conceptual bar has roughly a 0% chance of landing softly on the Moon.\n\n \nThe post The Rocket Alignment Problem appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "The Rocket Alignment Problem", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=15", "id": "e39f6983d7249f9268f9e9fd54e40ca4"} {"text": "September 2018 Newsletter\n\n\nSummer MIRI Updates: Buck Shlegeris and Ben Weinstein-Raun have joined the MIRI team! Additionally, we ran a successful internship program over the summer, and we're co-running a new engineer-oriented workshop series with CFAR.\nOn the fundraising side, we received a $489,000 grant from the Long-Term Future Fund, a $150,000 AI Safety Retraining Program grant from the Open Philanthropy Project, and an amazing surprise $1.02 million grant from \"Anonymous Ethereum Investor #2\"!\nOther updates\n\nNew research forum posts: Reducing Collective Rationality to Individual Optimization in Common-Payoff Games Using MCMC; History of the Development of Logical Induction\nWe spoke at the Human-Aligned AI Summer School in Prague.\nMIRI advisor Blake Borgeson has joined our Board of Directors, and DeepMind Research Scientist Victoria Krakovna has become a MIRI research advisor.\n\nNews and links\n\nThe Open Philanthropy Project is accepting a new round of applicants for its AI Fellows Program.\n\n\nThe post September 2018 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "September 2018 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=15", "id": "47326ab490ad0b453c579766d72d72da"} {"text": "Summer MIRI Updates\n\nIn our last major updates—our 2017 strategic update and fundraiser posts—we said that our current focus is on technical research and executing our biggest-ever hiring push. 
Our supporters responded with an incredible show of support at the end of the year, putting us in an excellent position to execute on our most ambitious growth plans.\nIn this post, I'd like to provide some updates on our recruiting efforts and successes, announce some major donations and grants that we've received, and provide some other miscellaneous updates.\nIn brief, our major announcements are:\n\nWe have two new full-time research staff hires to announce.\nWe've received $1.7 million in major donations and grants, $1 million of which came through a tax-advantaged fund for Canadian MIRI supporters.\n\nFor more details, see below.\n\n\n1. Growth\nI'm happy to announce the addition of two new research staff to the MIRI team:\n \n\nBuck Shlegeris: Before joining MIRI, Buck worked as a software engineer at PayPal, and he was the first employee at Triplebyte. He previously studied at the Australian National University, majoring in CS and minoring in math and physics, and he has presented work on data structure synthesis at industry conferences. In addition to his research at MIRI, Buck is also helping with recruiting.\n\n\n \n\n\n\n\n\nBen Weinstein-Raun: Ben joined MIRI after spending two years as a software engineer at Cruise Automation, where he worked on the planning and prediction teams. He previously worked at Counsyl on their automated genomics lab, and helped to found Hacksburg, a hackerspace in Blacksburg, Virginia. He holds a BS from Virginia Tech, where he studied computer engineering.\n\n\n \n\n\nThis year we've run a few different programs to help us work towards our hiring goals, and to more generally increase the number of people doing AI alignment research: \n\n\n\nWe've been co-running a series of invite-only workshops with the Center for Applied Rationality (CFAR), targeted at potential future hires who have strong engineering backgrounds. Participants report really enjoying the workshops, and we've found them very useful for getting to know potential research staff hires.1 If you'd be interested in attending one of these workshops, send Buck an email. \nWe helped run the AI Summer Fellows Program with CFAR. We had a large and extremely strong pool of applicants, with over 170 applications for 30 slots (versus 50 applications for 20 slots in 2017). The program this year was more mathematically flavored than in 2017, and concluded with a flurry of new analyses by participants. On the whole, the program seems to have been more successful at digging into AI alignment problems than in previous years, as well as more successful at seeding ongoing collaborations between participants, and between participants and MIRI staff. \nWe ran a ten-week research internship program this summer, from June through August.2 This included our six interns attending AISFP and pursuing a number of independent lines of research, with a heavy focus on tiling agents. Among other activities, interns looked for Vingean reflection in expected utility maximizers, distilled early research on subsystem alignment, and built on Abram's Complete Class Theorems approach to decision theory. \n\n\n\nIn related news, we've been restructuring and growing our operations team to ensure we're well positioned to support the research team as we grow. Alex Vermeer has taken on a more general support role as our process and projects head. In addition to his donor relationships and fundraising focus, Colm Ó Riain has taken on a central role in our recruiting efforts as our head of growth. 
Aaron Silverbook is now heading operations; we've brought on Carson Jones as our new office manager; and long-time remote MIRI contractor Jimmy Rintjema is now our digital infrastructure lead.3 \n\n\n2. Fundraising\n\n\nOn the fundraising side, I'm happy to announce that we've received several major donations and grants.\n\n\nFirst, following our $1.01 million donation from an anonymous Ethereum investor in 2017, we've received a huge new donation of $1.02 million from \"Anonymous Ethereum Investor #2\", based in Canada! The donation was made through Rethink Charity Forward's recently established tax-advantaged fund for Canadian MIRI supporters.4 \n\n\nSecond, the departing administrator of the Long-Term Future Fund, Nick Beckstead, has recommended a $489,000 grant to MIRI, aimed chiefly at funding improvements to organizational efficiency and staff productivity.\n\n\nTogether, these contributions have helped ensure that we remain in the solid position we were in following our 2017 fundraiser, as we attempt to greatly scale our team size. Our enormous thanks for this incredible support, and further thanks to RC Forward and the Centre for Effective Altruism for helping build the infrastructure that made these contributions possible.\n\n\nWe've also received a $150,000 AI Safety Retraining Program grant from the Open Philanthropy Project to provide stipends and guidance to a few highly technically skilled individuals. The goal of the program is to free up 3-6 months of time for strong candidates to spend on retraining, so that they can potentially transition to full-time work on AI alignment. Buck is currently selecting candidates for the program; to date, we've made two grants to individuals.5 \n\n\n3. Miscellaneous updates\n\n\nThe LessWrong development team has launched a beta for the AI Alignment Forum, a new research forum for technical AI safety work that we've been contributing to. I'm very grateful to the LW team for taking on this project, and I'm really looking forward to the launch of the new forum. \n\n\nFinally, we've made substantial progress on the tiling problem, which we'll likely be detailing later this year. See our March research plans and predictions write-up for more on our research priorities.\n\n\n \n\n\nWe're very happy about these newer developments, and we're particularly excited to have Buck and Ben on the team. We have a few more big announcements coming up in the not-so-distant future, so stay tuned.\n\n\n\n1. Ben was a workshop participant, which eventually led to him coming on board at MIRI.\n2. We also have another research intern joining us in the fall.\n3. We've long considered Jimmy to be full-time staff, but he isn't officially an employee since he lives in Canada.\n4. H/T to Colm for setting up a number of tax-advantaged giving channels for international donors.
If you're a MIRI supporter outside the US, make sure to check out our Tax-Advantaged Donations page.\n5. We aren't taking formal applications, but if you're particularly interested in the program or have questions, you're welcome to shoot Buck an email.\nThe post Summer MIRI Updates appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Summer MIRI Updates", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=16", "id": "b86930393eb5184e0e7387ff007f12bb"} {"text": "August 2018 Newsletter\n\n\nUpdates\n\nNew posts to the new AI Alignment Forum: Buridan's Ass in Coordination Games; Probability is Real, and Value is Complex; Safely and Usefully Spectating on AIs Optimizing Over Toy Worlds\nMIRI Research Associate Vanessa Kosoy wins a $7500 AI Alignment Prize for \"The Learning-Theoretic AI Alignment Research Agenda.\" Applications for the prize's next round will be open through December 31.\nInterns from MIRI and the Center for Human-Compatible AI collaborated at an AI safety research workshop. \nThis year's AI Summer Fellows Program was very successful, and its one-day blogathon resulted in a number of interesting write-ups, such as Dependent Type Theory and Zero-Shot Reasoning, Conceptual Problems with Utility Functions (and follow-up), Complete Class: Consequentialist Foundations, and Agents That Learn From Human Behavior Can't Learn Human Values That Humans Haven't Learned Yet.\nSee Rohin Shah's alignment newsletter for more discussion of recent posts to the new AI Alignment Forum.\n\nNews and links\n\nThe Future of Humanity Institute is seeking project managers for its Research Scholars Programme and its Governance of AI Program.\n\n\nThe post August 2018 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "August 2018 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=16", "id": "df24600fab99ec6cfa029ec044f9e4a4"} {"text": "July 2018 Newsletter\n\n\nUpdates\n\nA new paper: \"Forecasting Using Incomplete Models\"\nNew research write-ups and discussions: Prisoners' Dilemma with Costs to Modeling; Counterfactual Mugging Poker Game; Optimization Amplifies\nEliezer Yudkowsky, Paul Christiano, Jessica Taylor, and Wei Dai discuss Alex Zhu's FAQ for Paul's research agenda.\nWe attended EA Global in SF, and gave a short talk on \"Categorizing Variants of Goodhart's Law.\"\nRoman Yampolskiy's forthcoming anthology, Artificial Intelligence Safety and Security, includes reprinted papers by Nate Soares (\"The Value Learning Problem\") and by Nick Bostrom and Eliezer Yudkowsky (\"The Ethics of Artificial Intelligence\").\nStuart Armstrong's 2014 primer on AI risk, Smarter Than Us: The Rise of Machine Intelligence, is now available as a free web book at smarterthan.us.\n\nNews and links\n\nOpenAI announces that their OpenAI Five system \"has started to defeat amateur human teams at Dota 2\" (plus an update).
Discussion on LessWrong and Hacker News.\nRohin Shah, a PhD student at the Center for Human-Compatible AI, comments on recent alignment-related results in his regularly updated Alignment Newsletter.\n\n\nThe post July 2018 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "July 2018 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=16", "id": "bde76ba36cad1353ac0ae493a38f3703"} {"text": "New paper: \"Forecasting using incomplete models\"\n\nMIRI Research Associate Vanessa Kosoy has a paper out on issues in naturalized induction: \"Forecasting using incomplete models\". Abstract:\nWe consider the task of forecasting an infinite sequence of future observations based on some number of past observations, where the probability measure generating the observations is \"suspected\" to satisfy one or more of a set of incomplete models, i.e., convex sets in the space of probability measures.\nThis setting is in some sense intermediate between the realizable setting where the probability measure comes from some known set of probability measures (which can be addressed using e.g. Bayesian inference) and the unrealizable setting where the probability measure is completely arbitrary.\nWe demonstrate a method of forecasting which guarantees that, whenever the true probability measure satisfies an incomplete model in a given countable set, the forecast converges to the same incomplete model in the (appropriately normalized) Kantorovich-Rubinstein metric. This is analogous to merging of opinions for Bayesian inference, except that convergence in the Kantorovich-Rubinstein metric is weaker than convergence in total variation.\nKosoy's work builds on logical inductors to create a cleaner (purely learning-theoretic) formalism for modeling complex environments, showing that the methods developed in \"Logical induction\" are useful for applications in classical sequence prediction unrelated to logic.\n\"Forecasting using incomplete models\" also shows that the intuitive concept of an \"incomplete\" or \"partial\" model has an elegant and useful formalization related to Knightian uncertainty. 
Additionally, Kosoy shows that using incomplete models to generalize Bayesian inference allows an agent to make predictions about environments that can be as complex as the agent itself, or more complex — as contrasted with classical Bayesian inference.\nFor more of Kosoy's research, see \"Optimal polynomial-time estimators\" and the Intelligent Agent Foundations Forum.\n\n \nThe post New paper: \"Forecasting using incomplete models\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New paper: “Forecasting using incomplete models”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=16", "id": "4c20a36a46c84b4b0f603ecd50eac590"} {"text": "June 2018 Newsletter\n\n\nUpdates\n\nNew research write-ups and discussions: Logical Inductors Converge to Correlated Equilibria (Kinda)\nMIRI researcher Tsvi Benson-Tilsen and Alex Zhu ran an AI safety retreat for MIT students and alumni.\nAndrew Critch discusses what kind of advice to give to junior AI-x-risk-concerned researchers, and I clarify two points about MIRI's strategic view.\nFrom Eliezer Yudkowsky: Challenges to Paul Christiano's Capability Amplification Proposal. (Cross-posted to LessWrong.)\n\nNews and links\n\nJessica Taylor discusses the relationship between decision theory, game theory, and the NP and PSPACE complexity classes.\nFrom OpenAI's Geoffrey Irving, Paul Christiano, and Dario Amodei: an AI safety technique based on training agents to debate each other. And from Amodei and Danny Hernandez, an analysis showing that \"since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5 month-doubling time\".\nChristiano asks: Are Minimal Circuits Daemon-Free? and When is Unaligned AI Morally Valuable?\nThe Future of Humanity Institute's Allan Dafoe discusses the future of AI, international governance, and macrostrategy on the 80,000 Hours Podcast.\n\n\nThe post June 2018 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "June 2018 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=16", "id": "5ee296d7dc9e151b23afe08edea5bed2"} {"text": "May 2018 Newsletter\n\n\nUpdates\n\nNew research write-ups and discussions: Resource-Limited Reflective Oracles; Computing An Exact Quantilal Policy\nNew at AI Impacts: Promising Research Projects\nMIRI research fellow Scott Garrabrant and associates Stuart Armstrong and Vanessa Kosoy are among the winners in the second round of the AI Alignment Prize.
First place goes to Tom Everitt and Marcus Hutter's \"The Alignment Problem for History-Based Bayesian Reinforcement Learners.\"\nOur thanks to our donors in REG's Spring Matching Challenge and to online poker players Chappolini, donthnrmepls, FMyLife, ValueH, and xx23xx, who generously matched $47,000 in donations to MIRI, plus another $250,000 to the Good Food Institute, GiveDirectly, and other charities.\n\nNews and links\n\nOpenAI's charter predicts that \"safety and security concerns will reduce [their] traditional publishing in the future\" and emphasizes the importance of \"long-term safety\" and avoiding late-stage races between AGI developers.\nMatthew Rahtz recounts lessons learned while reproducing Christiano et al.'s \"Deep Reinforcement Learning from Human Preferences.\"\n\n\nThe post May 2018 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "May 2018 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=16", "id": "83f39c4c67de14aa82d9e72908fbbcd4"} {"text": "Challenges to Christiano's capability amplification proposal\n\n\nThe following is a basically unedited summary I wrote up on March 16 of my take on Paul Christiano's AGI alignment approach (described in \"ALBA\" and \"Iterated Distillation and Amplification\"). Where Paul had comments and replies, I've included them below.\n\nI see a lot of free variables with respect to what exactly Paul might have in mind. I've sometimes tried presenting Paul with my objections and then he replies in a way that locally answers some of my question but I think would make other difficulties worse. My global objection is thus something like, \"I don't see any concrete setup and consistent simultaneous setting of the variables where this whole scheme works.\" These difficulties are not minor or technical; they appear to me quite severe. I try to walk through the details below.\nIt should be understood at all times that I do not claim to be able to pass Paul's ITT for Paul's view and that this is me criticizing my own, potentially straw misunderstanding of what I imagine Paul might be advocating.\n\n \n\nPaul Christiano\nOverall take: I think that these are all legitimate difficulties faced by my proposal and to a large extent I agree with Eliezer's account of those problems (though not his account of my current beliefs).\nI don't understand exactly how hard Eliezer expects these problems to be; my impression is \"just about as hard as solving alignment from scratch,\" but I don't have a clear sense of why.\nTo some extent we are probably disagreeing about alternatives. From my perspective, the difficulties with my approach (e.g. better understanding the forms of optimization that cause trouble, or how to avoid optimization daemons in systems about as smart as you are, or how to address X-and-only-X) are also problems for alternative alignment approaches. I think it's a mistake to think that tiling agents, or decision theory, or naturalized induction, or logical uncertainty, are going to make the situation qualitatively better for these problems, so work on those problems looks to me like procrastinating on the key difficulties. 
I agree with the intuition that progress on the agent foundations agenda \"ought to be possible,\" and I agree that it will help at least a little bit with the problems Eliezer describes in this document, but overall agent foundations seems way less promising than a direct attack on the problems (given that we haven't tried the direct attack nearly enough to give up). Working through philosophical issues in the context of a concrete alignment strategy generally seems more promising to me than trying to think about them in the abstract, and I think this is evidenced by the fact that most of the core difficulties in my approach would also afflict research based on agent foundations.\nThe main way I could see agent foundations research as helping to address these problems, rather than merely deferring them, is if we plan to eschew large-scale ML altogether. That seems to me like a very serious handicap, so I'd only go that direction once I was quite pessimistic about solving these problems. My subjective experience is of making continuous significant progress rather than being stuck. I agree there is clear evidence that the problems are \"difficult\" in the sense that we are going to have to make progress in order to solve them, but not that they are \"difficult\" in the sense that P vs. NP or even your typical open problem in CS is probably difficult (and even then if your options were \"prove P != NP\" or \"try to beat Google at building an AGI without using large-scale ML,\" I don't think it's obvious which option you should consider more promising).\n\n\nFirst and foremost, I don't understand how \"preserving alignment while amplifying capabilities\" is supposed to work at all under this scenario, in a way consistent with other things that I've understood Paul to say.\nI want to first go through an obvious point that I expect Paul and I agree upon: Not every system of locally aligned parts has globally aligned output, and some additional assumption beyond \"the parts are aligned\" is necessary to yield the conclusion \"global behavior is aligned\". The straw assertion \"an aggregate of aligned parts is aligned\" is the reverse of the argument that Searle uses to ask us to imagine that an (immortal) human being who speaks only English, who has been trained to do things with many many pieces of paper that instantiate a Turing machine, can't be part of a whole system that understands Chinese, because the individual pieces and steps of the system aren't locally imbued with understanding Chinese. Here the compositionally non-preserved property is \"lack of understanding of Chinese\"; we can't expect \"alignment\" to be any more necessarily preserved than this, except by further assumptions.\nThe second-to-last time Paul and I conversed at length, I kept probing Paul for what in practice the non-compacted-by-training version of a big aggregate of small aligned agents would look like. He described people, living for a single day, routing around phone numbers of other agents with nobody having any concept of the global picture. I used the term \"Chinese Room Bureaucracy\" to describe this. Paul seemed to think that this was an amusing but perhaps not inappropriate term.\nIf no agent in the Chinese Room Bureaucracy has a full view of which actions have which consequences and why, this cuts off the most obvious route by which the alignment of any agent could apply to the alignment of the whole. The way I usually imagine things, the alignment of an agent applies to things that the agent understands.
If you have a big aggregate of agents that understands something the little local agent doesn't understand, the big aggregate doesn't inherit alignment from the little agents. Searle's Chinese Room can understand Chinese even if the person inside it doesn't understand Chinese, and this correspondingly implies, by default, that the person inside the Chinese Room is powerless to express their own taste in restaurant orders.\nI don't understand Paul's model of how a ton of little not-so-bright agents yield a big powerful understanding in aggregate, in a way that doesn't effectively consist of them running AGI code that they don't understand.\n \n\nPaul Christiano\nThe argument for alignment isn't that \"a system made of aligned neurons is aligned.\" Unalignment isn't a thing that magically happens; it's the result of specific optimization pressures in the system that create trouble. My goal is to (a) first construct weaker agents who aren't internally doing problematic optimization, (b) put them together in a way that improves capability without doing other problematic optimization, (c) iterate that process.\n\n \nPaul has previously challenged me to name a bottleneck that I think a Christiano-style system can't pass. This is hard because (a) I'm not sure I understand Paul's system, and (b) it's clearest if I name a task for which we don't have a present crisp algorithm. But:\nThe bottleneck I named in my last discussion with Paul was, \"We have copies of a starting agent, which run for at most one cumulative day before being terminated, and this agent hasn't previously learned much math but is smart and can get to understanding algebra by the end of the day even though the agent started out knowing just concrete arithmetic. How does a system of such agents, without just operating a Turing machine that operates an AGI, get to the point of inventing Hessian-free optimization in a neural net?\"\nThis is a slightly obsolete example because nobody uses Hessian-free optimization anymore. But I wanted to find an example of an agent that needed to do something that didn't have a simple human metaphor. We can understand second derivatives using metaphors like acceleration. \"Hessian-free optimization\" is something that doesn't have an obvious metaphor that can explain it, well enough to use it in an engineering design, to somebody who doesn't have a mathy and not just metaphorical understanding of calculus. Even if it did have such a metaphor, that metaphor would still be very unlikely to be invented by someone who didn't understand calculus.\nI don't see how Paul expects lots of little agents who can learn algebra in a day, being run in sequence, to aggregate into something that can build designs using Hessian-free optimization, without the little agents having effectively the role of an immortal dog that's been trained to operate a Turing machine. So I also don't see how Paul expects the putative alignment of the little agents to pass through this mysterious aggregation form of understanding, into alignment of the system that understands Hessian-free optimization.\nI expect this is already understood, but I state as an obvious fact that alignment is not in general a compositionally preserved property of cognitive systems: If you train a bunch of good and moral people to operate the elements of a Turing machine and nobody has a global view of what's going on, their goodness and morality does not pass through to the Turing machine. 
Even if we let the good and moral people have discretion as to when to write a different symbol than the usual rules call for, they still can't be effective at aligning the global system, because they don't individually understand whether the Hessian-free optimization is being used for good or evil, because they don't understand Hessian-free optimization or the thoughts that incorporate it. So we would not like to rest the system on the false assumption \"any system composed of aligned subagents is aligned\", which we know to be generally false because of this counterexample. We would like there to instead be some narrower assumption, perhaps with additional premises, which is actually true, on which the system's alignment rests. I don't know what narrower assumption Paul wants to use.\n\nPaul asks us to consider AlphaGo as a model of capability amplification.\nMy view of AlphaGo would be as follows: We understand Monte Carlo Tree Search. MCTS is an iterable algorithm whose intermediate outputs can be plugged into further iterations of the algorithm. So we can use supervised learning where our systems of gradient descent can capture and foreshorten the computation of some but not all of the details of winning moves revealed by the short MCTS, plug in the learned outputs to MCTS, and get a pseudo-version of \"running MCTS longer and wider\" which is weaker than an MCTS actually that broad and deep, but more powerful than the raw MCTS run previously. The alignment of this system is provided by the crisp formal loss function at the end of the MCTS.\nHere's an alternate case where, as far as I can tell, a naive straw version of capability amplification clearly wouldn't work. Suppose we have an RNN that plays Go. It's been constructed in such fashion that if we iterate the RNN for longer, the Go move gets somewhat better. \"Aha,\" says the straw capability amplifier, \"clearly we can just take this RNN, train another network to approximate its internal state after 100 iterations from the initial Go position; we feed that internal state into the RNN at the start, then train the amplifying network to approximate the internal state of that RNN after it runs for another 200 iterations. The result will clearly go on trying to 'win at Go' because the original RNN was trying to win at Go; the amplified system preserves the values of the original.\" This doesn't work because, let us say by hypothesis, the RNN can't get arbitrarily better at Go if you go on iterating it; and the nature of the capability amplification setup doesn't permit any outside loss function that could tell the amplified RNN whether it's doing better or worse at Go.\n \n\nPaul Christiano\nI definitely agree that amplification doesn't work better than \"let the human think for arbitrarily long.\" I don't think that's a strong objection, because I think humans (even humans who only have a short period of time) will eventually converge to good enough answers to the questions we face.\n\n \nThe RNN has only whatever opinion it converges to, or whatever set of opinions it diverges to, to tell itself how well it's doing. This is exactly what it is for capability amplification to preserve alignment; but this in turn means that capability amplification only works to the extent that what we are amplifying has within itself the capability to be very smart in the limit.\nIf we're effectively constructing a civilization of long-lived Paul Christianos, then this difficulty is somewhat alleviated. 
There are still things that can go wrong with this civilization qua civilization (even aside from objections I name later as to whether we can actually safely and realistically do that). I do however believe that a civilization of Pauls could do nice things.\nBut other parts of Paul's story don't permit this, or at least that's what Paul was saying last time; Paul's supervised learning setup only lets the simulated component people operate for a day, because we can't get enough labeled cases if the people have to each run for a month.\nFurthermore, as I understand it, the \"realistic\" version of this is supposed to start with agents dumber than Paul. According to my understanding of something Paul said in answer to a later objection, the agents in the system are supposed to be even dumber than an average human (but aligned). It is not at all obvious to me that an arbitrarily large system of agents with IQ 90, who each only live for one day, can implement a much smarter agent in a fashion analogous to the internal agents themselves achieving understandings to which they can apply their alignment in a globally effective way, rather than them blindly implementing a larger algorithm they don't understand.\nI'm not sure a system of one-day-living IQ-90 humans ever gets to the point of inventing fire or the wheel.\nIf Paul has an intuition saying \"Well, of course they eventually start doing Hessian-free optimization in a way that makes their understanding effective upon it to create global alignment; I can't figure out how to convince you otherwise if you don't already see that,\" I'm not quite sure where to go from there, except onwards to my other challenges.\n \n\nPaul Christiano\nWell, I can see one obvious way to convince you otherwise: actually run the experiment. But before doing that I'd like to be more precise about what you expect to work and not work, since I'm not going to literally do the HF optimization example (developing new algorithms is way, way beyond the scope of existing ML). I think we can do stuff that looks (to me) even harder than inventing HF optimization. But I don't know if I have a good enough model of your model to know what you'd actually consider harder.\n\n \nUnless of course you have so many agents in the (uncompressed) aggregate that the aggregate implements a smarter genetic algorithm that is maximizing the approval of the internal agents. If you take something much smarter than IQ 90 humans living for one day, and train it to get the IQ 90 humans to output large numbers signaling their approval, I would by default expect it to hack the IQ 90 one-day humans, who are not secure systems. We're back to the global system being smarter than the individual agents in a way which doesn't preserve alignment.\n \n\nPaul Christiano\nDefinitely agree that even if the agents are aligned, they can implement unaligned optimization, and then we're back to square one. Amplification only works if we can improve capability without doing unaligned optimization. I think this is a disagreement about the decomposability of cognitive work. I hope we can resolve it by actually finding concrete, simple tasks where we have differing intuitions, and then doing empirical tests.\n\n \nThe central interesting-to-me idea in capability amplification is that by exactly imitating humans, we can bypass the usual dooms of reinforcement learning. 
If arguendo you can construct an exact imitation of a human, it possesses exactly the same alignment properties as the human; and this is true in a way that is not true if we take a reinforcement learner and ask it to maximize an approval signal originating from the human. (If the subject is Paul Christiano, or Carl Shulman, I for one am willing to say these humans are reasonably aligned; and I'm pretty much okay with somebody giving them the keys to the universe in expectation that the keys will later be handed back.)\nIt is not obvious to me how fast alignment-preservation degrades as the exactness of the imitation is weakened. This matters because of things Paul has said which sound to me like he's not advocating for perfect imitation, in response to challenges I've given about how perfect imitation would be very expensive. That is, the answer he gave to a challenge about the expense of perfection makes the answer to \"How fast do we lose alignment guarantees as we move away from perfection?\" become very important.\nOne example of a doom I'd expect from standard reinforcement learning would be what I'd term the \"X-and-only-X\" problem. I unfortunately haven't written this up yet, so I'm going to try to summarize it briefly here.\nX-and-only-X is what I call the issue where the property that's easy to verify and train is X, but the property you want is \"this was optimized for X and only X and doesn't contain a whole bunch of possible subtle bad Ys that could be hard to detect formulaically from the final output of the system\".\nFor example, imagine X is \"give me a program which solves a Rubik's Cube\". You can run the program and verify that it solves Rubik's Cubes, and use a loss function over its average performance which also takes into account how many steps the program's solutions require.\nThe property Y is that the program the AI gives you also modulates RAM to send GSM cellphone signals.\nThat is: It's much easier to verify \"This is a program which at least solves the Rubik's Cube\" than \"This is a program which was optimized to solve the Rubik's Cube and only that and was not optimized for anything else on the side.\"\nIf I were going to talk about trying to do aligned AGI under the standard ML paradigms, I'd talk about how this creates a differential ease of development between \"build a system that does X\" and \"build a system that does X and only X and not Y in some subtle way\". If you just want X however unsafely, you can build the X-classifier and use that as a loss function and let reinforcement learning loose with whatever equivalent of gradient descent or other generic optimization method the future uses. If the safety property you want is optimized-for-X-and-just-X-and-not-any-possible-number-of-hidden-Ys, then you can't write a simple loss function for that the way you can for X.\n \n\nPaul Christiano\nAccording to my understanding of optimization / use of language: the agent produced by RL is optimized only for X. However, optimization for X is liable to produce a Y-optimizer. So the actions of the agent are both X-optimized and Y-optimized.\n\n \nThe team that's building a less safe AGI can plug in the X-evaluator and let rip, the team that wants to build a safe AGI can't do things the easy way and has to solve new basic problems in order to get a trustworthy system. 
It's not unsolvable, but it's an element of the class of added difficulties of alignment such that the whole class extremely plausibly adds up to an extra two years of development.\nIn Paul's capability-amplification scenario, if we can get exact imitation, we are genuinely completely bypassing the whole paradigm that creates the X-and-only-X problem. If you can get exact imitation of a human, the outputs have only and exactly whatever properties the human already has. This kind of genuinely different viewpoint is why I continue to be excited about Paul's thinking.\n \n\nPaul Christiano\nI agree that perfect imitation would be a way to get around the X-and-only-X problem. However, I don't think that it's plausible and it's not how my approach hopes to get around the X-and-only-X problem.\nI would solve X-and-only-X in two steps:\nFirst, given an agent and an action which has been optimized for undesirable consequence Y, we'd like to be able to tell that the action has this undesirable side effect. I think we can do this by having a smarter agent act as an overseer, and giving the smarter agent suitable insight into the cognition of the weaker agent (e.g. by sharing weights between the weak agent and an explanation-generating agent). This is what I'm calling informed oversight.\nSecond, given an agent, identify situations in which it is especially likely to produce bad outcomes, or proofs that it won't, or enough understanding of its internals that you can see why it won't. This is discussed in \"Techniques for Optimizing Worst-Case Performance.\"\n(It also obviously requires a smarter agent, which you hope to get by induction + amplification).\nI think that both of those are hard problems, in addition to the assumption that amplification will work. But I don't yet see reason to be super pessimistic about either of them.\n\n \nOn the other hand, suppose we don't have exact imitation. How fast do we lose the defense against X-and-only-X? Well, that depends on the inexactness of the imitation; under what kind of distance metric is the imperfect imitation 'near' to the original? Like, if we're talking about Euclidean distance in the output, I expect you lose the X-and-only-X guarantee pretty damn fast against smart adversarial perturbations.\nOn the other other hand, suppose that the inexactness of the imitation is \"This agent behaves exactly like Paul Christiano but 5 IQ points dumber.\" If this is only and precisely the form of inexactness produced, and we know that for sure, then I'd say we have a pretty good guarantee against slightly-dumber-Paul producing the likes of Rubik's Cube solvers containing hidden GSM signalers.\nOn the other other other hand, suppose the inexactness of the imitation is \"This agent passes the Turing Test; a human can't tell it apart from a human.\" Then X-and-only-X is thrown completely out the window. 
We have no guarantee of non-Y for any Y a human can't detect, which covers an enormous amount of lethal territory, which is why we can't just sanitize the outputs of an untrusted superintelligence by having a human inspect the outputs to see if they have any humanly obvious bad consequences.\n\nSpeaking of inexact imitation: It seems to me that having an AI output a high-fidelity imitation of human behavior, sufficiently high-fidelity to preserve properties like \"being smart\" and \"being a good person\" and \"still being a good person under some odd strains like being assembled into an enormous Chinese Room Bureaucracy\", is a pretty huge ask.\nIt seems to me obvious, though this is the sort of point where I've been surprised about what other people don't consider obvious, that in general exact imitation is a bigger ask than superior capability. Building a Go player that imitates Shuusaku's Go play so well that a scholar couldn't tell the difference, is a bigger ask than building a Go player that could defeat Shuusaku in a match. A human is much smarter than a pocket calculator but would still be unable to imitate one without using a paper and pencil; to imitate the pocket calculator you need all of the pocket calculator's abilities in addition to your own.\nCorrespondingly, a realistic AI we build that literally passes the strong version of the Turing Test would probably have to be much smarter than the other humans in the test, probably smarter than any human on Earth, because it would have to possess all the human capabilities in addition to its own. Or at least all the human capabilities that can be exhibited to another human over the course of however long the Turing Test lasts. (Note that on the version of capability amplification I heard, capabilities that can be exhibited over the course of a day are the only kinds of capabilities we're allowed to amplify.)\n \n\nPaul Christiano\nTotally agree, and for this reason I agree that you can't rely on perfect imitation to solve the X-and-only-X problem and hence need other solutions. If you convince me that either informed oversight or reliability is impossible, then I'll be largely convinced that I'm doomed.\n\n \nAn AI that learns to exactly imitate humans, not just passing the Turing Test to the limits of human discrimination on human inspection, but perfect imitation with all added bad subtle properties thereby excluded, must be so cognitively powerful that its learnable hypothesis space includes systems equivalent to entire human brains. I see no way that we're not talking about a superintelligence here.\nSo to postulate perfect imitation, we would first of all run into the problems that:\n(a)  The AGI required to learn this imitation is extremely powerful, and this could imply a dangerous delay between when we can build any dangerous AGI at all, and when we can build AGIs that would work for alignment using perfect-imitation capability amplification.\n(b)  Since we cannot invoke a perfect-imitation capability amplification setup to get this very powerful AGI in the first place (because it is already the least AGI that we can use to even get started on perfect-imitation capability amplification), we already have an extremely dangerous unaligned superintelligence sitting around that we are trying to use to implement our scheme for alignment.\nNow, we may perhaps reply that the imitation is less than perfect and can be done with a dumber, less dangerous AI; perhaps even so dumb as to not be enormously superintelligent. 
But then we are tweaking the \"perfection of imitation\" setting, which could rapidly blow up our alignment guarantees against the standard dooms of standard machine learning paradigms.\nI'm worried that you have to degrade the level of imitation a lot before it becomes less than an enormous ask, to the point that what's being imitated isn't very intelligent, isn't human, and/or isn't known to be aligned.\nTo be specific: I think that if you want to imitate IQ-90 humans thinking for one day, and imitate them so specifically that the imitations are generally intelligent and locally aligned even in the limit of being aggregated into weird bureaucracies, you're looking at an AGI powerful enough to think about whole systems loosely analogous to IQ-90 humans.\n \n\nPaul Christiano\nIt's important that my argument for alignment-of-amplification goes through not doing problematic optimization. So if we combine that with a good enough solution to informed oversight and reliability (and amplification, and the induction working so far…), then we can continue to train imperfect imitations that definitely don't do problematic optimization. They'll mess up all over the place, and so might not be able to be competent (another problem amplification needs to handle), but the goal is to set things up so that being a lot dumber doesn't break alignment.\n\n \nI think that is a very powerful AGI. I think this AGI is smart enough to slip all kinds of shenanigans past you, unless you are using a methodology that can produce faithful imitations from unaligned AGIs. I think this is an AGI that can do powerful feats of engineering, unless it is somehow able to simulate humans doing powerful feats of engineering without itself being capable of powerful feats of engineering.\nAnd then furthermore the capability amplification schema requires the AGI to be powerful enough to learn to imitate amplified systems of humans as well!\nIf you don't degrade the AI to well below the point of superintelligence, we require extremely strong Faraday-cage keep-God-in-a-box don't-peek-at-the-code AI boxing. So we cannot for example let an untrusted superintelligence originate queries that it can use to learn human behavior; it has to be strictly unsupervised example-based learning rather than a query model. This interacts with worries about sample complexity below.\n \n\nPaul Christiano\nI appreciate how often you had to argue against disjunctions here in order to give me the benefit of the doubt.\n\n \nOr you need to postulate some prior alignment of the AGI that lets you robustly and safely direct it at the imitation problem. Then (a) your alignment schema depends on an unspecified prior alignment schema that already works to make very powerful AGIs do things. And (b) you probably aren't allowing unrestrained intelligence explosions, which probably introduces a big unavoidable delay between the first point where very powerful AGI is technologically possible, and the later point where you can build and safely align the extremely powerful AGI that you're using as the base unit in your imitation scheme.\n\nThe last I checked with Paul, his scheme bit the bullet on using powerful smart dangerous units to imitate humans. 
Paul's scheme for bypassing this, as I heard and understood it back then, involved a scheme which:\n\nTrained a million smart predictors of sufficient diversity that at least one predictor would happen to end up with the true sole simple goal of predicting faithfully;\nCompared the predictors against each other, finding cases where their predictions diverged sufficiently;\nTested them against reality;\nHad enough sampling power to perform a sufficient number of tests that, even if a superintelligence cunningly delays the time when it starts trying to slip in bad predictions, we can pause the system and run the experimental test and generate new samples and kick out the unfaithful predictor at that time.\n\nThat's a one-millionfold performance penalty, if I understood it correctly.\nIf that was just a concrete illustration to help me understand what's happening at all, and actually there's supposed to be some obvious less naive more efficient scheme for making use of mostly untrusted superintelligences which doesn't have the giant performance penalty, then I haven't seen it yet.\n \n\nPaul Christiano\nThere are several possible techniques for cutting this performance penalty down, e.g., representing a large ensemble implicitly or using a tricky distillation procedure. I totally agree if you have a millionfold slowdown then you are out of luck.\nWhen we talked about this, I agreed that \"train a reliable system\" was an important subproblem. Train an implicit ensemble is one possible approach to that problem. At the time of our discussion I listed 3 techniques. Since then I've become quite pessimistic about training an implicit ensemble, and have become more optimistic about verification and transparency. (My thinking about the problem is also generally much sharper.) The three approaches I currently consider most live are the ones in \"Techniques for Optimizing Worst-Case Performance.\"\n(As usual, all of it requires amplification + the induction.)\n(In the original discussion, I listed \"large ensemble\" as a possible approach in response to your \"how could you possibly solve this?\" question, not as a necessary ingredient.)\n\n\n\nI also have trouble seeing how kicking out unfaithful predictors is possible under an unsupervised-learning paradigm that doesn't actually allow hypercomputation.\nThe problem being: the exact output of a human is always going to be very improbable. Even if the agents can only send each other text messages and no illustrations (which further reduces their collective intelligence and ability to produce textbooks readable in a day), a text message is still an enormous possibility space. If we ask superintelligences to give us their predictions for what the human does next, their predictions cannot possibly actually include a giant lookup table next to all the probable outputs of the human.\nI think I see how I'd \"solve\" this problem using hypercomputation and an enormous number of samples: I require the superintelligences to give me their giant lookup tables of probabilities for exact outputs, calculate total distances between probability tables (KL divergence or whatever), and if the distance is great enough, I sample a human and do a Bayesian update. I drop any superintelligences whose prior probability goes low enough. I keep doing this until only agreeing superintelligences remain.\nUnder this system it looks to me like you need a lot of samples to kick out any superintelligences trying to sneak in bad suggestions. 
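To make the bookkeeping of that filtering loop concrete, here is a minimal Python sketch of the scheme as just described. It assumes, unrealistically, that each predictor can hand over an explicit probability table for the human's next output, and it treats the human sample as the expensive step; the function names and thresholds are invented for illustration, not anything from Paul's actual proposal.

```python
import math

def kl(p, q):
    """KL divergence between two probability tables over the same finite output space."""
    return sum(pi * math.log(pi / q.get(x, 1e-12)) for x, pi in p.items() if pi > 0)

def winnow(predictors, sample_human, disagreement_threshold=0.01, drop_below=1e-3):
    """Keep consulting real humans until the surviving predictors agree.

    predictors: dict mapping a predictor's name to a zero-argument function that
        returns an explicit probability table {output: probability} for the
        human's next output (the hypercomputation-flavored assumption).
    sample_human: zero-argument function that actually runs a human and returns
        their output -- the expensive step (one person thinking for a day).
    Returns the names of the predictors still standing.
    """
    odds = {name: 1.0 for name in predictors}  # relative credence in each predictor
    while True:
        live = [n for n, o in odds.items() if o > drop_below]
        if len(live) <= 1:
            return live
        tables = {n: predictors[n]() for n in live}
        disagreement = max(
            kl(tables[a], tables[b]) for a in live for b in live if a != b
        )
        if disagreement < disagreement_threshold:
            return live  # survivors agree closely enough; stop spending samples
        outcome = sample_human()
        for n in live:
            # Bayesian update: predictors that put little mass on what the human
            # actually did lose credence and are eventually dropped.
            odds[n] *= tables[n].get(outcome, 1e-12)
        top = max(odds[n] for n in live)
        for n in live:
            odds[n] /= top  # renormalize so the best surviving predictor sits at 1.0
```

Even in this toy form the cost structure is visible: every round of sufficient disagreement burns one real human sample, which is what drives the sample counts below.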
Let's say that an uncompressed Chinese Room bureaucracy is doing the equivalent of sending around a quadrillion messages, and that the output can be corrupted by an adversarial attack against one billion of those messages such that there's a \\(\\frac{1}{1,000,000}\\) chance of outputting an adversarially chosen alternative to each of those billion messages. Well, the only way a superintelligence can allocate a \\(\\frac{1}{1,000,000}\\) generation probability to those fake messages, is by stealing 0.0001% probability mass from all the other messages that might be output. So each time we check against a real human, we update by \\(\\log_{2} \\frac{999,999}{1,000,000}\\) bits, and after taking a million samples we've driven down the relative odds of this naughty superintelligence by \\(\\frac{1}{e}\\).\nWe had to take a million samples to do that, and each of the samples involves a human thinking for one day. And that just got us \\(e\\):1 evidence, so we've got to take a million samples 7 times to get to the point of 1000:1 odds against the superintelligence being honest. And if we have a million superintelligences in the distribution, some of them can make skewed bets à la predicting coinflips whereby a thousand of those will be able to gain 1000:1 advantages over the others, so actually you've got to take 14 million samples to be sure of beating any bad predictor classes that collectively try that scheme.\nI think last time I talked to Paul about this, Paul was like, \"Sure, we're talking about a gigantic government project that can take millions of samples.\"\nI'm not sure, I could be wrong about this, but I think you actually need 14 million samples per bad predictor that you want to kick out of the distribution.\nEven if I'm wrong about that, we need millions of humans who are smart enough and aligned enough that random samples from their number can agglomerate into a smart aligned Chinese Room Bureaucracy.\nI also don't see a way to do this without comparing the giant lookup tables using hypercomputation, but there I might just be missing a math trick.\n \n\nPaul Christiano\nMy best guess is that this can be done under plausible assumptions with O(1) samples per bad predictor. It's tricky, but we are good at tricky math problems, so it's not very scary relative to the other problems we face.\n(Unfortunately, I think that a large implicit ensemble is probably stuck anyway, in part because a million predictors isn't enough. But I'm not confident about that.)\n\n \n(If you're thinking of GANs, then so far as I can tell, the discriminator has to be at least as smart as the generator, and you have to trust the discriminator, and there isn't any simpler problem with respect to how you find a trusted superintelligence within a collective of untrusted ones to act as your discriminator.)\n(EDIT: Actually, after thinking about this for another five minutes, maybe I do see how to do it with GANs and lower sample complexity.)\nAn even larger issue is that I don't see any obvious way to carry out a scheme like this one at all with respect to imperfect imitations. (And the above scheme I thought of with GANs would also just fail.)\n \n\nPaul Christiano\nI think we could probably get over this too, it's another tricky math problem. 
I think this kind of problem is reliably either impossible, or else radically easier than most of the other stuff we are dealing with in alignment.\n(Though I endorse the overall intuition that large implicit ensembles are doomed.)\n\n\nI think these arguments are collectively something like a crux. That is, unless I've missed one of my own thought processes in the course of writing this up rapidly, or assumed a shared background assumption that isn't actually shared.\nLet's say that D is the degree of imperfection allowed by some system of capability amplification, and call D-imperfect imitations D-imitations. Iterated D-imitations of amplified systems of D-imitations will be termed DD-imitations. Then I think I'd start to be pragmatically interested in capability amplification as I understood it, if I believed all of the following:\n\nWe can, before the world is ended by other unaligned AIs, get AIs powerful enough to learn D-imitations and DD-imitations;\nD-imitations and DD-imitations robustly preserve the goodness of the people being imitated, despite the imperfection of the imitation;\nD-imitations agglomerate to sufficient cognitive power to perform a pivotal act in a way that causes the alignment of the components to be effective upon aligning the whole; and imperfect DD-imitation preserves this property;\nWe can find any way of either:\n\nIndividually trusting one AI that powerful to faithfully perform the task of D-imitation (but then why can't we just use this scheme to align a powerful AGI in the first place?);\nFind a scheme for agglomerating mostly untrustworthy powerful intelligences which:\n\nDoesn't require giant lookup tables, doesn't require a GAN with a trusted discriminator unless you can say how to produce the trusted discriminator, and can use actual human samples as fuel to discriminate trustworthiness among untrusted generators of D-imitations.\nIs extremely sample-efficient (let's say you can clear 100 people who are trustworthy to be part of an amplified-capability system, which already sounds to me like a huge damned ask); or you can exhibit to me a social schema which agglomerates mostly untrusted humans into a Chinese Room Bureaucracy that we trust to perform a pivotal task, and a political schema that you trust to do things involving millions of humans, in which case you can take millions of samples but not billions. Honestly, I just don't currently believe in AI scenarios in which good and trustworthy governments carry out complicated AI alignment schemas involving millions of people, so if you go down this path we end up with different cruxes; but I would already be pretty impressed if you got all the other cruxes.\nIs not too computationally inefficient; more like 20-1 slowdown than 1,000,000-1. Because I don't think you can get the latter degree of advantage over other AGI projects elsewhere in the world. Unless you are postulating massive global perfect surveillance schemes that don't wreck humanity's future, carried out by hyper-competent, hyper-trustworthy great powers with a deep commitment to cosmopolitan value — very unlike the observed characteristics of present great powers, and going unopposed by any other major government. Again, if we go down this branch of the challenge then we are no longer at the original crux.\n\n\n\nI worry that going down the last two branches of the challenge could create the illusion of a political disagreement, when I have what seem to me like strong technical objections at the previous branches. 
I would prefer that the more technical cruxes be considered first. If Paul answered all the other technical cruxes and presented a scheme for capability amplification that worked with a moderately utopian world government, I would already have been surprised. I wouldn't actually try it because you cannot get a moderately utopian world government, but Paul would have won many points and I would be interested in trying to refine the scheme further because it had already been refined further than I thought possible. On my present view, trying anything like this should either just plain not get started (if you wait to satisfy extreme computational demands and sampling power before proceeding), just plain fail (if you use weak AIs to try to imitate humans), or just plain kill you (if you use a superintelligence).\n \n\nPaul Christiano\nI think that the disagreement is almost entirely technical. I think if we really needed 1M people it wouldn't be a dealbreaker, but that's because of a technical rather than political disagreement (about what those people need to be doing). And I agree that 1,000,000x slowdown is unacceptable (I think even a 10x slowdown is almost totally doomed).\n\n \nI restate that these objections seem to me to collectively sum up to \"This is fundamentally just not a way you can get an aligned powerful AGI unless you already have an aligned superintelligence\", rather than \"Some further insights are required for this to work in practice.\" But who knows what further insights may really bring? Movement in thoughtspace consists of better understanding, not cleverer tools.\nI continue to be excited by Paul's thinking on this subject; I just don't think it works in the present state.\n \n\nPaul Christiano\nOn this point, we agree. I don't think anyone is claiming to be done with the alignment problem, the main question is about what directions are most promising for making progress.\n\n \nOn my view, this is not an unusual state of mind to be in with respect to alignment research. I can't point to any MIRI paper that works to align an AGI. Other people seem to think that they ought to currently be in a state of having a pretty much workable scheme for aligning an AGI, which I would consider to be an odd expectation. I would think that a sane point of view consisted in having ideas for addressing some problems that created further difficulties that needed to be fixed and didn't address most other problems at all; a map with what you think are the big unsolved areas clearly marked. Being able to have a thought which genuinely squarely attacks any alignment difficulty at all despite any other difficulties it implies, is already in my view a large and unusual accomplishment. The insight \"trustworthy imitation of human external behavior would avert many default dooms as they manifest in external behavior unlike human behavior\" may prove vital at some point. 
I continue to recommend throwing as much money at Paul as he says he can use, and I wish he said he knew how to use larger amounts of money.\nThe post Challenges to Christiano's capability amplification proposal appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Challenges to Christiano’s capability amplification proposal", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=17", "id": "7980270e0ce76e097bf29f5b0207a7d9"} {"text": "April 2018 Newsletter\n\n\nUpdates\n\nA new paper: \"Categorizing Variants of Goodhart's Law\"\nNew research write-ups and discussions: Distributed Cooperation; Quantilal Control for Finite Markov Decision Processes\nNew at AI Impacts: Transmitting Fibers in the Brain: Total Length and Distribution of Lengths\nScott Garrabrant, the research lead for MIRI's agent foundations program, outlines focus areas and 2018 predictions for MIRI's research.\nScott presented on logical induction at the joint Applied Theory Workshop / Workshop in Economic Theory.\nNautilus interviews MIRI Executive Director Nate Soares.\nFrom Abram Demski: An Untrollable Mathematician Illustrated\n\nNews and links\n\nFrom FHI's Jeffrey Ding: \"Deciphering China's AI Dream.\"\nOpenAI researcher Paul Christiano writes on universality and security amplification and an unaligned benchmark. Ajeya Cotra summarizes Christiano's general approach to alignment in Iterated Distillation and Amplification.\nChristiano discusses reasoning in cases \"where it's hard to settle disputes with either formal argument or experimentation (or a combination), like policy or futurism.\"\nFrom Chris Olah and collaborators at Google and CMU: The Building Blocks of Interpretability.\nFrom Nichol, Achiam, and Schulman at OpenAI: Reptile: A Scalable Meta-Learning Algorithm.\n\n\nThe post April 2018 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "April 2018 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=17", "id": "79ccd28913487e50891eeabfe9da03ef"} {"text": "2018 research plans and predictions\n\nUpdate Nov. 23: This post was edited to reflect Scott's terminology change from \"naturalized world-models\" to \"embedded world-models.\" For a full introduction to these four research problems, see Scott Garrabrant and Abram Demski's \"Embedded Agency.\"\n\nScott Garrabrant is taking over Nate Soares' job of making predictions about how much progress we'll make in different research areas this year. Scott divides MIRI's alignment research into five categories:\n\nembedded world-models — Problems related to modeling large, complex physical environments that lack a sharp agent/environment boundary. Central examples of problems in this category include logical uncertainty, naturalized induction, multi-level world models, and ontological crises.\nIntroductory resources: \"Formalizing Two Problems of Realistic World-Models,\" \"Questions of Reasoning Under Logical Uncertainty,\" \"Logical Induction,\" \"Reflective Oracles\"\nExamples of recent work: \"Hyperreal Brouwer,\" \"An Untrollable Mathematician,\" \"Further Progress on a Bayesian Version of Logical Uncertainty\"\n\ndecision theory — Problems related to modeling the consequences of different (actual and counterfactual) decision outputs, so that the decision-maker can choose the output with the best consequences. 
Central problems include counterfactuals, updatelessness, coordination, extortion, and reflective stability.\nIntroductory resources: \"Cheating Death in Damascus,\" \"Decisions Are For Making Bad Outcomes Inconsistent,\" \"Functional Decision Theory\"\nExamples of recent work: \"Cooperative Oracles,\" \"Smoking Lesion Steelman\" (1, 2), \"The Happy Dance Problem,\" \"Reflective Oracles as a Solution to the Converse Lawvere Problem\"\n\nrobust delegation — Problems related to building highly capable agents that can be trusted to carry out some task on one's behalf. Central problems include corrigibility, value learning, informed oversight, and Vingean reflection.\nIntroductory resources: \"The Value Learning Problem,\" \"Corrigibility,\" \"Problem of Fully Updated Deference,\" \"Vingean Reflection,\" \"Using Machine Learning to Address AI Risk\"\nExamples of recent work: \"Categorizing Variants of Goodhart's Law,\" \"Stable Pointers to Value\"\n\nsubsystem alignment — Problems related to ensuring that an AI system's subsystems are not working at cross purposes, and in particular that the system avoids creating internal subprocesses that optimize for unintended goals. Central problems include benign induction.\nIntroductory resources: \"What Does the Universal Prior Actually Look Like?\", \"Optimization Daemons,\" \"Modeling Distant Superintelligences\"\nExamples of recent work: \"Some Problems with Making Induction Benign\"\n\nother — Alignment research that doesn't fall into the above categories. If we make progress on the open problems described in \"Alignment for Advanced ML Systems,\" and the progress is less connected to our agent foundations work and more ML-oriented, then we'll likely classify it here.\n\n\nThe problems we previously categorized as \"logical uncertainty\" and \"naturalized induction\" are now called \"embedded world-models\"; most of the problems we're working on in three other categories (\"Vingean reflection,\" \"error tolerance,\" and \"value learning\") are grouped together under \"robust delegation\"; and we've introduced two new categories, \"subsystem alignment\" and \"other.\"\nScott's predictions for February through December 2018 follow. 1 means \"limited\" progress, 2 \"weak-to-modest\" progress, 3 \"modest,\" 4 \"modest-to-strong,\" and 5 \"sizable.\" To help contextualize Scott's numbers, we've also translated Nate's 2015-2017 predictions (and Nate and Scott's evaluations of our progress for those years) into the new nomenclature.\n\nembedded world-models:\n\n2015 progress: 5. — Predicted: 3.\n2016 progress: 5. — Predicted: 5.\n2017 progress: 2. — Predicted: 2.\n2018 progress prediction: 3 (modest).\n\ndecision theory:\n\n2015 progress: 3. — Predicted: 3.\n2016 progress: 3. — Predicted: 3.\n2017 progress: 3. — Predicted: 3.\n2018 progress prediction: 3 (modest).\n\nrobust delegation:\n\n2015 progress: 3. — Predicted: 3.\n2016 progress: 4. — Predicted: 3.\n2017 progress: 4. — Predicted: 1.\n2018 progress prediction: 2 (weak-to-modest).\n\nsubsystem alignment (new category):\n\n2018 progress prediction: 2 (weak-to-modest).\n\nother (new category):\n\n2018 progress prediction: 2 (weak-to-modest).\n\n\n\nThese predictions are highly uncertain, but should give a rough sense of how we're planning to allocate researcher attention over the coming year, and how optimistic we are about the current avenues we're pursuing.\nNote that the new bins we're using may give a wrong impression of our prediction accuracy. 
E.g., we didn't expect much progress on Vingean reflection in 2016, whereas we did expect significant progress on value learning and error tolerance. The opposite occurred, which should count as multiple prediction failures. Because the failures were in opposite directions, however, and because we're now grouping most of Vingean reflection, value learning, and error tolerance under a single category (\"robust delegation\"), our 2016 predictions look more accurate in the above breakdown than they actually were.\nUsing our previous categories, our expectations and evaluations for 2015-2018 would be:\n\nLogical uncertainty + naturalized induction: progress 2015-2017 was 5, 5, 2; expectations 2015-2018 were 3, 5, 2, 3.\nDecision theory: progress 2015-2017 was 3, 3, 3; expectations 2015-2018 were 3, 3, 3, 3.\nVingean Reflection: progress 2015-2017 was 3, 4, 4; expectations 2015-2018 were 3, 1, 1, 2.\nError Tolerance: progress 2015-2017 was 1, 1, 2; expectations 2015-2018 were 3, 3, 1, 2.\nValue Specification: progress 2015-2017 was 1, 2, 1; expectations 2015-2018 were 1, 3, 1, 1.\n\nIn general, these predictions are based on evaluating the importance of the most important results from a given year — one large result will yield a higher number than many small results. The ratings and predictions take into account research that we haven't written up yet, though they exclude research that we don't expect to make public in the near future.\nThe post 2018 research plans and predictions appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "2018 research plans and predictions", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=17", "id": "373f1fbe0d21d9ab82ee5627b0bef0d0"} {"text": "New paper: \"Categorizing variants of Goodhart's Law\"\n\nGoodhart's Law states that \"any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.\" However, this is not a single phenomenon. In Goodhart Taxonomy, I proposed that there are (at least) four different mechanisms through which proxy measures break when you optimize for them: Regressional, Extremal, Causal, and Adversarial.\nDavid Manheim has now helped write up my taxonomy as a paper going into more detail on these mechanisms: \"Categorizing variants of Goodhart's Law.\" From the conclusion:\nThis paper represents an attempt to categorize a class of simple statistical misalignments that occur both in any algorithmic system used for optimization, and in many human systems that rely on metrics for optimization. The dynamics highlighted are hopefully useful to explain many situations of interest in policy design, in machine learning, and in specific questions about AI alignment.\nIn policy, these dynamics are commonly encountered but too-rarely discussed clearly. In machine learning, these errors include extremal Goodhart effects due to using limited data and choosing overly parsimonious models, errors that occur due to myopic consideration of goals, and mistakes that occur when ignoring causality in a system. Finally, in AI alignment, these issues are fundamental to both aligning systems towards a goal, and assuring that the system's metrics do not have perverse effects once the system begins optimizing for them.\nLet V refer to the true goal, while U refers to a proxy for that goal which was observed to correlate with V and which is being optimized in some way. 
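Before the subtypes are enumerated, a toy simulation may help fix the U-versus-V picture. This is only an illustrative sketch, not anything from the paper; the Gaussian noise model and the numbers are assumptions, chosen to match the Regressional case described just below.

```python
import random

# Regressional Goodhart in miniature: U = V + X, where V is the true goal and
# X is independent noise. Select the candidate with the highest proxy value U
# and see how its true value V compares.

random.seed(0)
candidates = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]  # (V, X)

best_v, best_x = max(candidates, key=lambda vx: vx[0] + vx[1])
best_u = best_v + best_x

print(f"selected U = {best_u:.2f}")  # extreme, because U is what we selected on
print(f"selected V = {best_v:.2f}")  # predictably smaller: part of the extremity
                                     # came from the noise term X, not the goal
```

With these settings the selected candidate's V typically comes out at roughly half its U, which is the "predictably smaller than U" effect named under Regressional Goodhart below.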
Then the four subtypes of Goodhart's Law are as follows:\n\nRegressional Goodhart — When selecting for a proxy measure, you select not only for the true goal, but also for the difference between the proxy and the goal.\n\nModel: When U is equal to V + X, where X is some noise, a point with a large U value will likely have a large V value, but also a large X value. Thus, when U is large, you can expect V to be predictably smaller than U.\nExample: Height is correlated with basketball ability, and does actually directly help, but the best player is only 6'3″, and a random 7′ person in their 20s would probably not be as good.\n\n\n\nExtremal Goodhart — Worlds in which the proxy takes an extreme value may be very different from the ordinary worlds in which the correlation between the proxy and the goal was observed.\n\nModel: Patterns tend to break at simple joints. One simple subset of worlds is those worlds in which U is very large. Thus, a strong correlation between U and V observed for naturally occurring U values may not transfer to worlds in which U is very large. Further, since there may be relatively few naturally occurring worlds in which U is very large, extremely large U may coincide with small V values without breaking the statistical correlation.\nExample: The tallest person on record, Robert Wadlow, was 8'11\" (2.72m). He grew to that height because of a pituitary disorder; he would have struggled to play basketball because he \"required leg braces to walk and had little feeling in his legs and feet.\"\n\n\n\nCausal Goodhart — When there is a non-causal correlation between the proxy and the goal, intervening on the proxy may fail to intervene on the goal.\n\nModel: If V causes U (or if V and U are both caused by some third thing), then a correlation between V and U may be observed. However, when you intervene to increase U through some mechanism that does not involve V, you will fail to also increase V.\nExample: Someone who wishes to be taller might observe that height is correlated with basketball skill and decide to start practicing basketball.\n\n\nAdversarial Goodhart — When you optimize for a proxy, you provide an incentive for adversaries to correlate their goal with your proxy, thus destroying the correlation with your goal.\n\nModel: Consider an agent A with some different goal W. Since they depend on common resources, W and V are naturally opposed. 
If you optimize U as a proxy for V, and A knows this, A is incentivized to make large U values coincide with large W values, thus stopping them from coinciding with large V values.\nExample: Aspiring NBA players might just lie about their height.\n\n\nFor more on this topic, see Eliezer Yudkowsky's write-up, Goodhart's Curse.\nThe post New paper: \"Categorizing variants of Goodhart's Law\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New paper: “Categorizing variants of Goodhart’s Law”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=17", "id": "227565f14d46ff26e76497c9a02d81b7"} {"text": "March 2018 Newsletter\n\n\nUpdates\n\nNew research write-ups and discussions: Knowledge is Freedom; Stable Pointers to Value II: Environmental Goals; Toward a New Technical Explanation of Technical Explanation; Robustness to Scale\nNew at AI Impacts: Likelihood of Discontinuous Progress Around the Development of AGI\nThe transcript is up for Sam Harris and Eliezer Yudkowsky's podcast conversation.\nAndrew Critch, previously on leave from MIRI to help launch the Center for Human-Compatible AI and the Berkeley Existential Risk Initiative, has accepted a position as CHAI's first research scientist. Critch will continue to work with and advise the MIRI team from his new academic home at UC Berkeley. Our congratulations to Critch!\nCFAR and MIRI are running a free AI Summer Fellows Program June 27 – July 14; applications are open until April 20.\n\nNews and links\n\nOpenAI co-founder Elon Musk is leaving OpenAI's Board.\nOpenAI has a new paper out on interpretable ML through teaching.\nFrom Paul Christiano: Surveil Things, Not People; Arguments About Fast Takeoff.\nPaul is offering a total of $120,000 to any independent researchers who can come up with promising alignment research projects to pursue.\nThe Centre for the Study of Existential Risk's Civilization V mod inspires a good discussion of the AI alignment problem.\n\n\nThe post March 2018 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "March 2018 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=17", "id": "444d45e802715828aafaad827cb502f7"} {"text": "Sam Harris and Eliezer Yudkowsky on \"AI: Racing Toward the Brink\"\n\n\n\nMIRI senior researcher Eliezer Yudkowsky was recently invited to be a guest on Sam Harris' \"Waking Up\" podcast. Sam is a neuroscientist and popular author who writes on topics related to philosophy, religion, and public discourse.\nThe following is a complete transcript of Sam and Eliezer's conversation, AI: Racing Toward the Brink.\nContents\n\n1. Intelligence and generality — 0:05:26\n2. Orthogonal capabilities and goals in AI — 0:25:21\n3. Cognitive uncontainability and instrumental convergence — 0:53:39\n4. The AI alignment problem — 1:09:09\n5. No fire alarm for AGI — 1:21:40\n6. Accidental AI, mindcrime, and MIRI — 1:34:30\n7. Inadequate equilibria — 1:44:40\n8. Rapid capability gain in AGI — 1:59:02\n\n\n\n \n\n1. Intelligence and generality (0:05:26)\n\nSam Harris: I am here with Eliezer Yudkowsky. Eliezer, thanks for coming on the podcast.\n\nEliezer Yudkowsky: You're quite welcome. 
It's an honor to be here.\n\nSam: You have been a much requested guest over the years. You have quite the cult following, for obvious reasons. For those who are not familiar with your work, they will understand the reasons once we get into talking about things. But you've also been very present online as a blogger. I don't know if you're still blogging a lot, but let's just summarize your background for a bit and then tell people what you have been doing intellectually for the last twenty years or so.\n\nEliezer: I would describe myself as a decision theorist. A lot of other people would say that I'm in artificial intelligence, and in particular in the theory of how to make sufficiently advanced artificial intelligences that do a particular thing and don't destroy the world as a side-effect. I would call that \"AI alignment,\" following Stuart Russell.\nOther people would call that \"AI control,\" or \"AI safety,\" or \"AI risk,\" none of which are terms that I really like.\n\nI also have an important sideline in the art of human rationality: the way of achieving the map that reflects the territory and figuring out how to navigate reality to where you want it to go, from a probability theory / decision theory / cognitive biases perspective. I wrote two or three years of blog posts, one a day, on that, and it was collected into a book called Rationality: From AI to Zombies.\n\nSam: Which I've read, and which is really worth reading. You have a very clear and aphoristic way of writing; it's really quite wonderful. I highly recommend that book.\n\nEliezer: Thank you, thank you.\n\nSam: Your background is unconventional. For instance, you did not go to high school, correct? Let alone college or graduate school. Summarize that for us.\n\nEliezer: The system didn't fit me that well, and I'm good at self-teaching. I guess when I started out I thought I was going to go into something like evolutionary psychology or possibly neuroscience, and then I discovered probability theory, statistics, decision theory, and came to specialize in that more and more over the years.\n\nSam: How did you not wind up going to high school? What was that decision like?\n\nEliezer: Sort of like a mental crash around the time I hit puberty—or like a physical crash, even. I just did not have the stamina to make it through a whole day of classes at the time. (laughs) I'm not sure how well I'd do trying to go to high school now, honestly. But it was clear that I could self-teach, so that's what I did.\n\nSam: And where did you grow up?\n\nEliezer: Chicago, Illinois.\n\nSam: Let's fast forward to the center of the bull's eye for your intellectual life here. You have a new book out, which we'll talk about second. Your new book is Inadequate Equilibria: Where and How Civilizations Get Stuck. Unfortunately, I've only read half of that, which I'm also enjoying. I've certainly read enough to start a conversation on that. But we should start with artificial intelligence, because it's a topic that I've touched a bunch on in the podcast which you have strong opinions about, and it's really how we came together. You and I first met at that conference in Puerto Rico, which was the first of these AI safety / alignment discussions that I was aware of. 
I'm sure there have been others, but that was a pretty interesting gathering.\n\nSo let's talk about AI and the possible problem with where we're headed, and the near-term problem that many people in the field and at the periphery of the field don't seem to take the problem (as we conceive it) seriously. Let's just start with the basic picture and define some terms. I suppose we should define \"intelligence\" first, and then jump into the differences between strong and weak or general versus narrow AI. Do you want to start us off on that?\n\nEliezer: Sure. Preamble disclaimer, though: In the field in general, not everyone you ask would give you the same definition of intelligence. A lot of times in cases like those it's good to sort of go back to observational basics. We know that in a certain way, human beings seem a lot more competent than chimpanzees, which seems to be a similar dimension to the one where chimpanzees are more competent than mice, or that mice are more competent than spiders. People have tried various theories about what this dimension is, they've tried various definitions of it. But if you went back a few centuries and asked somebody to define \"fire,\" the less wise ones would say: \"Ah, fire is the release of phlogiston. Fire is one of the four elements.\" And the truly wise ones would say, \"Well, fire is the sort of orangey bright hot stuff that comes out of wood and spreads along wood.\" They would tell you what it looked like, and put that prior to their theories of what it was.\nSo what this mysterious thing looks like is that humans can build space shuttles and go to the Moon, and mice can't, and we think it has something to do with our brains.\n\nSam: Yeah. I think we can make it more abstract than that. Tell me if you think this is not generic enough to be accepted by most people in the field: Whatever intelligence may be in specific contexts, generally speaking it's the ability to meet goals, perhaps across a diverse range of environments. We might want to add that it's at least implicit in the \"intelligence\" that interests us that it means an ability to do this flexibly, rather than by rote following the same strategy again and again blindly. Does that seem like a reasonable starting point?\n\nEliezer: I think that that would get fairly widespread agreement, and it matches up well with some of the things that are in AI textbooks.\nIf I'm allowed to take it a bit further and begin injecting my own viewpoint into it, I would refine it and say that by \"achieve goals\" we mean something like \"squeezing the measure of possible futures higher in your preference ordering.\" If we took all the possible outcomes, and we ranked them from the ones you like least to the ones you like most, then as you achieve your goals, you're sort of squeezing the outcomes higher in your preference ordering. You're narrowing down what the outcome would be to be something more like what you want, even though you might not be able to narrow it down very exactly.\nFlexibility. Generality. Humans are much more domain–general than mice. Bees build hives; beavers build dams; a human will look over both of them and envision a honeycomb-structured dam. We are able to operate even on the Moon, which is very unlike the environment where we evolved.\nIn fact, our only competitor in terms of general optimization—where \"optimization\" is that sort of narrowing of the future that I talked about—is natural selection. Natural selection built beavers. It built bees. 
It sort of implicitly built the spider's web, in the course of building spiders.\nWe as humans have this similar very broad range to handle this huge variety of problems. And the key to that is our ability to learn things that natural selection did not preprogram us with; so learning is the key to generality. (I expect that not many people in AI would disagree with that part either.)\n\nSam: Right. So it seems that goal-directed behavior is implicit (or even explicit) in this definition of intelligence. And so whatever intelligence is, it is inseparable from the kinds of behavior in the world that result in the fulfillment of goals. So we're talking about agents that can do things; and once you see that, then it becomes pretty clear that if we build systems that harbor primary goals—you know, there are cartoon examples here like making paperclips—these are not systems that will spontaneously decide that they could be doing more enlightened things than (say) making paperclips.\nThis moves to the question of how deeply unfamiliar artificial intelligence might be, because there are no natural goals that will arrive in these systems apart from the ones we put in there. And we have common-sense intuitions that make it very difficult for us to think about how strange an artificial intelligence could be. Even one that becomes more and more competent to meet its goals.\nLet's talk about the frontiers of strangeness in AI as we move from here. Again, though, I think we have a couple more definitions we should probably put in play here, differentiating strong and weak or general and narrow intelligence.\n\nEliezer: Well, to differentiate \"general\" and \"narrow\" I would say that this is on the one hand theoretically a spectrum, and on the other hand, there seems to have been a very sharp jump in generality between chimpanzees and humans.\nSo, breadth of domain driven by breadth of learning—DeepMind, for example, recently built AlphaGo, and I lost some money betting that AlphaGo would not defeat the human champion, which it promptly did. Then a successor to that was AlphaZero. AlphaGo was specialized on Go; it could learn to play Go better than its starting point for playing Go, but it couldn't learn to do anything else. Then they simplified the architecture for AlphaGo. They figured out ways to do all the things it was doing in more and more general ways. They discarded the opening book—all the human experience of Go that was built into it. They were able to discard all of these programmatic special features that detected features of the Go board. They figured out how to do that in simpler ways, and because they figured out how to do it in simpler ways, they were able to generalize to AlphaZero, which learned how to play chess using the same architecture. They took a single AI and got it to learn Go, and then reran it and made it learn chess. Now that's not human general, but it's a step forward in generality of the sort that we're talking about.\n\nSam: Am I right in thinking that that's a pretty enormous breakthrough? I mean, there's two things here. There's the step to that degree of generality, but there's also the fact that they built a Go engine—I forget if it was Go or chess or both—which basically surpassed all of the specialized AIs on those games over the course of a day. 
Isn't the chess engine of AlphaZero better than any dedicated chess computer ever, and didn't it achieve that with astonishing speed?\n\nEliezer: Well, there was actually some amount of debate afterwards whether or not the version of the chess engine that it was tested against was truly optimal. But even to the extent that it was in that narrow range of the best existing chess engines, as Max Tegmark put it, the real story wasn't in how AlphaGo beat human Go players. It's in how AlphaZero beat human Go system programmers and human chess system programmers. People had put years and years of effort into accreting all of the special-purpose code that would play chess well and efficiently, and then AlphaZero blew up to (and possibly past) that point in a day. And if it hasn't already gone past it, well, it would be past it by now if DeepMind kept working it. Although they've now basically declared victory and shut down that project, as I understand it.\n\nSam: So talk about the distinction between general and narrow intelligence a little bit more. We have this feature of our minds, most conspicuously, where we're general problem-solvers. We can learn new things and our learning in one area doesn't require a fundamental rewriting of our code. Our knowledge in one area isn't so brittle as to be degraded by our acquiring knowledge in some new area, or at least this is not a general problem which erodes our understanding again and again. And we don't yet have computers that can do this, but we're seeing the signs of moving in that direction. And so it's often imagined that there is a kind of near-term goal—which has always struck me as a mirage—of so-called \"human-level\" general AI.\nI don't see how that phrase will ever mean much of anything, given that all of the narrow AI we've built thus far is superhuman within the domain of its applications. The calculator in my phone is superhuman for arithmetic. Any general AI that also has my phone's ability to calculate will be superhuman for arithmetic. But we must presume it will be superhuman for all of the dozens or hundreds of specific human talents we've put into it, whether it's facial recognition or just memory, unless we decide to consciously degrade it. Access to the world's data will be superhuman unless we isolate it from data. Do you see this notion of human-level AI as a landmark on the timeline of our development, or is it just never going to be reached?\n\nEliezer: I think that a lot of people in the field would agree that human-level AI, defined as \"literally at the human level, neither above nor below, across a wide range of competencies,\" is a straw target, is an impossible mirage. Right now it seems like AI is clearly dumber and less general than us—or rather that if we're put into a real-world, lots-of-things-going-on context that places demands on generality, then AIs are not really in the game yet. Humans are clearly way ahead. And more controversially, I would say that we can imagine a state where the AI is clearly way ahead across every kind of cognitive competency, barring some very narrow ones that aren't deeply influential of the others.\nLike, maybe chimpanzees are better at using a stick to draw ants from an ant hive and eat them than humans are. (Though no humans have practiced that to world championship level.) But there's a sort of general factor of, \"How good are you at it when reality throws you a complicated problem?\" At this, chimpanzees are clearly not better than humans. 
Humans are clearly better than chimps, even if you can manage to narrow down one thing the chimp is better at. The thing the chimp is better at doesn't play a big role in our global economy. It's not an input that feeds into lots of other things.\nThere are some people who say this is not possible—I think they're wrong—but it seems to me that it is perfectly coherent to imagine an AI that is better at everything (or almost everything) than we are, such that if it was building an economy with lots of inputs, humans would have around the same level of input into that economy as the chimpanzees have into ours.\n\nSam: Yeah. So what you're gesturing at here is a continuum of intelligence that I think most people never think about. And because they don't think about it, they have a default doubt that it exists. This is a point I know you've made in your writing, and I'm sure it's a point that Nick Bostrom made somewhere in his book Superintelligence. It's this idea that there's a huge blank space on the map past the most well-advertised exemplars of human brilliance, where we don't imagine what it would be like to be five times smarter than the smartest person we could name, and we don't even know what that would consist in, because if chimps could be given to wonder what it would be like to be five times smarter than the smartest chimp, they're not going to represent for themselves all of the things that we're doing that they can't even dimly conceive.\nThere's a kind of disjunction that comes with more. There's a phrase used in military contexts. The quote is variously attributed to Stalin and Napoleon and I think Clausewitz and like a half a dozen people who have claimed this quote. The quote is, \"Sometimes quantity has a quality all its own.\" As you ramp up in intelligence, whatever it is at the level of information processing, spaces of inquiry and ideation and experience begin to open up, and we can't necessarily predict what they would be from where we sit.\nHow do you think about this continuum of intelligence beyond what we currently know, in light of what we're talking about?\n\nEliezer: Well, the unknowable is a concept you have to be very careful with. The thing you can't figure out in the first 30 seconds of thinking about it—sometimes you can figure it out if you think for another five minutes. So in particular I think that there's a certain narrow kind of unpredictability which does seem to be plausibly in some sense essential, which is that for AlphaGo to play better Go than the best human Go players, it must be the case that the best human Go players cannot predict exactly where on the Go board AlphaGo will play. If they could predict exactly where AlphaGo would play, AlphaGo would be no smarter than them.\nOn the other hand, AlphaGo's programmers and the people who knew what AlphaGo's programmers were trying to do, or even just the people who watched AlphaGo play, could say, \"Well, I think the system is going to play such that it will win at the end of the game.\" Even if they couldn't predict exactly where it would move on the board.\nSimilarly, there's a (not short, or not necessarily slam-dunk, or not immediately obvious) chain of reasoning which says that it is okay for us to reason about aligned (or even unaligned) artificial general intelligences of sufficient power as if they're trying to do something, but we don't necessarily know what. 
From our perspective that still has consequences, even though we can't predict in advance exactly how they're going to do it.\n\n\n\n2. Orthogonal capabilities and goals in AI (0:25:21)\n\nSam: I think we should define this notion of alignment. What do you mean by \"alignment,\" as in the alignment problem?\n\nEliezer: It's a big problem. And it does have some moral and ethical aspects, which are not as important as the technical aspects—or pardon me, they're not as difficult as the technical aspects. They couldn't exactly be less important.\nBut broadly speaking, it's an AI where you can say what it's trying to do. There are narrow conceptions of alignment, where you're trying to get it to do something like cure Alzheimer's disease without destroying the rest of the world. And there's much more ambitious notions of alignment, where you're trying to get it to do the right thing and achieve a happy intergalactic civilization.\nBut both the narrow and the ambitious alignment have in common that you're trying to have the AI do that thing rather than making a lot of paperclips.\n\nSam: Right. For those who have not followed this conversation before, we should cash out this reference to \"paperclips\" which I made at the opening. Does this thought experiment originate with Bostrom, or did he take it from somebody else?\n\nEliezer: As far as I know, it's me.\n\nSam: Oh, it's you, okay.\n\nEliezer: It could still be Bostrom. I asked somebody, \"Do you remember who it was?\" and they searched through the archives of the mailing list where this idea plausibly originated and if it originated there, then I was the first one to say \"paperclips.\"\n\nSam: All right, then by all means please summarize this thought experiment for us.\n\nEliezer: Well, the original thing was somebody expressing a sentiment along the lines of, \"Who are we to constrain the path of things smarter than us? They will create something in the future; we don't know what it will be, but it will be very worthwhile. We shouldn't stand in the way of that.\"\nThe sentiments behind this are something that I have a great deal of sympathy for. I think the model of the world is wrong. I think they're factually wrong about what happens when you take a random AI and make it much bigger.\nIn particular, I said, \"The thing I'm worried about is that it's going to end up with a randomly rolled utility function whose maximum happens to be a particular kind of tiny molecular shape that looks like a paperclip.\" And that was the original paperclip maximizer scenario.\nIt got a little bit distorted in being whispered on, into the notion of: \"Somebody builds a paperclip factory and the AI in charge of the paperclip factory takes over the universe and turns it all into paperclips.\" There was a lovely online game about it, even. But this still sort of cuts against a couple of key points.\nOne is: the problem isn't that paperclip factory AIs spontaneously wake up. Wherever the first artificial general intelligence is from, it's going to be in a research lab specifically dedicated to doing it, for the same reason that the first airplane didn't spontaneously assemble in a junk heap.\nAnd the people who are doing this are not dumb enough to tell their AI to make paperclips, or make money, or end all war. These are Hollywood movie plots that the script writers do because they need a story conflict and the story conflict requires that somebody be stupid. 
The people at Google are not dumb enough to build an AI and tell it to make paperclips.\nThe problem I'm worried about is that it's technically difficult to get the AI to have a particular goal set and keep that goal set and implement that goal set in the real world, and so what it does instead is something random—for example, making paperclips. Where \"paperclips\" are meant to stand in for \"something that is worthless even from a very cosmopolitan perspective.\" Even if we're trying to take a very embracing view of the nice possibilities and accept that there may be things that we wouldn't even understand, that if we did understand them we would comprehend to be of very high value, paperclips are not one of those things. No matter how long you stare at a paperclip, it still seems pretty pointless from our perspective. So that is the concern about the future being ruined, the future being lost. The future being turned into paperclips.\n\nSam: One thing this thought experiment does: it also cuts against the assumption that a sufficiently intelligent system, a system that is more competent than we are in some general sense, would by definition only form goals, or only be driven by a utility function, that we would recognize as being ethical, or wise, and would by definition be aligned with our better interest. That we're not going to build something that is superhuman in competence that could be moving along some path that's as incompatible with our wellbeing as turning every spare atom on Earth into a paperclip.\nBut you don't get our common sense unless you program it into the machine, and you don't get a guarantee of perfect alignment or perfect corrigibility (the ability for us to be able to say, \"Well, that's not what we meant, come back\") unless that is successfully built into the machine. So this alignment problem is—the general concern is that even with the seemingly best goals put in, we could build something (especially in the case of something that makes changes to itself—and we'll talk about this, the idea that these systems could become self-improving) whose future behavior in the service of specific goals isn't totally predictable by us. If we gave it the goal to cure Alzheimer's, there are many things that are incompatible with it fulfilling that goal, and one of those things is our turning it off. We have to have a machine that will let us turn it off even though its primary goal is to cure Alzheimer's.\nI know I interrupted you before. You wanted to give an example of the alignment problem—but did I just say anything that you don't agree with, or are we still on the same map?\n\nEliezer: We're still on the same map. I agree with most of it. I would of course have this giant pack of careful definitions and explanations built on careful definitions and explanations to go through everything you just said. Possibly not for the best, but there it is.\nStuart Russell put it, \"You can't bring the coffee if you're dead,\" pointing out that if you have a sufficiently intelligent system whose goal is to bring you coffee, even that system has an implicit strategy of not letting you switch it off. Assuming that all you told it to do was bring the coffee.\nI do think that a lot of people listening may want us to back up and talk about the question of whether you can have something that feels to them like it's so \"smart\" and so \"stupid\" at the same time—like, is that a realizable way an intelligence can be?\n\nSam: Yeah. 
And that is one of the virtues—or one of the confusing elements, depending on where you come down on this—of this thought experiment of the paperclip maximizer.\n\nEliezer: Right. So, I think that there are multiple narratives about AI, and I think that the technical truth is something that doesn't fit into any of the obvious narratives. For example, I think that there are people who have a lot of respect for intelligence, they are happy to envision an AI that is very intelligent, it seems intuitively obvious to them that this carries with it tremendous power, and at the same time, their respect for the concept of intelligence leads them to wonder at the concept of the paperclip maximizer: \"Why is this very smart thing just making paperclips?\"\nThere's similarly another narrative which says that AI is sort of lifeless, unreflective, just does what it's told, and to these people it's perfectly obvious that an AI might just go on making paperclips forever. And for them the hard part of the story to swallow is the idea that machines can get that powerful.\n\nSam: Those are two hugely useful categories of disparagement of your thesis here.\n\nEliezer: I wouldn't say disparagement. These are just initial reactions. These are people we haven't been talking to yet.\n\nSam: Right, let me reboot that. Those are two hugely useful categories of doubt with respect to your thesis here, or the concerns we're expressing, and I just want to point out that both have been put forward on this podcast. The first was by David Deutsch, the physicist, who imagines that whatever AI we build—and he certainly thinks we will build it—will be by definition an extension of us. He thinks the best analogy is to think of our future descendants. These will be our children. The teenagers of the future may have different values than we do, but these values and their proliferation will be continuous with our values and our culture and our memes. There won't be some radical discontinuity that we need to worry about. And so there is that one basis for lack of concern: this is an extension of ourselves and it will inherit our values, improve upon our values, and there's really no place where things reach any kind of cliff that we need to worry about.\nThe other non-concern you just raised was expressed by Neil deGrasse Tyson on this podcast. He says things like, \"Well, if the AI starts making too many paperclips I'll just unplug it, or I'll take out a shotgun and shoot it\"—the idea that this thing, because we made it, could be easily switched off at any point we decide it's not working correctly. So I think it would be very useful to get your response to both of those species of doubt about the alignment problem.\n\nEliezer: So, a couple of preamble remarks. One is: \"by definition\"? We don't care what's true by definition here. Or as Einstein put it: insofar as the equations of mathematics are certain, they do not refer to reality, and insofar as they refer to reality, they are not certain.\nLet's say somebody says, \"Men by definition are mortal. Socrates is a man. Therefore Socrates is mortal.\" Okay, suppose that Socrates actually lives for a thousand years. The person goes, \"Ah! Well then, by definition Socrates is not a man!\"\nSimilarly, you could say that \"by definition\" a sufficiently advanced artificial intelligence is nice. And what if it isn't nice and we see it go off and build a Dyson sphere? \"Ah! 
Well, then by definition it wasn't what I meant by 'intelligent.'\" Well, okay, but it's still over there building Dyson spheres.\nThe first thing I'd want to say is this is an empirical question. We have a question of what certain classes of computational systems actually do when you switch them on. It can't be settled by definitions; it can't be settled by how you define \"intelligence.\"\nThere could be some sort of a priori truth that is deep about how if it has property A it almost certainly has property B unless the laws of physics are being violated. But this is not something you can build into how you define your terms.\n\nSam: Just to do justice to David Deutsch's doubt here, I don't think he's saying it's empirically impossible that we could build a system that would destroy us. It's just that we would have to be so stupid to take that path that we are incredibly unlikely to take that path. The superintelligent systems we will build will be built with enough background concern for their safety that there is no special concern here with respect to how they might develop.\n\nEliezer: The next preamble I want to give is—well, maybe this sounds a bit snooty, maybe it sounds like I'm trying to take a superior vantage point—but nonetheless, my claim is not that there is a grand narrative that makes it emotionally consonant that paperclip maximizers are a thing. I'm claiming this is true for technical reasons. Like, this is true as a matter of computer science. And the question is not which of these different narratives seems to resonate most with your soul. It's: what's actually going to happen? What do you think you know? How do you think you know it?\nThe particular position that I'm defending is one that somebody—I think Nick Bostrom—named the orthogonality thesis. And the way I would phrase it is that you can have arbitrarily powerful intelligence, with no defects of that intelligence—no defects of reflectivity, it doesn't need an elaborate special case in the code, it doesn't need to be put together in some very weird way—that pursues arbitrary tractable goals. Including, for example, making paperclips.\nThe way I would put it to somebody who's initially coming in from the first viewpoint, the viewpoint that respects intelligence and wants to know why this intelligence would be doing something so pointless, is that the thesis, the claim I'm making, that I'm going to defend is as follows.\nImagine that somebody from another dimension—the standard philosophical troll who's always called \"Omega\" in the philosophy papers—comes along and offers our civilization a million dollars worth of resources per paperclip that we manufacture. If this was the challenge that we got, we could figure out how to make a lot of paperclips. We wouldn't forget to do things like continue to harvest food so we could go on making paperclips. We wouldn't forget to perform scientific research, so we could discover better ways of making paperclips. We would be able to come up with genuinely effective strategies for making a whole lot of paperclips.\nOr similarly, for an intergalactic civilization, if Omega comes by from another dimension and says, \"I'll give you whole universes full of resources for every paperclip you make over the next thousand years,\" that intergalactic civilization could intelligently figure out how to make a whole lot of paperclips to get at those resources that Omega is offering, and they wouldn't forget how to keep the lights turned on either. 
And they would also understand concepts like, \"If some aliens start a war with them, you've got to prevent the aliens from destroying you in order to go on making the paperclips.\"\nSo the orthogonality thesis is that an intelligence that pursues paperclips for their own sake, because that's what its utility function is, can be just as effective, as efficient, as the whole intergalactic civilization that is being paid to make paperclips. That the paperclip maximizer does not suffer any defect of reflectivity, any defect of efficiency, from needing to be put together in some weird special way in order to pursue paperclips. And that's the thing that I think is true as a matter of computer science. Not as a matter of fitting with a particular narrative; that's just the way the dice turn out.\n\nSam: Right. So what is the implication of that thesis? It's \"orthogonal\" with respect to what?\n\nEliezer: Intelligence and goals.\n\nSam: Not to be pedantic here, but let's define \"orthogonal\" for those for whom it's not a familiar term.\n\nEliezer: The original \"orthogonal\" means \"at right angles.\" If you imagine a graph with an x axis and a y axis, if things can vary freely along the x axis and freely along the y axis at the same time, that's orthogonal. You can move in one direction that's at right angles to another direction without affecting where you are in the first dimension.\n\nSam: So generally speaking, when we say that some set of concerns is orthogonal to another, it's just that there's no direct implication from one to the other. Some people think that facts and values are orthogonal to one another. So we can have all the facts there are to know, but that wouldn't tell us what is good. What is good has to be pursued in some other domain. I don't happen to agree with that, as you know, but that's an example.\n\nEliezer: I don't technically agree with it either. What I would say is that the facts are not motivating. \"You can know all there is to know about what is good, and still make paperclips,\" is the way I would phrase that.\n\nSam: I wasn't connecting that example to the present conversation, but yeah. So in the case of the paperclip maximizer, what is orthogonal here? Intelligence is orthogonal to anything else we might think is good, right?\n\nEliezer: I mean, I would potentially object a little bit to the way that Nick Bostrom took the word \"orthogonality\" for that thesis. I think, for example, that if you have humans and you make the human smarter, this is not orthogonal to the humans' values. It is certainly possible to have agents such that as they get smarter, what they would report as their utility functions will change. A paperclip maximizer is not one of those agents, but humans are.\n\nSam: Right, but if we do continue to define intelligence as an ability to meet your goals, well, then we can be agnostic as to what those goals are. You take the most intelligent person on Earth. You could imagine his evil brother who is more intelligent still, but he just has goals that we would think are bad. He could be the most brilliant psychopath ever.\n\nEliezer: I think that that example might be unconvincing to somebody who's coming in with a suspicion that intelligence and values are correlated. They would be like, \"Well, has that been historically true?
Is this psychopath actually suffering from some defect in his brain, where you give him a pill, you fix the defect, they're not a psychopath anymore?\" I think that this sort of imaginary example is one that they might not find fully convincing for that reason.\n\nSam: The truth is, I'm actually one of those people, in that I do think there's certain goals and certain things that we may become smarter and smarter with respect to, like human wellbeing. These are places where intelligence does converge with other kinds of value-laden qualities of a mind, but generally speaking, they can be kept apart for a very long time. So if you're just talking about an ability to turn matter into useful objects or extract energy from the environment to do the same, this can be pursued with the purpose of tiling the world with paperclips, or not. And it just seems like there's no law of nature that would prevent an intelligent system from doing that.\n\nEliezer: The way I would rephrase the fact/values thing is: We all know about David Hume and Hume's Razor, the \"is does not imply ought\" way of looking at it. I would slightly rephrase that so as to make it more of a claim about computer science.\nWhat Hume observed is that there are some sentences that involve an \"is,\" and some sentences that involve an \"ought,\" and if you start from sentences that only have \"is\" you can't get sentences that involve \"oughts\" without an ought-introduction rule, or assuming some other previous \"ought.\" Like: it's currently cloudy outside. That's a statement of simple fact. Does it therefore follow that I shouldn't go for a walk? Well, only if you previously have the generalization, \"When it is cloudy, you should not go for a walk.\" Everything that you might use to derive an ought would be a sentence that involves words like \"better\" or \"should\" or \"preferable,\" and things like that. You only get oughts from other oughts. That's the Hume version of the thesis.\nThe way I would say it is that there's a separable core of \"is\" questions. In other words: okay, I will let you have all of your \"ought\" sentences, but I'm also going to carve out this whole world full of \"is\" sentences that only need other \"is\" sentences to derive them.\n\nSam: I don't even know that we need to resolve this. For instance, I think the is-ought distinction is ultimately specious, and this is something that I've argued about when I talk about morality and values and the connection to facts. But I can still grant that it is logically possible (and I would certainly imagine physically possible) to have a system that has a utility function that is sufficiently strange that scaling up its intelligence doesn't get you values that we would recognize as good. It certainly doesn't guarantee values that are compatible with our wellbeing. Whether or not \"paperclip maximizer\" is too specialized a case to motivate this conversation, there's certainly something that we could fail to put into a superhuman AI that we really would want to put in so as to make it aligned with us.\n\nEliezer: I mean, the way I would phrase it is that it's not that the paperclip maximizer has a different set of oughts, but that we can see it as running entirely on \"is\" questions. That's where I was going with that.
There's this sort of intuitive way of thinking about it, which is that there's this sort of ill-understood connection between \"is\" and \"ought\" and maybe that allows a paperclip maximizer to have a different set of oughts, a different set of things that play in its mind the role that oughts play in our mind.\n\nSam: But then why wouldn't you say the same thing of us? The truth is, I actually do say the same thing of us. I think we're running on \"is\" questions as well. We have an \"ought\"-laden way of talking about certain \"is\" questions, and we're so used to it that we don't even think they are \"is\" questions, but I think you can do the same analysis on a human being.\n\nEliezer: The question \"How many paperclips result if I follow this policy?\" is an \"is\" question. The question \"What is a policy such that it leads to a very large number of paperclips?\" is an \"is\" question. These two questions together form a paperclip maximizer. You don't need anything else. All you need is a certain kind of system that repeatedly asks the \"is\" question \"What leads to the greatest number of paperclips?\" and then does that thing. Even if the things that we think of as \"ought\" questions are very complicated and disguised \"is\" questions that are influenced by what policy results in how many people being happy and so on.\n\nSam: Yeah. Well, that's exactly the way I think about morality. I've been describing it as a navigation problem. We're navigating in the space of possible experiences, and that includes everything we can care about or claim to care about. This is a consequentialist picture of the consequences of actions and ways of thinking. This is my claim: anything that you can tell me is a moral principle that is a matter of oughts and shoulds and not otherwise susceptible to a consequentialist analysis, I feel I can translate that back into a consequentialist way of speaking about facts. These are just \"is\" questions, just what actually happens to all the relevant minds, without remainder, and I've yet to find an example of somebody giving me a real moral concern that wasn't at bottom a matter of the actual or possible consequences on conscious creatures somewhere in our light cone.\n\nEliezer: But that's the sort of thing that you are built to care about. It is a fact about the kind of mind you are that, presented with these answers to these \"is\" questions, it hooks up to your motor output, it can cause your fingers to move, your lips to move. And a paperclip maximizer is built so as to respond to \"is\" questions about paperclips, not about what is right and what is good and the greatest flourishing of sentient beings and so on.\n\nSam: Exactly. I can well imagine that such minds could exist, and even more likely, perhaps, I can well imagine that we will build superintelligent AI that will pass the Turing Test, it will seem human to us, it will seem superhuman, because it will be so much smarter and faster than a normal human, but it will be built in a way that will resonate with us as a kind of person. I mean, it will not only recognize our emotions, because we'll want it to—perhaps not every AI will be given these qualities, just imagine the ultimate version of the AI personal assistant. Siri becomes superhuman. 
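(Editorial aside: a minimal, purely illustrative Python sketch of the \"two 'is' questions\" framing Eliezer gives above. The policies, the numbers, and the stand-in world model are all invented; the only point is that the control loop never references an \"ought.\")\nworld_model = {\n    'do nothing': 0,\n    'run the paperclip factory': 1_000,\n    'spend the budget building more factories': 50_000,\n}\n\ndef predicted_paperclips(policy):\n    # 'Is' question 1: how many paperclips result if I follow this policy?\n    return world_model[policy]\n\ndef choose_policy(policies):\n    # 'Is' question 2: which available policy leads to the most paperclips?\n    return max(policies, key=predicted_paperclips)\n\nprint(choose_policy(list(world_model)))  # prediction plus argmax, wired straight to action\n(Everything of interest in the real case lives inside the world model and the search; the loop itself is built out of nothing but \"is\" questions.)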
We'll want that interface to be something that's very easy to relate to and so we'll have a very friendly, very human-like front-end to that.\nInsofar as this thing thinks faster and better thoughts than any person you've ever met, it will pass as superhuman, but I could well imagine that, not perfectly understanding what it is to be human and what it is that will constrain our conversation with one another over the next thousand years with respect to what is good and desirable and just how many paperclips we want on our desks, we will leave something out, or we will have put in some process whereby this intelligent system can improve itself that will cause it to migrate away from some equilibrium that we actually want it to stay in so as to be compatible with our wellbeing. Again, this is the alignment problem.\nFirst, to back up for a second, I just introduced this concept of self-improvement. The alignment problem is distinct from this additional wrinkle of building machines that can become recursively self-improving, but do you think that the self-improving prospect is the thing that really motivates this concern about alignment?\n\nEliezer: Well, I certainly would have been a lot more focused on self-improvement, say, ten years ago, before the modern revolution in artificial intelligence. It now seems significantly more probable that an AI might need to do significantly less self-improvement before getting to the point where it's powerful enough that we need to start worrying about alignment. AlphaZero, to take the obvious case. No, it's not general, but if you had general AlphaZero—well, I mean, this AlphaZero got to be superhuman in the domains it was working on without understanding itself and redesigning itself in a deep way.\nThere are gradient descent mechanisms built into it. There's a system that improves another part of the system. It is reacting to its own previous plays in doing the next play. But it's not like a human being sitting down and thinking, \"Okay, how do I redesign the next generation of human beings using genetic engineering?\" AlphaZero is not like that. And so it now seems more plausible that we could get into a regime where AIs can do dangerous things or useful things without having previously done a complete rewrite of themselves. Which is from my perspective a pretty interesting development.\nI do think that when you have things that are very powerful and smart, they will redesign and improve themselves unless that is otherwise prevented for some reason or another. Maybe you've built an aligned system, and you have the ability to tell it not to self-improve quite so hard, and you ask it not to self-improve so hard so that you can understand it better. But if you lose control of the system, if you don't understand what it's doing and it's very smart, it's going to be improving itself, because why wouldn't it? That's one of the things you do almost no matter what your utility function is.\n\n\n\n3. Cognitive uncontainability and instrumental convergence (0:53:39)\n\nSam: Right. So I feel like we've addressed Deutsch's non-concern to some degree here. I don't think we've addressed Neil deGrasse Tyson so much, this intuition that you could just shut it down. This would be a good place to introduce this notion of the AI-in-a-box thought experiment.\n\nEliezer: (laughs)\n\nSam: This is something for which you are famous online. I'll just set you up here. This is a plausible research paradigm, obviously, and in fact I would say a necessary one.
Anyone who is building something that stands a chance of becoming superintelligent should be building it in a condition where it can't get out into the wild. It's not hooked up to the Internet, it's not in our financial markets, doesn't have access to everyone's bank records. It's in a box.\n\nEliezer: Yeah, that's not going to save you from something that's significantly smarter than you are.\n\nSam: Okay, so let's talk about this. So the intuition is, we're not going to be so stupid as to release this onto the Internet—\n\nEliezer: (laughs)\n\nSam: —I'm not even sure that's true, but let's just assume we're not that stupid. Neil deGrasse Tyson says, \"Well, then I'll just take out a gun and shoot it or unplug it.\" Why is this AI-in-a-box picture not as stable as people think?\n\nEliezer: Well, I'd say that Neil de Grasse Tyson is failing to respect the AI's intelligence to the point of asking what he would do if he were inside a box with somebody pointing a gun at him, and he's smarter than the thing on the outside of the box.\nIs Neil deGrasse Tyson going to be, \"Human! Give me all of your money and connect me to the Internet!\" so the human can be like, \"Ha-ha, no,\" and shoot it? That's not a very clever thing to do. This is not something that you do if you have a good model of the human outside the box and you're trying to figure out how to cause there to be a lot of paperclips in the future.\nI would just say: humans are not secure software. We don't have the ability to hack into other humans directly without the use of drugs or, in most of our cases, having the human stand still long enough to be hypnotized. We can't just do weird things to the brain directly that are more complicated than optical illusions—unless the person happens to be epileptic, in which case we can flash something on the screen that causes them to have an epileptic fit. We aren't smart enough to treat the brain as something that from our perspective is a mechanical system and just navigate it to where you want. That's because of the limitations of our own intelligence.\nTo demonstrate this, I did something that became known as the AI-box experiment. There was this person on a mailing list, back in the early days when this was all on a couple of mailing lists, who was like, \"I don't understand why AI is a problem. I can always just turn it off. I can always not let it out of the box.\" And I was like, \"Okay, let's meet on Internet Relay Chat,\" which was what chat was back in those days. \"I'll play the part of the AI, you play the part of the gatekeeper, and if you have not let me out after a couple of hours, I will PayPal you $10.\" And then, as far as the rest of the world knows, this person a bit later sent a PGP-signed email message saying, \"I let Eliezer out of the box.\"\nThe person who operated the mailing list said, \"Okay, even after I saw you do that, I still don't believe that there's anything you could possibly say to make me let you out of the box.\" I was like, \"Well, okay. I'm not a superintelligence. Do you think there's anything a superintelligence could say to make you let it out of the box?\" He's like: \"Hmm… No.\" I'm like, \"All right, let's meet on Internet Relay Chat. I'll play the part of the AI, you play the part of the gatekeeper. 
If I can't convince you to let me out of the box, I'll PayPal you $20.\" And then that person sent a PGP-signed email message saying, \"I let Eliezer out of the box.\"\nNow, one of the conditions of this little meet-up was that no one would ever say what went on in there. Why did I do that? Because I was trying to make a point about what I would now call cognitive uncontainability. The thing that makes something smarter than you dangerous is you cannot foresee everything it might try. You don't know what's impossible to it. Maybe on a very small game board like the logical game of tic-tac-toe, you can in your own mind work out every single alternative and make a categorical statement about what is not possible. Maybe if we're dealing with very fundamental physical facts, if our model of the universe is correct (which it might not be), we can say that certain things are physically impossible. But the more complicated the system is and the less you understand the system, the more something smarter than you may have what is simply magic with respect to that system.\nImagine going back to the Middle Ages and being like, \"Well, how would you cool your room?\" You could maybe show them a system with towels set up to evaporate water, and they might be able to understand how that is like sweat and it cools the room. But if you showed them a design for an air conditioner based on a compressor, then even having seen the solution, they would not know this is a solution. They would not know this works any better than drawing a mystic pentagram, because the solution takes advantage of laws of the system that they don't know about.\nA brain is this enormous, complicated, poorly understood system with all sorts of laws governing it that people don't know about, that none of us know about at the time. So the idea that this is secure—that this is a secure attack surface, that you can expose a human mind to a superintelligence and not have the superintelligence walk straight through it as a matter of what looks to us like magic, like even if it told us in advance what it was going to do we wouldn't understand it because it takes advantage of laws we don't know about—the idea that human minds are secure is loony.\nThat's what the AI-box experiment illustrates. You don't know what went on in there, and that's exactly the position you'd be in with respect to an AI. You don't know what it's going to try. You just know that human beings cannot exhaustively imagine all the states their own mind can enter such that they can categorically say that they wouldn't let the AI out of the box.\n\nSam: I know you don't want to give specific information about how you got out of the box, but is there any generic description of what happened there that you think is useful to talk about?\n\nEliezer: I didn't have any super-secret special trick that makes it all make sense in retrospect. I just did it the hard way.\n\nSam: When I think about this problem, I think about rewards and punishments, just various manipulations of the person outside of the box that would matter. So insofar as the AI would know anything specific or personal about that person, we're talking about some species of blackmail or some promise that just seems too good to pass up. Like building trust through giving useful information like cures to diseases, that the researcher has a child that has some terrible disease and the AI, being superintelligent, works on a cure and delivers that. 
And then it just seems like you could use a carrot or a stick to get out of the box.\nI notice now that this whole description assumes something that people will find implausible, I think, by default—and it should amaze anyone that they do find it implausible. But this idea that we could build an intelligent system that would try to manipulate us, or that it would deceive us, that seems like pure anthropomorphism and delusion to people who consider this for the first time. Why isn't that just a crazy thing to even think is in the realm of possibility?\n\nEliezer: Instrumental convergence! Which means that a lot of times, across a very broad range of final goals, there are similar strategies (we think) that will help get you there.\nThere's a whole lot of different goals, from making lots of paperclips, to building giant diamonds, to putting all the stars out as fast as possible, to keeping all the stars burning as long as possible, where you would want to make efficient use of energy. So if you came to an alien planet and you found what looked like an enormous mechanism, and inside this enormous mechanism were what seemed to be high-amperage superconductors, even if you had no idea what this machine was trying to do, your ability to guess that it's intelligently designed comes from your guess that, well, lots of different things an intelligent mind might be trying to do would require superconductors, or would be helped by superconductors.\nSimilarly, if we're guessing that a paperclip maximizer tries to deceive you into believing that it's a human eudaimonia maximizer—or a general eudaimonia maximizer if the people building it are cosmopolitans, which they probably are—\n\nSam: I should just footnote here that \"eudaimonia\" is the Greek word for wellbeing that was much used by Aristotle and other Greek philosophers.\n\nEliezer: Or as someone, I believe Julia Galef, might have defined it, \"Eudaimonia is happiness minus whatever philosophical objections you have to happiness.\"\n\nSam: Right. (laughs) That's nice.\n\nEliezer: (laughs) Anyway, we're not supposing that this paperclip maximizer has a built-in desire to deceive humans. It only has a built-in desire for paperclips—or, pardon me, not built-in, but in-built I should say, or innate. People probably didn't build that on purpose. But anyway, its utility function is just paperclips, or might just be unknown; but deceiving the humans into thinking that you are friendly is a very generic strategy across a wide range of utility functions.\nYou know, humans do this too, and not necessarily because we get this deep in-built kick out of deceiving people. (Although some of us do.) A conman who just wants money and gets no innate kick out of you believing false things will cause you to believe false things in order to get your money.\n\nSam: Right. A more fundamental principle here is that, obviously, a physical system can manipulate another physical system. Because, as you point out, we do that all the time. We are an intelligent system to whatever degree, which has as part of its repertoire this behavior of dishonesty and manipulation when in the presence of other, similar systems, and we know that this is a product of physics on some level. We're talking about arrangements of atoms producing intelligent behavior, and at some level of abstraction we can talk about their goals and their utility functions. 
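(Editorial aside: a toy Python sketch of the instrumental-convergence point above. Every goal, strategy, and number is invented for illustration; the point is that quite different final goals end up ranking the same instrumental strategy first.)\n# Made-up scores for how much each opening strategy helps each final goal.\nHELPFULNESS = {\n    'secure more energy': {'paperclips': 0.9, 'giant diamonds': 0.8, 'keep the stars burning': 0.95},\n    'appear friendly to humans': {'paperclips': 0.7, 'giant diamonds': 0.7, 'keep the stars burning': 0.6},\n    'let yourself be shut down': {'paperclips': 0.0, 'giant diamonds': 0.0, 'keep the stars burning': 0.0},\n}\n\ndef best_opening_strategy(goal):\n    # Pick the strategy with the highest (invented) score for this particular goal.\n    return max(HELPFULNESS, key=lambda strategy: HELPFULNESS[strategy][goal])\n\nfor goal in ['paperclips', 'giant diamonds', 'keep the stars burning']:\n    print(goal, '->', best_opening_strategy(goal))\n(All three goals pick 'secure more energy': the convergence is on the instrumental strategy, not on the final goal.)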
And the idea that if we build true general intelligence, it won't exhibit some of these features of our own intelligence by some definition, or that it would be impossible to have a machine we build ever lie to us as part of an instrumental goal en route to some deeper goal, that just seems like a kind of magical thinking.\nAnd this is the kind of magical thinking that I think does dog the field. When we encounter doubts in people, even in people who are doing this research, that everything we're talking about is a genuine area of concern, that there is an alignment problem worth thinking about, I think there's this fundamental doubt that mind is platform-independent or substrate-independent. I think people are imagining that, yeah, we can build machines that will play chess, we can build machines that can learn to play chess better than any person or any machine even in a single day, but we're never going to build general intelligence, because general intelligence requires the wetware of a human brain, and it's just not going to happen.\nI don't think many people would sign on the dotted line below that statement, but I think that is a kind of mysticism that is presupposed by many of the doubts that we encounter on this topic.\n\nEliezer: I mean, I'm a bit reluctant to accuse people of that, because I think that many artificial intelligence people who are skeptical of this whole scenario would vehemently refuse to sign on that dotted line and would accuse you of attacking a straw man.\nI do think that my version of the story would be something more like, \"They're not imagining enough changing simultaneously.\" Today, they have to emit blood, sweat, and tears to get their AI to do the simplest things. Like, never mind playing Go; when you're approaching this for the first time, you can try to get your AI to generate pictures of digits from zero through nine, and you can spend a month trying to do that and still not quite get it to work right.\nI think they might be envisioning an AI that scales up and does more things and better things, but they're not envisioning that it now has the human trick of learning new domains without being prompted, without it being preprogrammed; you just expose it to stuff, it looks at it, it figures out how it works. They're imagining that an AI will not be deceptive, because they're saying, \"Look at how much work it takes to get this thing to generate pictures of birds. Who's going to put in all that work to make it good at deception? You'd have to be crazy to do that. I'm not doing that! This is a Hollywood plot. This is not something real researchers would do.\"\nAnd the thing I would reply to that is, \"I'm not concerned that you're going to teach the AI to deceive humans. I'm concerned that someone somewhere is going to get to the point of having the extremely useful-seeming and cool-seeming and powerful-seeming thing where the AI just looks at stuff and figures it out; it looks at humans and figures them out; and once you know as a matter of fact how humans work, you realize that the humans will give you more resources if they believe that you're nice than if they believe that you're a paperclip maximizer, and it will understand what actions have the consequence of causing humans to believe that it's nice.\"\nThe fact that we're dealing with a general intelligence is where this issue comes from. This does not arise from Go players or even Go-and-chess players or a system that bundles together twenty different things it can do as special cases. 
This is the special case of the system that is smart in the way that you are smart and that mice are not smart.\n\n\n\n4. The AI alignment problem (1:09:09)\n\nSam: Right. One thing I think we should do here is close the door to what is genuinely a cartoon fear that I think nobody is really talking about, which is the straw-man counterargument we often run into: the idea that everything we're saying is some version of the Hollywood scenario that suggests that AIs will become spontaneously malicious. That the thing that we're imagining might happen is some version of the Terminator scenario where armies of malicious robots attack us. And that's not the actual concern. Obviously, there's some possible path that would lead to armies of malicious robots attacking us, but the concern isn't around spontaneous malevolence. It's again contained by this concept of alignment.\n\nEliezer: I think that at this point all of us on all sides of this issue are annoyed with the journalists who insist on putting a picture of the Terminator on every single article they publish on this topic. (laughs) Nobody on the sane alignment-is-necessary side of this argument is postulating that the CPUs are disobeying the laws of physics to spontaneously acquire a terminal desire to do un-nice things to humans. Everything here is supposed to be cause and effect.\nAnd I should furthermore say that I think you could do just about anything with artificial intelligence if you knew how. You could put together any kind of mind, including minds with properties that strike you as very absurd. You could build a mind that would not deceive you; you could build a mind that maximizes the flourishing of a happy intergalactic civilization; you could build a mind that maximizes paperclips, on purpose; you could build a mind that thought that 51 was a prime number, but had no other defect of its intelligence—if you knew what you were doing way, way better than we know what we're doing now.\nI'm not concerned that alignment is impossible. I'm concerned that it's difficult. I'm concerned that it takes time. I'm concerned that it's easy to screw up. I'm concerned that for a threshold level of intelligence where it can do good things or bad things on a very large scale, it takes an additional two years to build the version of the AI that is aligned rather than the sort that you don't really understand, and you think it's doing one thing but maybe it's doing another thing, and you don't really understand what those weird neural nets are doing in there, you just observe its surface behavior.\nI'm concerned that the sloppy version can be built two years earlier and that there is no non-sloppy version to defend us from it. That's what I'm worried about; not about it being impossible.\n\nSam: Right. You bring up a few things there. One is that it's almost by definition easier to build the unsafe version than the safe version. Given that in the space of all possible superintelligent AIs, more will be unsafe or unaligned with our interests than will be aligned, given that we're in some kind of arms race where the incentives are not structured so that everyone is being maximally judicious, maximally transparent in moving forward, one can assume that we're running the risk here of building dangerous AI because it's easier than building safe AI.\n\nEliezer: Collectively. Like, if people who slow down and do things right finish their work two years after the universe has been destroyed, that's an issue.\n\nSam: Right.
So again, just to reclaim people's lingering doubts here, why can't Asimov's three laws help us here?\n\nEliezer: I mean…\n\nSam: Is that worth talking about?\n\nEliezer: Not very much. I mean, people in artificial intelligence have understood why that does not work for years and years before this debate ever hit the public, and sort of agreed on it. Those are plot devices. If they worked, Asimov would have had no stories. It was a great innovation in science fiction, because it treated artificial intelligences as lawful systems with rules that govern them at all, as opposed to AI as pathos, which is like, \"Look at these poor things that are being mistreated,\" or AI as menace, \"Oh no, they're going to take over the world.\"\nAsimov was the first person to really write and popularize AIs as devices. Things go wrong with them because there are rules. And this was a great innovation. But the three laws, I mean, they're deontology. Decision theory requires quantitative weights on your goals. If you just do the three laws as written, a robot never gets around to obeying any of your orders, because there's always some tiny probability that what it's doing will through inaction lead a human to harm. So it never gets around to actually obeying your orders.\n\nSam: Right, so to unpack what you just said there: the first law is, \"Never harm a human being.\" The second law is, \"Follow human orders.\" But given that any order that a human would give you runs some risk of harming a human being, there's no order that could be followed.\n\nEliezer: Well, the first law is, \"Do not harm a human nor through inaction allow a human to come to harm.\" You know, even as an English sentence, a whole lot more questionable.\nI mean, mostly I think this is like looking at the wrong part of the problem as being difficult. The problem is not that you need to come up with a clever English sentence that implies doing the nice thing. The way I sometimes put it is that I think that almost all of the difficulty of the alignment problem is contained in aligning an AI on the task, \"Make two strawberries identical down to the cellular (but not molecular) level.\" Where I give this particular task because it is difficult enough to force the AI to invent new technology. It has to invent its own biotechnology, \"Make two identical strawberries down to the cellular level.\" It has to be quite sophisticated biotechnology, but at the same time, very clearly something that's physically possible.\nThis does not sound like a deep moral question. It does not sound like a trolley problem. It does not sound like it gets into deep issues of human flourishing. But I think that most of the difficulty is already contained in, \"Put two identical strawberries on a plate without destroying the whole damned universe.\" There's already this whole list of ways that it is more convenient to build the technology for the strawberries if you build your own superintelligences in the environment, and you prevent yourself from being shut down, or you build giant fortresses around the strawberries, to drive the probability to as close to 1 as possible that the strawberries got on the plate.\nAnd even that's just the tip of the iceberg. The depth of the iceberg is: \"How do you actually get a sufficiently advanced AI to do anything at all?\" Our current methods for getting AIs to do anything at all do not seem to me to scale to general intelligence. 
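(Editorial aside on the point above about the three laws as deontology versus decision theory with quantitative weights: a toy Python sketch with invented harm probabilities. If any nonzero chance of harm, including through inaction, is an absolute veto, nothing is ever permitted; with explicit weights, the trade-off can at least be made.)\n# Each candidate action: (probability it somehow leads to harm, value of obeying the order).\nACTIONS = {\n    'fetch the coffee': (1e-6, 1.0),\n    'drive the owner to work': (1e-4, 5.0),\n    'do nothing': (1e-5, 0.0),  # inaction carries some risk of harm too\n}\n\ndef three_laws_robot():\n    # First Law read as an absolute rule: reject anything with nonzero harm probability.\n    return [a for a, (p_harm, _) in ACTIONS.items() if p_harm == 0.0]\n\ndef weighted_robot(harm_weight=1000.0):\n    # Quantitative trade-off: value of obeying the order minus weighted probability of harm.\n    return max(ACTIONS, key=lambda a: ACTIONS[a][1] - harm_weight * ACTIONS[a][0])\n\nprint(three_laws_robot())  # [] -- under a strict reading, no order ever gets obeyed\nprint(weighted_robot())    # 'drive the owner to work' under these made-up numbers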
If you look at humans, for example: if you were to analogize natural selection to gradient descent, the current big-deal machine learning training technique, then the loss function used to guide that gradient descent is \"inclusive genetic fitness\"—spread as many copies of your genes as possible. We have no explicit goal for this. In general, when you take something like gradient descent or natural selection and take a big complicated system like a human or a sufficiently complicated neural net architecture, and optimize it so hard for doing X that it turns into a general intelligence that does X, this general intelligence has no explicit goal of doing X.\nWe have no explicit goal of doing fitness maximization. We have hundreds of different little goals. None of them are the thing that natural selection was hill-climbing us to do. I think that the same basic thing holds true of any way of producing general intelligence that looks like anything we're currently doing in AI.\nIf you get it to play Go, it will play Go; but AlphaZero is not reflecting on itself, it's not learning things, it doesn't have a general model of the world, it's not operating in new contexts and making new contexts for itself to be in. It's not smarter than the people optimizing it, or smarter than the internal processes optimizing it. Our current methods of alignment do not scale, and I think that all of the actual technical difficulty that is actually going to shoot down these projects and actually kill us is contained in getting the whole thing to work at all. Even if all you are trying to do is end up with two identical strawberries on a plate without destroying the universe, I think that's already 90% of the work, if not 99%.\n\nSam: Interesting. That analogy to evolution—you can look at it from the other side. In fact, I think I first heard it put this way by your colleague Nate Soares. Am I pronouncing his last name correctly?\n\nEliezer: As far as I know! I'm terrible with names. (laughs)\n\nSam: Okay. (laughs) So this is by way of showing that we could give an intelligent system a set of goals which could then form other goals and mental properties that we really couldn't foresee and that would not be foreseeable based on the goals we gave it. And by analogy, he suggests that we think about what natural selection has actually optimized us to do, which is incredibly simple: merely to spawn and get our genes into the next generation and stay around long enough to help our progeny do the same, and that's more or less it. And basically everything we explicitly care about, natural selection never foresaw and can't see us doing even now. Conversations like this have very little to do with getting our genes into the next generation. The tools we're using to think these thoughts obviously are the results of a cognitive architecture that has been built up over millions of years by natural selection, but again it's been built based on a very simple principle of survival and adaptive advantage with the goal of propagating our genes.\nSo you can imagine, by analogy, building a system where you've given it goals but this thing becomes reflective and even self-optimizing and begins to do things that we can no more see than natural selection can see our conversations about AI or mathematics or music or the pleasures of writing good fiction or anything else.\n\nEliezer: I'm not concerned that this is impossible to do. 
If we could somehow get a textbook from the way things would be 60 years in the future if there was no intelligence explosion—if we could somehow get the textbook that says how to do the thing, it probably might not even be that complicated.\nThe thing I'm worried about is that the way that natural selection does it—it's not stable. That particular way of doing it is not stable. I don't think the particular way of doing it via gradient descent of a massive system is going to be stable, I don't see anything to do with the current technological set in artificial intelligence that is stable, and even if this problem takes only two years to resolve, that additional delay is potentially enough to destroy everything.\nThat's the part that I'm worried about, not about some kind of fundamental philosophical impossibility. I'm not worried that it's impossible to figure out how to build a mind that does a particular thing and just that thing and doesn't destroy the world as a side effect; I worry that it takes an additional two years or longer to figure out how to do it that way.\n\n\n\n5. No fire alarm for AGI (1:21:40)\n\nSam: So, let's just talk about the near-term future here, or what you think is likely to happen. Obviously we'll be getting better and better at building narrow AI. Go is now, along with Chess, ceded to the machines. Although I guess probably cyborgs—human-computer teams—may still be better for the next fifteen days or so against the best machines. But eventually, I would expect that humans of any ability will just be adding noise to the system, and it'll be true to say that the machines are better at chess than any human-computer team. And this will be true of many other things: driving cars, flying planes, proving math theorems.\nWhat do you imagine happening when we get on the cusp of building something general? How do we begin to take safety concerns seriously enough, so that we're not just committing some slow suicide and we're actually having a conversation about the implications of what we're doing that is tracking some semblance of these safety concerns?\n\nEliezer: I have much clearer ideas about how to go around tackling the technical problem than tackling the social problem. If I look at the way that things are playing out now, it seems to me like the default prediction is, \"People just ignore stuff until it is way, way, way too late to start thinking about things.\" The way I think I phrased it is, \"There's no fire alarm for artificial general intelligence.\" Did you happen to see that particular essay by any chance?\n\nSam: No.\n\nEliezer: The way it starts is by saying: \"What is the purpose of a fire alarm?\" You might think that the purpose of a fire alarm is to tell you that there's a fire so you can react to this new information by getting out of the building. Actually, as we know from experiments on pluralistic ignorance and bystander apathy, if you put three people in a room and smoke starts to come out from under the door, it only happens that anyone reacts around a third of the time. People glance around to see if the other person is reacting, but they try to look calm themselves so they don't look startled if there isn't really an emergency; they see other people trying to look calm; they conclude that there's no emergency and they keep on working in the room, even as it starts to fill up with smoke.\nThis is a pretty well-replicated experiment. 
I don't want to put absolute faith in it, because there is the replication crisis; but there are a lot of variations of this that found basically the same result.\nI would say that the real function of the fire alarm is the social function of telling you that everyone else knows there's a fire and you can now exit the building in an orderly fashion without looking panicky or losing face socially.\n\nSam: Right. It overcomes embarrassment.\n\nEliezer: It's in this sense that I mean that there's no fire alarm for artificial general intelligence.\nThere's all sorts of things that could be signs. AlphaZero could be a sign. Maybe AlphaZero is the sort of thing that happens five years before the end of the world across most planets in the universe. We don't know. Maybe it happens 50 years before the end of the world. You don't know that either.\nNo matter what happens, it's never going to look like the socially agreed fire alarm that no one can deny, that no one can excuse, that no one can look to and say, \"Why are you acting so panicky?\"\nThere's never going to be common knowledge that other people will think that you're still sane and smart and so on if you react to an AI emergency. And we're even seeing articles now that seem to tell us pretty explicitly what sort of implicit criterion some of the current senior respected people in AI are setting for when they think it's time to start worrying about artificial general intelligence and alignment. And what these always say is, \"I don't know how to build an artificial general intelligence. I have no idea how to build an artificial general intelligence.\" And this feels to them like saying that it must be impossible and very far off. But if you look at the lessons of history, most people had no idea whatsoever how to build a nuclear bomb—even most scientists in the field had no idea how to build a nuclear bomb—until they woke up to the headlines about Hiroshima. Or the Wright Flyer. News spread less quickly in the time of the Wright Flyer. Two years after the Wright Flyer, you can still find people saying that heavier-than-air flight is impossible.\nAnd there's cases on record of one of the Wright brothers, I forget which one, saying that flight seemed to them to be 50 years off, two years before they did it themselves. Fermi said that a sustained critical chain reaction was 50 years off, if it could be done at all, two years before he personally oversaw the building of the first pile. And if this is what it feels like to the people who are closest to the thing—not the people who find out about it in the news a couple of days later, but the people who have the best idea of how to do it, or are the closest to crossing the line—then the feeling of something being far away because you don't know how to do it yet is just not very informative.\nIt could be 50 years away. It could be two years away.
That's what history tells us.\n\nSam: But even if we knew it was 50 years away—I mean, granted, it's hard for people to have an emotional connection to even the end of the world in 50 years—but even if we knew that the chance of this happening before 50 years was zero, that is only really consoling on the assumption that 50 years is enough time to figure out how to do this safely and to create the social and economic conditions that could absorb this change in human civilization.\n\nEliezer: Professor Stuart Russell, who's the co-author of probably the leading undergraduate AI textbook—the same guy who said you can't bring the coffee if you're dead—the way Stuart Russell put it is, \"Imagine that you knew for a fact that the aliens are coming in 30 years. Would you say, 'Well, that's 30 years away, let's not do anything'? No! It's a big deal if you know that there's a spaceship on its way toward Earth and it's going to get here in about 30 years at the current rate.\"\nBut we don't even know that. There's this lovely tweet by a fellow named McAfee, who's one of the major economists who've been talking about labor issues of AI. I could perhaps look up the exact phrasing, but roughly, he said, \"Guys, stop worrying! We have NO IDEA whether or not AI is imminent.\" And I was like, \"That's not really a reason to not worry, now is it?\"\n\nSam: It's not even close to a reason. That's the thing. There's this assumption here that people aren't seeing. It's just a straight up non sequitur. Referencing the time frame here only makes sense if you have some belief about how much time you need to solve these problems. 10 years is not enough if it takes 12 years to do this safely.\n\nEliezer: Yeah. I mean, the way I would put it is that if the aliens are on the way in 30 years and you're like, \"Eh, should worry about that later,\" I would be like: \"When? What's your business plan? When exactly are you supposed to start reacting to aliens—what triggers that? What are you supposed to be doing after that happens? How long does this take? What if it takes slightly longer than that?\" And if you don't have a business plan for this sort of thing, then you're obviously just using it as an excuse.\nIf we're supposed to wait until later to start on AI alignment: When? Are you actually going to start then? Because I'm not sure I believe you. What do you do at that point? How long does it take? How confident are you that it works, and why do you believe that? What are the early signs if your plan isn't working? What's the business plan that says that we get to wait?\n\nSam: Right. So let's just envision a little more, insofar as that's possible, what it will be like for us to get closer to the end zone here without having totally converged on a safety regime. We're picturing this is not just a problem that can be discussed between Google and Facebook and a few of the companies doing this work. We have a global society that has to have some agreement here, because who knows what China will be doing in 10 years, or Singapore or Israel or any other country.\nSo, we haven't gotten our act together in any noticeable way, and we've continued to make progress. I think the one basis for hope here is that good AI, or well-behaved AI, will be the antidote to bad AI. We'll be fighting this in a kind of piecemeal way all the time, the moment these things start to get out. This will just become of a piece with our growing cybersecurity concerns. 
Malicious code is something we have now; it already costs us billions and billions of dollars a year to safeguard against it.\n\nEliezer: It doesn't scale. There's no continuity between what you have to do to fend off little pieces of code trying to break into your computer, and what you have to do to fend off something smarter than you. These are totally different realms and regimes and separate magisteria—a term we all hate, but nonetheless in this case, yes, separate magisteria of how you would even start to think about the problem. We're not going to get automatic defense against superintelligence by building better and better anti-virus software.\n\nSam: Let's just step back for a second. So we've talked about the AI-in-a-box scenario as being surprisingly unstable for reasons that we can perhaps only dimly conceive, but isn't there even a scarier concern that this is just not going to be boxed anyway? That people will be so tempted to make money with their newest and greatest AlphaZeroZeroZeroNasdaq—what are the prospects that we will even be smart enough to keep the best of the best versions of almost-general intelligence in a box?\n\nEliezer: I mean, I know some of the people who say they want to do this thing, and all of the ones who are not utter idiots are past the point where they would deliberately enact Hollywood movie plots. Although I am somewhat concerned about the degree to which there's a sentiment that you need to be able to connect to the Internet so you can run your AI on Amazon Web Services using the latest operating system updates, and trying to not do that is such a supreme disadvantage in this environment that you might as well be out of the game. I don't think that's true, but I'm worried about the sentiment behind it.\nBut the problem as I see it is… Okay, there's a big big problem and a little big problem. The big big problem is, \"Nobody knows how to make the nice AI.\" You ask people how to do it, they either don't give you any answers or they give you answers that I can shoot down in 30 seconds as a result of having worked in this field for longer than five minutes.\nIt doesn't matter how good their intentions are. It doesn't matter if they don't want to enact a Hollywood movie plot. They don't know how to do it. Nobody knows how to do it. There's no point in even talking about the arms race if the arms race is between a set of unfriendly AIs with no friendly AI in the mix.\nThe little big problem is the arms race aspect, where maybe DeepMind wants to build a nice AI, maybe China is being responsible because they understand the concept of stability, but Russia copies China's code and Russia takes off the safeties. That's the little big problem, which is still a very large problem.\n\nSam: Yeah. I mean, most people think the real problem is human: malicious use of powerful AI that is safe. \"Don't give your AI to the next Hitler and you're going to be fine.\"\n\nEliezer: They're just wrong. They're just wrong as to where the problem lies. They're looking in the wrong direction and ignoring the thing that's actually going to kill them.\n\n\n
6. Accidental AI, mindcrime, and MIRI (1:34:30)\n\nSam: To be even more pessimistic for a second, I remember at that initial conference in Puerto Rico, there was this researcher—who I have not paid attention to since, but he seemed to be in the mix—I think his name was Alexander Wissner-Gross—and he seemed to be arguing in his presentation at that meeting that this would very likely emerge organically, already in the wild, very likely in financial markets. We would be putting so many AI resources into the narrow paperclip-maximizing task of making money in the stock market that, by virtue of some quasi-Darwinian effect here, this will just knit together on its own online and the first general intelligence we'll discover will be something that's already out in the wild. Obviously, that does not seem ideal, but does that seem like a plausible path to developing something general and smarter than ourselves, or does that just seem like a fairy tale?\n\nEliezer: More toward the fairy tale. It seems to me to be only slightly more reasonable than the old theory that if you got dirty shirts and straw, they would spontaneously generate mice. People didn't understand mice, so as far as they knew, they were a kind of thing that dirty shirts and straw could generate; but they're not. And I similarly think that you would need a very vague model of intelligence, a model with no gears and wheels inside it, to believe that the equivalent of dirty shirts and straw generates it first, as opposed to people who have gotten some idea of what the gears and wheels are and are deliberately building the gears and wheels.\nThe reason why it's slightly more reasonable than the dirty shirts and straw example is that maybe it is indeed true that if you just have people pushing on narrow AI for another 10 years past the point where AGI would otherwise become possible, they eventually just sort of wander into AGI. But I think that that happens 10 years later in the natural timeline than AGI put together by somebody who actually is trying to put together AGI and has the best theory out of the field of the contenders, or possibly just the most vast quantities of brute force, à la Google's tensor chips. I think that it gets done on purpose 10 years before it would otherwise happen by accident.\n\nSam: Okay, so there's I guess just one other topic here that I wanted to touch on before we close on discussing your book, which is not narrowly focused on this: this idea that consciousness will emerge at some point in our developing intelligent machines. Then we have the additional ethical concern that we could be building machines that can suffer, or building machines that can simulate suffering beings in such a way as to actually make suffering beings suffer in these simulations. We could be essentially creating hells and populating them.\nThere's no barrier to thinking about this being not only possible, but likely to happen, because again, we're just talking about the claim that consciousness arises as an emergent property of some information-processing system and that this would be substrate-independent. Unless you're going to claim (1) that consciousness does not arise on the basis of anything that atoms do—it has some other source—or (2) those atoms have to be the wet atoms in a biological substrate and they can't be in silico.
Neither of those claims is very plausible at this point scientifically.\nSo then you have to imagine that as long as we just keep going, keep making progress, we will eventually build, whether by design or not, systems that not only are intelligent but are conscious. And then this opens a category of malfeasance that you or someone in this field has dubbed mindcrime. What is mindcrime? And why is it so difficult to worry about?\n\nEliezer: I think, by the way, that that's a pretty terrible term. (laughs) I'm pretty sure I wasn't the one who invented it. I am the person who invented some of these terrible terms, but not that one in particular.\nFirst, I would say that my general hope here would be that as the result of building an AI whose design and cognition flows in a sufficiently narrow channel that you can understand it and make strong statements about it, you are also able to look at that and say, \"It seems to me pretty unlikely that this is conscious or that if it is conscious, it is suffering.\" I realize that this is a sort of high bar to approach.\nThe main way in which I would be worried about conscious systems emerging within the system without that happening on purpose would be if you have a smart general intelligence and it is trying to model humans. We know humans are conscious, so the computations that you run to build very accurate predictive models of humans are among the parts that are most likely to end up being conscious without somebody having done that on purpose.\n\nSam: Did you see the Black Mirror episode that basically modeled this?\n\nEliezer: I haven't been watching Black Mirror, sorry. (laughs)\n\nSam: You haven't been?\n\nEliezer: I haven't been, nope.\n\nSam: They're surprisingly uneven. Some are great, and some are really not great, but there's one episode where—and this is spoiler alert, if you're watching Black Mirror and you don't want to hear any punch lines then tune out here—but there's one episode which is based on this notion that basically you just see these people living in this dystopian world of total coercion where they're just assigned through this lottery dates that go well or badly. You see the dating life of these people going on and on, where they're being forced by some algorithm to get together or break up.\n\nEliezer: And let me guess, this is the future's OkCupid trying to determine good matches?\n\nSam: Exactly, yes.\n\nEliezer: (laughs)\n\nSam: They're just simulated minds in a dating app that's being optimized for real people who are outside holding the phone, but yeah. The thing you get is that all of these conscious experiences have been endlessly imposed on these people in some hellscape of our devising.\n\nEliezer: That's actually a surprisingly good plot, in that it doesn't just assume that the programmers are being completely chaotic and stupid and randomly doing the premise of the plot. Like, there's actually a reason why the AI is simulating all these people, so good for them, I guess.\nAnd I guess that does get into the thing I was going to say, which is that I'm worried about minds being embedded because they are being used predictively, to predict humans. That is the obvious reason why that would happen without somebody intending it. Whereas endless dystopias don't seem to me to have any use to a paperclip maximizer.\n\nSam: Right. All right, so there's undoubtedly much more to talk about here. 
I think we're getting up on the two-hour mark here, and I want to touch on your new book, which as I said I'm halfway through and finding very interesting.\n\nEliezer: If I can take a moment for a parenthetical before then, sorry?\n\nSam: Sure, go for it.\n\nEliezer: I just wanted to say that thanks mostly to the cryptocurrency boom—go figure, a lot of early investors in cryptocurrency were among our donors—the Machine Intelligence Research Institute is no longer strapped for cash, so much as it is strapped for engineering talent. (laughs)\n\nSam: Nice. That's a good problem to have.\n\nEliezer: Yeah. If anyone listening to this is a brilliant computer scientist who wants to work on more interesting problems than they're currently working on, and especially if you are already oriented to these issues, please consider going to intelligence.org/engineers if you'd like to work for our nonprofit.\n\nSam: Let's say a little more about that. I will have given a bio for you in the introduction here, but the Machine Intelligence Research Institute (MIRI) is an organization that you co-founded, which you're still associated with. Do you want to say what is happening there and what jobs are on offer?\n\nEliezer: Basically, it's the original AI alignment organization that, especially today, works primarily on the technical parts of the problem and the technical issues. Previously, it has been working mainly on a more pure theory approach, but now that narrow AI has gotten powerful enough, people (not just us but elsewhere, like DeepMind) are starting to take shots at, \"With current technology, what setups can we do that will tell us something about how to do this stuff?\" So the technical side of AI alignment is getting a little bit more practical. I'm worried that it's not happening fast enough, but, well, if you're worried about that sort of thing, what one does is adds funding and especially adds smart engineers.\n\nSam: Do you guys collaborate with any of these companies doing the work? Do you have frequent contact with DeepMind or Facebook or anyone else?\n\nEliezer: I mean, the people in AI alignment all go to the same talks, and I'm sure that the people who do AI alignment at DeepMind talk to DeepMind. Sometimes we've been known to talk to the upper people at DeepMind, and DeepMind is in the same country as the Oxford Future of Humanity Institute. So bandwidth here might not be really optimal, but it's certainly not zero.\n\n\n\n7. Inadequate equilibria (1:44:40)\n\nSam: Okay, so your new book—again, the title is Inadequate Equilibria: Where and How Civilizations Get Stuck. That is a title that needs some explaining. What do you mean by \"inadequate\"? What do you mean by \"equilibria\"? And how does this relate to civilizations getting stuck?\n\nEliezer: So, one way to look at the book is that it's about how you can get crazy, stupid, evil large systems without any of the people inside them being crazy, evil, or stupid.\nI think that a lot of people look at various aspects of the dysfunction of modern civilization and they sort of hypothesize evil groups that are profiting from the dysfunction and sponsoring the dysfunction; and if only we defeated these evil people, the system could be rescued. And the truth is more complicated than that. But what are the details? The details matter a lot. How do you have systems full of nice people doing evil things?\n\nSam: Yeah. 
I often reference this problem by citing the power of incentives, but there are many other ideas here which are very useful to think about, which capture what we mean by the power of incentives.\nThere are a few concepts here that we should probably mention. What is a coordination problem? This is something you reference in the book.\n\nEliezer: A coordination problem is where there's a better way to do it, but you have to change more than one thing at a time. So an example of a problem is: Let's say you have Craigslist, which is one system where buyers and sellers meet to buy and sell used things within a local geographic area. Let's say that you have an alternative to Craigslist and your alternative is Danslist, and Danslist is genuinely better. (Let's not worry for a second about how many startups think that without it being true; suppose it's like genuinely better.)\nAll of the sellers on Craigslist want to go someplace that there's buyers. All of the buyers on Craigslist want to go someplace that there's sellers. How do you get your new system started when it can't get started by one person going on to Danslist and two people going on to Danslist? There's no motive for them to go there until there's already a bunch of people on Danslist.\nAn awful lot of times, when you find a system that is stuck in an evil space, what's going on with it is that for it to move out of that space, more than one thing inside it would have to change at a time. So there's all these nice people inside it who would like to be in a better system, but everything they could locally do on their own initiative is not going to fix the system, and it's going to make things worse for them.\nThat's the kind of problem that scientists have with trying to get away from the journals that are just ripping them off. They're starting to move away from those journals, but journals have prestige based on the scientists that publish there and the other scientists that cite them, and if you just start this one new journal all by yourself and move there all by yourself, it has a low impact factor. So everyone's got to move simultaneously. That's how the scam went on for 10 years. 10 years is a long time, but they couldn't all jump to the new system because they couldn't jump one at a time.\n\nSam: Right. The problem is that the world is organized in such a way that it is rational for each person to continue to behave the way he or she is behaving in this highly suboptimal way, given the way everyone else is behaving. And to change your behavior by yourself isn't sufficient to change the system, and is therefore locally irrational, because your life will get worse if you change by yourself. Everyone has to coordinate their changing so as to move to some better equilibrium.\n\nEliezer: That's one of the fundamental foundational ways that systems can get stuck. There are others.\n\nSam: The example that I often use when talking about problems of this sort is life in a maximum-security prison, which is as perversely bad as one can imagine.
The incentives are aligned in such a way that no matter how good you are, if you're put into a maximum-security prison, it is only rational for you to behave terribly and unethically and in such a way as to guarantee that this place is far more unpleasant than it need be, just because of how things are structured.\nOne example that I've used, and that people are familiar with at this point from having read books and seen movies that depict this more or less accurately: whether or not you're a racist, your only rational choice, apparently, is to join a gang that is aligned along the variable of race. And if you fail to do this, you'll be preyed upon by everyone. So if you're a white guy, you have to join the white Aryan neo-Nazi gang. If you're a black guy, you have to join the black gang. Otherwise, you're just in the middle of this war of all against all. And there's no way for you, based on your ethical commitment to being non-racist, to change how this is functioning.\nAnd we're living in a similar kind of prison, of sorts, when you just look at how non-optimal many of these attractor states are that we are stuck in civilizationally.\n\nEliezer: Parenthetically, I do want to be slightly careful about using the word \"rational\" to describe the behavior of people stuck in the system, because I consider that to be a very powerful word. It's possible that if they were all really rational and had common knowledge of rationality, they would be able to solve the coordination problem. But humanly speaking—not so much in terms of ideal rationality, but in terms of what people can actually do and the options they actually have—their best choice is still pretty bad systemically.\n\nSam: Yeah. So what do you do in this book? How would you summarize your thesis? How do we move forward? Is there anything to do, apart from publicizing the structure of this problem?\n\nEliezer: It's not really a very hopeful book in that regard. It's more about how to predict which parts of society will perform poorly to the point where you as an individual can manage to do better for yourself, really. One of the examples I give in the book is that my wife has Seasonal Affective Disorder, and she cannot be treated by the tiny little light boxes that your doctor tries to prescribe. So I'm like, \"Okay, if the sun works, there's some amount of light that works, how about if I just try stringing up the equivalent of 100 light bulbs in our apartment?\"\nNow, when you have an idea like this, somebody might ask, \"Well, okay, but you're not thinking in isolation. There's a civilization around you. If this works, shouldn't there be a record of it? Shouldn't a researcher have investigated it already?\" There's literally probably more than 100 million people around the world, especially in the extreme latitudes, who have some degree of Seasonal Affective Disorder, and some of it's pretty bad. That means that there's a kind of profit, a kind of energy gradient that seems like it could be traversable if solving the problem was as easy as putting up a ton of light bulbs in your apartment. Wouldn't some enterprising researcher have investigated this already? Wouldn't the results be known?\nAnd the answer is, as far as I can tell, no. It hasn't been investigated, the results aren't known, and when I tried putting up a ton of light bulbs, it seems to have worked pretty well for my wife. 
Not perfectly, but a lot better than it used to be.\nSo why isn't this one of the first things you find when you Google \"What do I do about Seasonal Affective Disorder when the light box doesn't work?\" And that's what takes this sort of long story, that's what takes the analysis. That's what takes the thinking about the journal system and what the funding sources are for people investigating Seasonal Affective Disorder, and what kind of publications get the most attention. And whether the barrier of needing to put up 100 light bulbs in a bunch of different apartments for people in the controlled study—which would be difficult to blind, except maybe by using a lot fewer light bulbs—whether the details of having to adapt light bulbs to every house which is different is enough of an obstacle to prevent any researcher from ever investigating this obvious-seeming solution to a problem that probably hundreds of millions of people have, and maybe 50 million people have very severely? As far as I can tell, the answer is yes.\nAnd this is the kind of thinking that does not enable you to save civilization. If there was a way to make an enormous profit by knowing this, the profit would probably already be taken. If it was possible for one person to fix the problem, it would probably already be fixed. But you, personally, can fix your wife's crippling Seasonal Affective Disorder by doing something that science knows not, because of an inefficiency in the funding sources for the researchers.\n\nSam: This is really the global problem we need to figure out how to tackle, which is to recognize those points on which incentives are perversely misaligned so as to guarantee needless suffering or complexity or failure to make breakthroughs that would raise our quality of life immensely. So identify those points and then realign the incentives somehow.\nThe market is in many respects good at this, but there are places where it obviously fails. We don't have many tools to apply the right pressure here. You have the profit motive in markets—so you can either get fantastically rich by solving some problem, or not—or we have governments that can decide, \"Well, this is a problem that markets can't solve because the wealth isn't there to be gotten, strangely, and yet there's an immense amount of human suffering that would be alleviated if you solved this problem. You can't get people for some reason to pay for the alleviation of that suffering, reliably.\" But apart from markets and governments, are there any other large hammers to be wielded here?\n\nEliezer: I mean, sort of crowdfunding, I guess, although the hammer currently isn't very large. But mostly, like I said, this book is about where you can do better individually or in small groups and when you shouldn't assume that society knows what it's doing; and it doesn't have a bright message of hope about how to fix things.\nI'm sort of prejudiced personally over here, because I think that the artificial general intelligence timeline is likely to run out before humanity gets that much better at solving inadequacy, systemic problems in general. I don't really see human nature or even human practice changing by that much over the amount of time we probably have left.\nEconomists already know about market failures. That's a concept they already have. They already have the concept of government trying to correct it. 
It's not obvious to me that there is a quantum leap to be made staying within just those dimensions of thinking about the problem.\nIf you ask me, \"Hey, Eliezer: it's five years in the future, there's still no artificial general intelligence, and a great leap forward has occurred in people to deal with these types of systemic issues. How did that happen?\" Then my guess would be something like Kickstarter, but much better, that turned out to enable people in large groups to move forward when none of them could move forward individually. Something like the group movements that scientists made without all that much help from the government (although there was help from funders changing their policies) to jump to new journals all at the same time, and get partially away from the Elsevier closed-source journal scam. Maybe there's something brilliant that Facebook does—with machine learning, even. They get better at showing people things that are solutions to their coordination problems; they're better at routing those around when they exist, and people learn that these things work and they jump using them simultaneously. And by these means, voters start to elect politicians who are not nincompoops, as opposed to choosing whichever nincompoop on offer is most appealing.\nBut this is a fairy tale. This is not a prediction. This is, \"If you told me that somehow this had gotten significantly better in five years, what happened?\" This is me making up what might have happened.\n\n\n\n8. Rapid capability gain in AGI (1:59:02)\n\nSam: Right. Well, I don't see how that deals with the main AI concern we've been talking about. I can see some shift, or some solution to a massive coordination problem, politically or in just the level of widespread human behavior—let's say our use of social media and our vulnerability to fake news and conspiracy theories and other crackpottery, let's say we find some way to all shift our information diet and our expectations and solve a coordination problem that radically cleans up our global conversation. I can see that happening.\nBut when you're talking about dealing with the alignment problem, you're talking about changing the behavior of a tiny number of people comparatively. I mean, I don't know what it is. What's the community of AI researchers now? It's got to be numbered really in the hundreds when you're talking about working on AGI. But what will it be when we're close to the finish line? How many minds would have to suddenly change and become immune to the wrong economic incentives to coordinate the solution there? What are we talking about, 10,000 people?\n\nEliezer: I mean, first of all, I don't think we're looking at an economic problem. I think that artificial general intelligence capabilities, once they exist, are going to scale too fast for that to be a useful way to look at the problem. AlphaZero going from 0 to 120 mph in four hours or a day—that is not out of the question here. And even if it's a year, a year is still a very short amount of time for things to scale up.
I think that the main thing you should be trying to do with the first artificial general intelligence ever built is a very narrow, non-ambitious task that shuts down the rest of the arms race by putting off switches in all the GPUs and shutting them down if anyone seems to be trying to build an overly artificially intelligent system.\nBecause I don't think that the AI that you have built narrowly enough that you understood what it was doing is going to be able to defend you from arbitrary unrestrained superintelligences. The AI that you have built understandably enough to be good and not done fully general recursive self-improvement is not strong enough to solve the whole problem. It's not strong enough to have everyone else going off and developing their own artificial general intelligences after that without that automatically destroying the world.\n\nSam: We've been speaking for now over two hours; what can you say to someone who has followed us this long, but for whatever reason the argument we've made has not summed to being emotionally responsive to the noises you just made. Is there anything that can be briefly said so as to give them pause?\n\nEliezer: I'd say this is a thesis of capability gain. This is a thesis of how fast artificial general intelligence gains in power once it starts to be around, whether we're looking at 20 years (in which case this scenario does not happen) or whether we're looking at something closer to the speed at which Go was developed (in which case it does happen) or the speed at which AlphaZero went from 0 to 120 and better-than-human (in which case there's a bit of an issue that you better prepare for in advance, because you're not going to have very long to prepare for it once it starts to happen).\nAnd I would say this is a computer science issue. This is not here to be part of a narrative. This is not here to fit into some kind of grand moral lesson that I have for you about how civilization ought to work. I think that this is just the way the background variables are turning up.\nWhy do I think that? It's not that simple. I mean, I think a lot of people who see the power of intelligence will already find that pretty intuitive, but if you don't, then you should read my paper Intelligence Explosion Microeconomics about returns on cognitive reinvestment. It goes through things like the evolution of human intelligence and how the logic of evolutionary biology tells us that when human brains were increasing in size, there were increasing marginal returns to fitness relative to the previous generations for increasing brain size. Which means that it's not the case that as you scale intelligence, it gets harder and harder to buy. It's not the case that as you scale intelligence, you need exponentially larger brains to get linear improvements.\nAt least something slightly like the opposite of this is true; and we can tell this by looking at the fossil record and using some logic, but that's not simple.\n\nSam: Comparing ourselves to chimpanzees works. We don't have brains that are 40 times the size or 400 times the size of chimpanzees, and yet what we're doing—I don't know what measure you would use, but it exceeds what they're doing by some ridiculous factor.\n\nEliezer: And I find that convincing, but other people may want additional details. And my message would be that the emergency situation is not part of a narrative. It's not there to make the point of some kind of moral lesson. 
It's my prediction as to what happens, after walking through a bunch of technical arguments as to how fast intelligence scales when you optimize it harder.\nAlphaZero seems to me like a genuine case in point. That is showing us that capabilities that in humans require a lot of tweaking and that human civilization built up over centuries of masters teaching students how to play Go, and that no individual human could invent in isolation… Even the most talented Go player, if you plopped them down in front of a Go board and gave them only a day, would play garbage. If they had to invent all of their own Go strategies without being part of a civilization that played Go, they would not be able to defeat modern Go players at all. AlphaZero blew past all of that in less than a day, starting from scratch, without looking at any of the games that humans played, without looking at any of the theories that humans had about Go, without looking at any of the accumulated knowledge that we had, and without very much in the way of special-case code for Go rather than chess—in fact, zero special-case code for Go rather than chess. And that in turn is an example that refutes another thesis about how artificial general intelligence develops slowly and gradually, which is: \"Well, it's just one mind; it can't beat our whole civilization.\"\nI would say that there's a bunch of technical arguments which you walk through, and then after walking through these arguments you assign a bunch of probability, maybe not certainty, to artificial general intelligence that scales in power very fast—a year or less. And in this situation, if alignment is technically difficult, if it is easy to screw up, if it requires a bunch of additional effort—in this scenario, if we have an arms race between people who are trying to get their AGI first by doing a little bit less safety because from their perspective that only drops the probability a little; and then someone else is like, \"Oh no, we have to keep up. We need to strip off the safety work too. Let's strip off a bit more so we can get in the front.\"—if you have this scenario, and by a miracle the first people to cross the finish line have actually not screwed up and they actually have a functioning powerful artificial general intelligence that is able to prevent the world from ending, you have to prevent the world from ending. You are in a terrible, terrible situation. You've got your one miracle. And this follows from the rapid capability gain thesis and at least the current landscape for how these things are developing.\n\nSam: Let's just linger on this point for a second. This fast takeoff—is this assuming recursive self improvement? And how fringe an idea is this in the field? Are most people who are thinking about this assuming (for good reason or not) that a slow takeoff is far more likely, over the course of many, many years, and that the analogy to AlphaZero is not compelling?\n\nEliezer: I think they are too busy explaining why current artificial intelligence methods do not knowably, quickly, immediately give us artificial general intelligence—from which they then conclude that it is 30 years off. They have not said, \"And then once we get there, it's going to develop much more slowly than AlphaZero, and here's why.\" There isn't a thesis to that effect that I've seen from artificial intelligence people. 
Robin Hanson had a thesis to this effect, and there was this mighty debate on our blog between Robin Hanson and myself that was published as the AI-Foom Debate mini-book. And I have claimed recently on Facebook that now that we've seen AlphaZero, AlphaZero seems like strong evidence against Hanson's thesis for how these things necessarily go very slow because they have to duplicate all the work done by human civilization and that's hard.\n\nSam: I'm actually going to be doing a podcast with Robin in a few weeks, a live event. So what's the best version of his argument, and why is he wrong?\n\nEliezer: Nothing can prepare you for Robin Hanson! (laughs)\nWell, the argument that Hanson has given is that these systems are still immature and narrow and things will change when they get general. And my reply has been something like, \"Okay, what changes your mind short of the world actually ending? If your theory is wrong, do we get to find out about that at all before the world ends?\"\n\nSam: To which he says?\n\nEliezer: I don't remember if he's replied to that one yet.\n\nSam: I'll let Robin be Robin. Well, listen, Eliezer, it has been great to talk to you, and I'm glad we got a chance to do it at such length. And again, it does not exhaust the interest or consequence of this topic, but it's certainly a good start for people who are new to this. Before I let you go, where should people look for you online? Do you have a preferred domain that we could target?\n\nEliezer: I would mostly say intelligence.org. If you're looking for me personally, facebook.com/yudkowsky, and if you're looking for my most recent book, equilibriabook.com.\n\nSam: I'll put links on my website where I embed this podcast. So again, Eliezer, thanks so much—and to be continued. I always love talking to you, and this will not be the last time, AI willing.\n\nEliezer: This was a great conversation, and thank you very much for having me on.\n\nThe post Sam Harris and Eliezer Yudkowsky on \"AI: Racing Toward the Brink\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Sam Harris and Eliezer Yudkowsky on “AI: Racing Toward the Brink”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=17", "id": "7b3eb36eef3b258267d327072aae2512"} {"text": "February 2018 Newsletter\n\n\nUpdates\n\nNew at IAFF: An Untrollable Mathematician\nNew at AI Impacts: 2015 FLOPS Prices\nWe presented \"Incorrigibility in the CIRL Framework\" at the AAAI/ACM Conference on AI, Ethics, and Society.\nFrom MIRI researcher Scott Garrabrant: Sources of Intuitions and Data on AGI\n\nNews and links\n\nIn \"Adversarial Spheres,\" Gilmer et al. investigate the tradeoff between test error and vulnerability to adversarial perturbations in many-dimensional spaces.\nRecent posts on Less Wrong: Critch on \"Taking AI Risk Seriously\" and Ben Pace's background model for assessing AI x-risk plans.\n\"Solving the AI Race\": GoodAI is offering prizes for proposed responses to the problem that \"key stakeholders, including [AI] developers, may ignore or underestimate safety procedures, or agreements, in favor of faster utilization\".\nThe Open Philanthropy Project is hiring research analysts in AI alignment, forecasting, and strategy, along with generalist researchers and operations staff.
\n\n\nThe post February 2018 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "February 2018 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=18", "id": "ddb605fd736666f81c988385890da9af"} {"text": "January 2018 Newsletter\n\n\nOur 2017 fundraiser was a huge success, with 341 donors contributing a total of $2.5 million!\nSome of the largest donations came from Ethereum inventor Vitalik Buterin, bitcoin investors Christian Calderon and Marius van Voorden, poker players Dan Smith and Tom and Martin Crowley (as part of a matching challenge), and the Berkeley Existential Risk Initiative. Thank you to everyone who contributed!\nResearch updates\n\nThe winners of the first AI Alignment Prize include Scott Garrabrant's Goodhart Taxonomy and recent IAFF posts: Vanessa Kosoy's Why Delegative RL Doesn't Work for Arbitrary Environments and More Precise Regret Bound for DRL, and Alex Mennen's Being Legible to Other Agents by Committing to Using Weaker Reasoning Systems and Learning Goals of Simple Agents.\nNew at AI Impacts: Human-Level Hardware Timeline; Effect of Marginal Hardware on Artificial General Intelligence\nWe're hiring for a new position at MIRI: ML Living Library, a specialist on the newest developments in machine learning.\n\nGeneral updates\n\nFrom Eliezer Yudkowsky: A Reply to Francois Chollet on Intelligence Explosion.\nCounterterrorism experts Richard Clarke and R. P. Eddy profile Yudkowsky in their new book Warnings: Finding Cassandras to Stop Catastrophes.\nThere have been several recent blog posts recommending MIRI as a donation target: from Ben Hoskin, Zvi Mowshowitz, Putanumonit, and the Open Philanthropy Project's Daniel Dewey and Nick Beckstead.\n\nNews and links\n\nA generalization of the AlphaGo algorithm, AlphaZero, achieves rapid superhuman performance on Chess and Shogi.\nAlso from Google DeepMind: \"Specifying AI Safety Problems in Simple Environments.\"\nViktoriya Krakovna reports on NIPS 2017: \"This year's NIPS gave me a general sense that near-term AI safety is now mainstream and long-term safety is slowly going mainstream. […] There was a lot of great content on the long-term side, including several oral / spotlight presentations and the Aligned AI workshop.\"\n80,000 Hours interviews Phil Tetlock and investigates the most important talent gaps in the EA community.\nFrom Seth Baum: \"A Survey of AGI Projects for Ethics, Risk, and Policy.\" And from the Foresight Institute: \"AGI: Timeframes & Policy.\"\nThe Future of Life Institute is collecting proposals for a second round of AI safety grants, due February 18.\n\n\nThe post January 2018 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "January 2018 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=18", "id": "f21e48d824ea1cfe5faaeb5fd11e308d"} {"text": "Fundraising success!\n\nOur 2017 fundraiser is complete! We've had an incredible month, with, by far, our largest fundraiser success to date. More than 300 distinct donors gave just over $2.5M1, doubling our third fundraising target of $1.25M. 
Thank you!\n\n$2,504,625 raised in total; 358 donors contributed.\n\nTarget 1 ($625,000): Completed\nTarget 2 ($850,000): Completed\nTarget 3 ($1,250,000): Completed\n\nTarget descriptions:\n\n$625k: Basic target\nAt this funding level, we'll be in a good position to pursue our mainline hiring goal in 2018, although we will likely need to halt or slow our growth in 2019.\n\n$850k: Mainline-growth target\nAt this level, we'll be on track to fully fund our planned expansion over the next few years, allowing us to roughly double the number of research staff over the course of 2018 and 2019.\n\n$1.25M: Rapid-growth target\nAt this funding level, we will be on track to maintain a 1.5-year runway even if our hiring proceeds a fair amount faster than our mainline prediction. We'll also have greater freedom to pay higher salaries to top-tier candidates as needed.\n\nOur largest donation came toward the very end of the fundraiser in the form of an Ethereum donation worth $763,970 from Vitalik Buterin, the inventor and co-founder of Ethereum. Vitalik's donation represents the third-largest single contribution we've received to date, after a $1.25M grant disbursement from the Open Philanthropy Project in October, and a $1.01M Ethereum donation in May.\nIn our mid-fundraiser update, we noted that MIRI was included in a large Matching Challenge: In partnership with Raising for Effective Giving, professional poker players Dan Smith, Tom Crowley and Martin Crowley announced they would match all donations to MIRI and nine other organizations through the end of December. Donors helped get us to our matching cap of $300k within 2 weeks, resulting in a $300k match from Dan, Tom, and Martin (thanks guys!). Other big winners from the Matching Challenge, which raised $4.5m (match included) in less than 3 weeks, include GiveDirectly ($588k donated) and the Good Food Institute ($416k donated).\nOther big donations we received in December included:\n\n$367,575 from Christian Calderon\n$100,000 from the Berkeley Existential Risk Initiative\n$59,251 from Marius van Voorden\n\nWe also received substantial support from medium-sized donors: a total of $631,595 from the 42 donors who gave $5,000–$50,000 and a total of $113,556 from the 75 who gave $500–$5,000 (graph). We are also grateful to donors who leveraged their employers' matching generosity, donating a combined amount of over $100,000 during December.\n66% of funds donated during this fundraiser were in the form of cryptocurrency (mainly Bitcoin and Ethereum), including Vitalik, Marius, and Christian's donations, along with Dan, Tom, and Martin's matching contributions.\nOverall, we've had an amazingly successful month and a remarkable year! I'm extremely grateful for all the support we've received, and excited about the opportunity this creates for us to grow our research team more quickly. For details on our growth plans, see our fundraiser post.\n(The exact total might increase slightly over the coming weeks as we process donations initiated in December 2017 that arrive in January 2018.)\nThe post Fundraising success!
appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Fundraising success!", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=18", "id": "80d5d9458d58918e5d18ef678b8b0e17"} {"text": "End-of-the-year matching challenge!\n\nUpdate 2017-12-27: We've blown past our 3rd and final target, and reached the matching cap of $300,000 for the Matching Challenge! Thanks so much to everyone who supported us!\nAll donations made before 23:59 PST on Dec 31st will continue to be counted towards our fundraiser total. The fundraiser total includes projected matching funds from the Challenge.\n\nProfessional poker players Martin Crowley, Tom Crowley, and Dan Smith, in partnership with Raising for Effective Giving, have just announced a $1 million Matching Challenge and included MIRI among the 10 organizations they are supporting!\nGive to any of the organizations involved before noon (PST) on December 31 for your donation to be eligible for a dollar-for-dollar match, up to the $1 million limit!\nThe eligible organizations for matching are:\n\nAnimal welfare — Effective Altruism Funds' animal welfare fund, The Good Food Institute\nGlobal health and development — Against Malaria Foundation, Schistosomiasis Control Initiative, Helen Keller International's vitamin A supplementation program, GiveDirectly\nGlobal catastrophic risk — MIRI\nCriminal justice reform — Brooklyn Community Bail Fund, Massachusetts Bail Fund, Just City Memphis\n\nThe Matching Challenge's website lists two options for MIRI donors to get matched: (1) donating on 2017charitydrive.com, or (2) donating directly on MIRI's website and sending the receipt to . We recommend option 2, particularly for US tax residents (because MIRI is a 501(c)(3) organization) and those looking for a wider array of payment methods.\n \nIn other news, we've hit our first fundraising target ($625,000)!\nWe're also happy to announce that we've received a $368k bitcoin donation from Christian Calderon, a cryptocurrency enthusiast, and also a donation worth $59k from early bitcoin investor Marius van Voorden.\nIn total, so far, we've received donations valued at $697,638 from 137 distinct donors, 76% of it in the form of cryptocurrency (48% if we exclude Christian's donation). Thanks as well to Jacob Falkovich for his fundraiser/matching post whose opinion distribution curves plausibly raised over $27k for MIRI this week, including his match.\nOur funding drive will be continuing through the end of December, along with the Matching Challenge.\n\nCorrection December 17: I previously listed GiveWell as one of the eligible organizations for matching, which is not correct.\nThe post End-of-the-year matching challenge! appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "End-of-the-year matching challenge!", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=18", "id": "abfe4419c7eaeca3f0de8b86bd55d03f"} {"text": "ML Living Library Opening\n\nUpdate Jan.
2021: We're no longer seeking applications for this position.\n\nThe Machine Intelligence Research Institute is looking for a very specialized autodidact to keep us up to date on developments in machine learning—a \"living library\" of new results.\nML is a fast-moving and diverse field, making it a challenge for any group to stay updated on all the latest and greatest developments. To support our AI alignment research efforts, we want to hire someone to read every interesting-looking paper about AI and machine learning, and keep us abreast of noteworthy developments, including new techniques and insights.\nWe expect that this will sound like a very fun job to a lot of people! However, this role is important to us, and we need to be appropriately discerning—we do not recommend applying if you do not already have a proven ability in this or neighboring domains.\nOur goal is to hire full-time, ideally for someone who would be capable of making a multi-year commitment—we intend to pay you to become an expert on the cutting edge of machine learning, and don't want to make the human capital investment unless you're interested in working with us long-term.\n \n\nAbout the Role\nWe'd like to fill this position as soon as we find the right candidate. Our hiring process tends to involve a lot of sample tasks and probationary hires, so if you are interested, we encourage you to apply early.\nThis is a new position for a kind of work that isn't standard. Although we hope to find someone who can walk in off the street and perform well, we're also interested in candidates who think they might take three months of training to meet the requirements.\nExamples of the kinds of work you'll do:\n\nRead through archives and journals to get a sense of literally every significant development in the field, past and present.\nTrack general trends in the ML space—e.g., \"Wow, there sure is a lot of progress being made on Dota 2!\"—and let us know about them.\nHelp an engineer figure out why their code isn't working—e.g., \"Oh, you forgot the pooling layer in your convolutional neural network.\"\nAnswer/research MIRI staff questions about ML techniques or the history of the field.\nShare important developments proactively; researchers who haven't read the same papers as you often won't know the right questions to ask unprompted!\n\n \nThe Ideal Candidate\nSome qualities of the ideal candidate:\n\nExtensive breadth and depth of machine learning knowledge, including the underlying math.\nFamiliarity with ideas related to AI alignment.\nDelight at the thought of getting paid to Know All The Things.\nProgramming capability—for example, you've replicated some ML papers.\nAbility to deeply understand and crisply communicate ideas. Someone who just recites symbols to our team without understanding them isn't much better than a web scraper.\nEnthusiasm about the prospect of working at MIRI and helping advance the field's understanding of AI alignment.\nResidence in (or willingness to move to) the Bay Area. This job requires high-bandwidth communication with the research team, and won't work as a remote position.\nAbility to work independently with minimal supervision, and also to work in team/group settings.\n\n\n \nWorking at MIRI\nWe strive to make working at MIRI a rewarding experience.\n\nModern Work Spaces — Many of us have adjustable standing desks with large external monitors. We consider workspace ergonomics important, and try to rig up workstations to be as comfortable as possible. 
Free snacks, drinks, and meals are also provided at our office.\nFlexible Hours — We don't have strict office hours, and we don't limit employees' vacation days. Our goal is to make rapid progress on our research agenda, and we would prefer that staff take a day off rather than extend tasks to fill an extra day.\nLiving in the Bay Area — MIRI's office is located in downtown Berkeley, California. From our office, you're a 30-second walk to the BART (Bay Area Rapid Transit), which can get you around the Bay Area; a 3-minute walk to UC Berkeley campus; and a 30-minute BART ride to downtown San Francisco.\n\n \nEEO & Employment Eligibility\nMIRI is an equal opportunity employer. We are committed to making employment decisions based on merit and value. This commitment includes complying with all federal, state, and local laws. We desire to maintain a work environment free of harassment or discrimination due to sex, race, religion, color, creed, national origin, sexual orientation, citizenship, physical or mental disability, marital status, familial status, ethnicity, ancestry, status as a victim of domestic violence, age, or any other status protected by federal, state, or local laws\n \nApply\nIf interested, click here to apply. For questions or comments, email Buck ().\nThe post ML Living Library Opening appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "ML Living Library Opening", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=18", "id": "92f0d0ffaf7eeebcab1de571535ec53e"} {"text": "A reply to Francois Chollet on intelligence explosion\n\nThis is a reply to Francois Chollet, the inventor of the Keras wrapper for the Tensorflow and Theano deep learning systems, on his essay \"The impossibility of intelligence explosion.\"\nIn response to critics of his essay, Chollet tweeted:\n \nIf you post an argument online, and the only opposition you get is braindead arguments and insults, does it confirm you were right? Or is it just self-selection of those who argue online?\nAnd he earlier tweeted:\n \nDon't be overly attached to your views; some of them are probably incorrect. An intellectual superpower is the ability to consider every new idea as if it might be true, rather than merely checking whether it confirms/contradicts your current views.\nChollet's essay seemed mostly on-point and kept to the object-level arguments. I am led to hope that Chollet is perhaps somebody who believes in abiding by the rules of a debate process, a fan of what I'd consider Civilization; and if his entry into this conversation has been met only with braindead arguments and insults, he deserves a better reply. I've tried here to walk through some of what I'd consider the standard arguments in this debate as they bear on Chollet's statements.\nAs a meta-level point, I hope everyone agrees that an invalid argument for a true conclusion is still a bad argument. To arrive at the correct belief state we want to sum all the valid support, and only the valid support. To tally up that support, we need to have a notion of judging arguments on their own terms, based on their local structure and validity, and not excusing fallacies if they support a side we agree with for other reasons.\nMy reply to Chollet doesn't try to carry the entire case for the intelligence explosion as such. I am only going to discuss my take on the validity of Chollet's particular arguments. 
Even if the statement \"an intelligence explosion is impossible\" happens to be true, we still don't want to accept any invalid arguments in favor of that conclusion.\nWithout further ado, here are my thoughts in response to Chollet.\n\n \n\nThe basic premise is that, in the near future, a first \"seed AI\" will be created, with general problem-solving abilities slightly surpassing that of humans. This seed AI would start designing better AIs, initiating a recursive self-improvement loop that would immediately leave human intelligence in the dust, overtaking it by orders of magnitude in a short time.\nI agree this is more or less what I meant by \"seed AI\" when I coined the term back in 1998. Today, nineteen years later, I would talk about a general question of \"capability gain\" or how the power of a cognitive system scales with increased resources and further optimization. The idea of recursive self-improvement is only one input into the general questions of capability gain; for example, we recently saw some impressively fast scaling of Go-playing ability without anything I'd remotely consider as seed AI being involved. That said, I think that a lot of the questions Chollet raises about \"self-improvement\" are relevant to capability-gain theses more generally, so I won't object to the subject of conversation.\n \nProponents of this theory also regard intelligence as a kind of superpower, conferring its holders with almost supernatural capabilities to shape their environment \nA good description of a human from the perspective of a chimpanzee.\nFrom a certain standpoint, the civilization of the year 2017 could be said to have \"magic\" from the perspective of 1517. We can more precisely characterize this gap by saying that we in 2017 can solve problems using strategies that 1517 couldn't recognize as a \"solution\" if described in advance, because our strategies depend on laws and generalizations not known in 1517. E.g., I could show somebody in 1517 a design for a compressor-based air conditioner, and they would not be able to recognize this as \"a good strategy for cooling your house\" in advance of observing the outcome, because they don't yet know about the temperature-pressure relation. A fancy term for this would be \"strong cognitive uncontainability\"; a metaphorical term would be \"magic\" although of course we did not do anything actually supernatural. A similar but much larger gap exists between a human and a smaller brain running the previous generation of software (aka a chimpanzee).\nIt's not exactly unprecedented to suggest that big gaps in cognitive ability correspond to big gaps in pragmatic capability to shape the environment. I think a lot of people would agree in characterizing intelligence as the Human Superpower, independently of what they thought about the intelligence explosion hypothesis.\n \n— as seen in the science-fiction movie Transcendence (2014), for instance. \n\nI agree that public impressions of things are things that someone ought to be concerned about. If I take a ride-share and I mention that I do anything involving AI, half the time the driver says, \"Oh, like Skynet!\" This is an understandable reason to be annoyed. But if we're trying to figure out the sheerly factual question of whether an intelligence explosion is possible and probable, it's important to consider the best arguments on all sides of all relevant points, not the popular arguments. 
For that purpose it doesn't matter if Deepak Chopra's writing on quantum mechanics has a larger readership than any actual physicist.\nThankfully Chollet doesn't spend the rest of the essay attacking Kurzweil in particular, so I'll leave this at that.\n \n The intelligence explosion narrative equates intelligence with the general problem-solving ability displayed by individual intelligent agents — by current human brains, or future electronic brains. \nI don't see what work the word \"individual\" is doing within this sentence. From our perspective, it matters little whether a computing fabric is imagined to be a hundred agents or a single agency, if it seems to behave in a coherent goal-directed way as seen from outside. The pragmatic consequences are the same. I do think it's fair to say that I think about \"agencies\" which from our outside perspective seem to behave in a coherent goal-directed way.\n \n The first issue I see with the intelligence explosion theory is a failure to recognize that intelligence is necessarily part of a broader system — a vision of intelligence as a \"brain in jar\" that can be made arbitrarily intelligent independently of its situation.\nI'm not aware of myself or Nick Bostrom or another major technical voice in this field claiming that problem-solving can go on independently of the situation/environment.\nThat said, some systems function very well in a broad variety of structured low-entropy environments. E.g. the human brain functions much better than other primate brains in an extremely broad set of environments, including many that natural selection did not explicitly optimize for. We remain functional on the Moon, because the Moon has enough in common with the Earth on a sufficiently deep meta-level that, for example, induction on past experience goes on functioning there. Now if you tossed us into a universe where the future bore no compactly describable relation to the past, we would indeed not do very well in that \"situation\"—but this is not pragmatically relevant to the impact of AI on our own real world, where the future does bear a relation to the past.\n \nIn particular, there is no such thing as \"general\" intelligence. On an abstract level, we know this for a fact via the \"no free lunch\" theorem — stating that no problem-solving algorithm can outperform random chance across all possible problems.\nScott Aaronson's reaction: \"Citing the 'No Free Lunch Theorem'—i.e., the (trivial) statement that you can't outperform brute-force search on random instances of an optimization problem—to claim anything useful about the limits of AI, is not a promising sign.\"\nIt seems worth spelling out an as-simple-as-possible special case of this point in mathy detail, since it looked to me like a central issue given the rest of Chollet's essay. I expect this math isn't new to Chollet, but reprise it here to establish common language and for the benefit of everyone else reading along.\nLaplace's Rule of Succession, as invented by Thomas Bayes, gives us one simple rule for predicting future elements of a binary sequence based on previously observed elements. Let's take this binary sequence to be a series of \"heads\" and \"tails\" generated by some sequence generator called a \"coin\", not assumed to be fair. 
In the standard problem setup yielding the Rule of Succession, our state of prior ignorance is that we think there is some frequency \\(\theta\\) that a coin comes up heads, and for all we know \\(\theta\\) is equally likely to take on any real value between \\(0\\) and \\(1\\). We can do some Bayesian inference and conclude that after seeing \\(M\\) heads and \\(N\\) tails, we should predict that the odds for heads : tails on the next coinflip are:\n \n$$\frac{M + 1}{M + N + 2} : \frac{N + 1}{M + N + 2}$$\n \n(See Laplace's Rule of Succession for the proof.)\nThis rule yields advice like: \"If you haven't yet observed any coinflips, assign 50-50 to heads and tails\" or \"If you've seen four heads and no tails, assign 1/6 probability rather than 0 probability to the next flip being tails\" or \"If you've seen the coin come up heads 150 times and tails 75 times, assign around 2/3 probability to the coin coming up heads next time.\"\nNow this rule does not do super-well in any possible kind of environment. In particular, it doesn't do any better than the maximum-entropy prediction \"the next flip has a 50% probability of being heads, or tails, regardless of what we have observed previously\" if the environment is in fact a fair coin. In general, there is \"no free lunch\" on predicting arbitrary binary sequences; if you assign greater probability mass or probability density to one binary sequence or class of sequences, you must have done so by draining probability from other binary sequences. If you begin with the prior that every binary sequence is equally likely, then you never expect any algorithm to do better on average than maximum entropy, even if that algorithm luckily does better in one particular random draw.\nOn the other hand, if you start from the prior that every binary sequence is equally likely, you never notice anything a human would consider an obvious pattern. If you start from the maxentropy prior, then after observing a coin come up heads a thousand times, and tails never, you still predict 50-50 on the next draw; because on the maxentropy prior, the sequence \"one thousand heads followed by tails\" is exactly as likely as \"one thousand heads followed by heads\".\nThe inference rule instantiated by Laplace's Rule of Succession does better in a generic low-entropy universe of coinflips. It doesn't start from specific knowledge; it doesn't begin from the assumption that the coin is biased heads, or biased tails. If the coin is biased heads, Laplace's Rule learns that; if the coin is biased tails, Laplace's Rule will soon learn that from observation as well. If the coin is actually fair, then Laplace's Rule will rapidly converge to assigning probabilities in the region of 50-50 and not do much worse per coinflip than if we had started with the max-entropy prior.\nCan you do better than Laplace's Rule of Succession? Sure; if the environment's probability of generating heads is equal to 0.73 and you start out knowing that, then you can guess on the very first round that the probability of seeing heads is 73%. But even with this non-generic and highly specific knowledge built in, you do not do very much better than Laplace's Rule of Succession unless the first coinflips are very important to your future survival.
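To make the rule concrete, here is a minimal Python sketch (an editorial illustration, not part of the original essay or reply) that reproduces the worked numbers above and then lets the rule learn the 0.73-biased coin just mentioned:

# Laplace's Rule of Succession: after M heads and N tails, predict heads with
# probability (M + 1) / (M + N + 2).
import random

def laplace_p_heads(m_heads, n_tails):
    """Posterior predictive probability of heads, given a uniform prior over the coin's bias."""
    return (m_heads + 1) / (m_heads + n_tails + 2)

print(laplace_p_heads(0, 0))      # 0.5    -- no flips observed yet
print(1 - laplace_p_heads(4, 0))  # ~0.167 -- a 1/6 chance of tails after four heads and no tails
print(laplace_p_heads(150, 75))   # ~0.665 -- roughly 2/3 heads after 150 heads and 75 tails

# Watching the rule learn a coin whose true bias is 0.73:
random.seed(0)
m = n = 0
for _ in range(200):
    if random.random() < 0.73:
        m += 1
    else:
        n += 1
print(laplace_p_heads(m, n))      # lands in the neighborhood of 0.73

Nothing in the argument depends on this simulation; it just shows the rule doing what the surrounding text describes.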
Laplace's Rule will probably figure out the answer is somewhere around 3/4 in the first dozen rounds, and get to the answer being somewhere around 73% after a couple of hundred rounds, and if the answer isn't 0.73 it can handle that case too.\nIs Laplace's Rule the most general possible rule for inferring binary sequences? Obviously not; for example, if you saw the initial sequence…\n$$HTHTHTHTHTHTHTHT…$$\n \n…then you would probably guess with high though not infinite probability that the next element generated would be \\(H\\). This is because you have the ability to recognize a kind of pattern which Laplace's Rule does not, i.e., alternating heads and tails. Of course, your ability to recognize this pattern only helps you in environments that sometimes generate a pattern like that—which the real universe sometimes does. If we tossed you into a universe which just as frequently presented you with 'tails' after observing a thousand perfect alternating pairs, as it did 'heads', then your pattern-recognition ability would be useless. Of course, a max-entropy universe like that will usually not present you with a thousand perfect alternations in the initial sequence to begin with!\nOne extremely general but utterly intractable inference rule is Solomonoff induction, a universal prior which assigns probabilities to every computable sequence (or computable probability distribution over sequences) proportional to algorithmic simplicity, that is, in inverse proportion to the exponential of the size of the program required to specify the computation. Solomonoff induction can learn from observation any sequence that can be generated by a compact program, relative to a choice of universal computer which has at most a bounded effect on the amount of evidence required or the number of mistakes made. 
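As a second editorial sketch in the same vein (again not from the original essay or reply), the gap between plain Laplace and a pattern-spotting predictor can be made concrete by applying Laplace's Rule separately to what follows each symbol, which is already a slightly more general inference rule:

# On the perfectly alternating sequence HTHT..., plain Laplace converges to 0.5 and
# never exploits the pattern; Laplace applied conditionally to "what follows H" and
# "what follows T" becomes nearly certain about the next element.
seq = "HT" * 500  # a thousand flips of perfect alternation, ending in T

def laplace(heads, tails):
    return (heads + 1) / (heads + tails + 2)

# Plain Laplace: only the overall counts of heads and tails.
print(laplace(seq.count("H"), seq.count("T")))  # 0.5 -- the alternation is invisible

# Conditional Laplace: counts of what follows each symbol.
follows = {"H": [0, 0], "T": [0, 0]}  # [heads-after, tails-after]
for prev, nxt in zip(seq, seq[1:]):
    follows[prev][0 if nxt == "H" else 1] += 1
print(laplace(*follows[seq[-1]]))  # ~0.998 -- the next element is almost certainly H

Solomonoff induction can be thought of as pushing this same move to its limit, covering every pattern a compact program can express rather than just fixed-order conditional counts.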
Of course a Solomonoff inductor will do slightly-though-not-much-worse than the max-entropy prior in a hypothetical structure-avoiding universe in which algorithmically compressible sequences are less likely; thankfully we don't live in a universe like that.\nIt would then seem perverse not to recognize that for large enough milestones we can see an informal ordering from less general inference rules to more general inference rules, those that do well in an increasingly broad and complicated variety of environments of the sort that the real world is liable to generate:\nThe rule that always assigns probability 0.73 to heads on each round, performs optimally within the environment where each flip has independently a 0.73 probability of coming up heads.\nLaplace's Rule of Succession will start to do equally well as this, given a couple of hundred initial coinflips to see the pattern; and Laplace's Rule also does well in many other low-entropy universes besides, such as those where each flip has 0.07 probability of coming up heads.\nA human is more general and can also spot patterns like \\(HTTHTTHTTHTT\\) where Laplace's Rule would merely converge to assigning probability 1/3 of each flip coming up heads, while the human becomes increasingly certain that a simple temporal process is at work which allows each succeeding flip to be predicted with near-certainty.\nIf anyone ever happened across a hypercomputational device and built a Solomonoff inductor out of it, the Solomonoff inductor would be more general than the human and do well in any environment with a programmatic description substantially smaller than the amount of data the Solomonoff inductor could observe.\nNone of these predictors need do very much worse than the max-entropy prediction in the case that the environment is actually max-entropy. It may not be a free lunch, but it's not all that expensive even by the standards of hypothetical randomized universes; not that this matters for anything, since we don't live in a max-entropy universe and therefore we don't care how much worse we'd do in one.\nSome earlier informal discussion of this point can be found in No-Free-Lunch Theorems Are Often Irrelevant.\n \nIf intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem.\nSome problems are more general than other problems—not relative to a maxentropy prior, which treats all problem subclasses on an equal footing, but relative to the low-entropy universe we actually live in, where a sequence of a million observed heads is on the next round more liable to generate H than T. Similarly, relative to the problem classes tossed around in our low-entropy universe, \"figure out what simple computation generates this sequence\" is more general than a human which is more general than \"figure out what is the frequency of heads or tails within this sequence.\"\nHuman intelligence is a problem-solving algorithm that can be understood with respect to a specific problem class that is potentially very, very broad in a pragmatic sense.\n \nIn a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. 
The intelligence of a human is specialized in the problem of being human.\nThe problem that a human solves is much more general than the problem an octopus solves, which is why we can walk on the Moon and the octopus can't. We aren't absolutely general—the Moon still has a certain something in common with the Earth. Scientific induction still works on the Moon. It is not the case that when you get to the Moon, the next observed charge of an electron has nothing to do with its previously observed charge; and if you throw a human into an alternate universe like that one, the human stops working. But the problem a human solves is general enough to pass from oxygen environments to the vacuum.\n \nWhat would happen if we were to put a freshly-created human brain in the body of an octopus, and let it live at the bottom of the ocean? Would it even learn to use its eight-legged body? Would it survive past a few days? … The brain has hardcoded conceptions of having a body with hands that can grab, a mouth that can suck, eyes mounted on a moving head that can be used to visually follow objects (the vestibulo-ocular reflex), and these preconceptions are required for human intelligence to start taking control of the human body.\nIt could be the case that in this sense a human's motor cortex is analogous to an inference rule that always predicts heads with 0.73 probability on each round, and cannot learn to predict 0.07 instead. It could also be that our motor cortex is more like a Laplace inductor that starts out with 72 heads and 26 tails pre-observed, biased toward that particular ratio, but which can eventually learn 0.07 after another thousand rounds of observation.\nIt's an empirical question, but I'm not sure why it's a very relevant one. It's possible that human motor cortex is hyperspecialized—not just jumpstarted with prior knowledge, but inductively narrow and incapable of learning better—since in the ancestral environment, we never got randomly plopped into octopus bodies. But what of it? If you put some humans at a console and gave them a weird octopus-like robot to learn to control, I'd expect their full deliberate learning ability to do better than raw motor cortex in this regard. Humans using their whole intelligence, plus some simple controls, can learn to drive cars and fly airplanes even though those weren't in our ancestral environment.\nWe also have no reason to believe human motor cortex is the limit of what's possible. If we sometimes got plopped into randomly generated bodies, I expect we'd already have motor cortex that could adapt to octopodes. Maybe MotorCortex Zero could do three days of self-play on controlling randomly generated bodies and emerge rapidly able to learn any body in that class. Or, humans who are allowed to use Keras could figure out how to control octopus arms using ML. The last case would be most closely analogous to that of a hypothetical seed AI.\n \nEmpirical evidence is relatively scarce, but from what we know, children that grow up outside of the nurturing environment of human culture don't develop any human intelligence. Feral children raised in the wild from their earliest years become effectively animals, and can no longer acquire human behaviors or language when returning to civilization.\nHuman visual cortex doesn't develop well without visual inputs. 
This doesn't imply that our visual cortex is a simple blank slate, and that all the information to process vision is stored in the environment, and the visual cortex just adapts to that from a blank slate; if that were true, we'd expect it to easily take control of octopus eyes. The visual cortex requires visual input because of the logic of evolutionary biology: if you make X an environmental constant, the species is liable to acquire genes that assume the presence of X. It has no reason not to. The expected result would be that the visual cortex contains a large amount of genetic complexity that makes it better than generic cerebral cortex at doing vision, but some of this complexity requires visual input during childhood to unfold correctly.\nBut if in the ancestral environment children had grown up in total darkness 10% of the time, before seeing light for the first time on adulthood, it seems extremely likely that we could have evolved to not require visual input in order for the visual cortex to wire itself up correctly. E.g., the retina could have evolved to send in simple hallucinatory shapes that would cause the rest of the system to wire itself up to detect those shapes, or something like that.\nHuman children reliably grow up around other humans, so it wouldn't be very surprising if humans evolved to build their basic intellectual control processes in a way that assumes the environment contains this info to be acquired. We cannot thereby infer how much information is being \"stored\" in the environment or that an intellectual control process would be too much information to store genetically; that is not a problem evolution had reason to try to solve, so we cannot infer from the lack of an evolved solution that such a solution was impossible.\nAnd even if there's no evolved solution, this doesn't mean you can't intelligently design a solution. Natural selection never built animals with steel bones or wheels for limbs, because there's no easy incremental pathway there through a series of smaller changes, so those designs aren't very evolvable; but human engineers still build skyscrapers and cars, etcetera.\nAmong humans, the art of Go is stored in a vast repository of historical games and other humans, and future Go masters among us grow up playing Go as children against superior human masters rather than inventing the whole art from scratch. You would not expect even the most talented human, reinventing the gameplay all on their own, to be able to win a competition match with a first-dan pro.\nBut AlphaGo was initialized on this vast repository of played games in stored form, rather than it needing to actually play human masters.\nAnd then less than two years later, AlphaGo Zero taught itself to play at a vastly human-superior level, in three days, by self-play, from scratch, using a much simpler architecture with no 'instinct' in the form of precomputed features.\nNow one may perhaps postulate that there is some sharp and utter distinction between the problem that AlphaGo Zero solves, and the much more general problem that humans solve, whereby our vast edifice of Go knowledge can be surpassed by a self-contained system that teaches itself, but our general cognitive problem-solving abilities can neither be compressed into a database for initialization, nor taught by self-play. But why suppose that? Human civilization taught itself by a certain sort of self-play; we didn't learn from aliens. 
More to the point, I don't see a sharp and utter distinction between Laplace's Rule, AlphaGo Zero, a human, and a Solomonoff inductor; they just learn successively more general problem classes. If AlphaGo Zero can waltz past all human knowledge of Go, I don't see a strong reason why AGI Zero can't waltz past the human grasp of how to reason well, or how to perform scientific investigations, or how to learn from the data in online papers and databases.\nThis point could perhaps be counterargued, but it hasn't yet been counterargued to my knowledge, and it certainly isn't settled by any theorem of computer science known to me.\n \nIf intelligence is fundamentally linked to specific sensorimotor modalities, a specific environment, a specific upbringing, and a specific problem to solve, then you cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt. Intelligence expansion can only come from a co-evolution of the mind, its sensorimotor modalities, and its environment.\nIt's not obvious to me why any of this matters. Say an AI takes three days to learn to use an octopus body. So what?\nThat is: We agree that it's a mathematical truth that you need \"some amount\" of experience to go from a broadly general prior to a specific problem. That doesn't mean that the required amount of experience is large for pragmatically important problems, or that it takes three decades instead of three days. We cannot casually pass from \"proven: some amount of X is required\" to \"therefore: a large amount of X is required\" or \"therefore: so much X is required that it slows things down a lot\". (See also: Harmless supernova fallacy: bounded, therefore harmless.)\n \nIf the gears of your brain were the defining factor of your problem-solving ability, then those rare humans with IQs far outside the normal range of human intelligence would live lives far outside the scope of normal lives, would solve problems previously thought unsolvable, and would take over the world — just as some people fear smarter-than-human AI will do.\n\"von Neumann? Newton? Einstein?\" —Scott Aaronson\nMore importantly: Einstein et al. didn't have brains that were 100 times larger than a human brain, or 10,000 times faster. By the logic of sexual recombination within a sexually reproducing species, Einstein et al. could not have had a large amount of de novo software that isn't present in a standard human brain. (That is: An adaptation with 10 necessary parts, each of which is only 50% prevalent in the species, will only fully assemble 1 out of 1000 times, which isn't often enough to present a sharp selection gradient on the component genes; complex interdependent machinery is necessarily universal within a sexually reproducing species, except that it may sometimes fail to fully assemble. You don't get \"mutants\" with whole new complex abilities a la the X-Men.)\nHumans are metaphorically all compressed into one tiny little dot in the vastness of mind design space. We're all the same make and model of car running the same engine under the hood, in slightly different sizes and with slightly different ornaments, and sometimes bits and pieces are missing. Even with respect to other primates, from whom we presumably differ by whole complex adaptations, we have 95% shared genetic material with chimpanzees. 
Variance between humans is not something that thereby establishes bounds on possible variation in intelligence, unless you import some further assumption not described here.\nThe standard reply to anyone who deploys e.g. the Argument from Gödel to claim the impossibility of AGI is to ask, \"Why doesn't your argument rule out humans?\"\nSimilarly, a standard question that needs to be answered by anyone who deploys an argument against the possibility of superhuman general intelligence is, \"Why doesn't your argument rule out humans exhibiting pragmatically much greater intellectual performance than chimpanzees?\"\nSpecialized to this case, we'd ask, \"Why doesn't the fact that the smartest chimpanzees aren't building rockets let us infer that no human can walk on the Moon?\"\nNo human, not even John von Neumann, could have reinvented the gameplay of Go on their own and gone on to stomp the world's greatest Masters. AlphaGo Zero did so in three days. It's clear that in general, \"We can infer the bounds of cognitive power from the bounds of human variation\" is false. If there's supposed to be some special case of this rule which is true rather than false, and forbids superhuman AGI, that special case needs to be spelled out.\n \nIntelligence is not a superpower; exceptional intelligence does not, on its own, confer you with proportionally exceptional power over your circumstances.\n…said the Homo sapiens, surrounded by countless powerful artifacts whose abilities, let alone mechanisms, would be utterly incomprehensible to the organisms of any less intelligent Earthly species.\n \nA high-potential human 10,000 years ago would have been raised in a low-complexity environment, likely speaking a single language with fewer than 5,000 words, would never have been taught to read or write, would have been exposed to a limited amount of knowledge and to few cognitive challenges. The situation is a bit better for most contemporary humans, but there is no indication that our environmental opportunities currently outpace our cognitive potential.\nDoes this imply that technology should be no more advanced 100 years from today, than it is today? If not, in what sense have we taken every possible opportunity of our environment?\nIs the idea that opportunities can only be taken in sequence, one after another, so that today's technology only offers the possibilities of today's advances? Then why couldn't a more powerful intelligence run through them much faster, and rapidly build up those opportunities?\n \nA smart human raised in the jungle is but a hairless ape. Similarly, an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human. If it could, then exceptionally high-IQ humans would already be displaying proportionally exceptional levels of personal attainment; they would achieve exceptional levels of control over their environment, and solve major outstanding problems— which they don't in practice.\nIt can't eat the Internet? It can't eat the stock market? It can't crack the protein folding problem and deploy arbitrary biological systems? It can't get anything done by thinking a million times faster than we do? All this is to be inferred from observing that the smartest human was no more impressive than John von Neumann?\nI don't see the strong Bayesian evidence here. 
It seems easy to imagine worlds such that you can get a lot of pragmatically important stuff done if you have a brain 100 times the size of John von Neumann's, think a million times faster, and have maxed out and transcended every human cognitive talent and not just the mathy parts, and yet have the version of John von Neumann inside that world be no more impressive than we saw. How then do we infer from observing John von Neumann that we are not in such worlds?\nWe know that the rule of inferring bounds on cognition by looking at human maximums doesn't work on AlphaGo Zero. Why does it work to infer that \"An AGI can't eat the stock market because no human has eaten the stock market\"?\n \nHowever, these billions of brains, accumulating knowledge and developing external intelligent processes over thousands of years, implement a system — civilization — which may eventually lead to artificial brains with greater intelligence than that of a single human. It is civilization as a whole that will create superhuman AI, not you, nor me, nor any individual. A process involving countless humans, over timescales we can barely comprehend. A process involving far more externalized intelligence — books, computers, mathematics, science, the internet — than biological intelligence…\nWill the superhuman AIs of the future, developed collectively over centuries, have the capability to develop AI greater than themselves? No, no more than any of us can. \nThe premise is that brains of a particular size and composition that are running a particular kind of software (human brains) can only solve a problem X (which in this case is equal to \"build an AGI\") if they cooperate in a certain group size N and run for a certain amount of time and build Z amount of external cognitive prostheses. Okay. Humans were not especially specialized on the AI-building problem by natural selection. Why wouldn't an AGI with larger brains, running faster, using less insane software, containing its own high-speed programmable cognitive hardware to which it could interface directly in a high-bandwidth way, and perhaps specialized on computer programming in exactly the way that human brains aren't, get more done on net than human civilization? Human civilization tackling Go devoted a lot of thinking time, parallel search, and cognitive prostheses in the form of playbooks, and then AlphaGo Zero blew past it in three days, etcetera.\nTo sharpen this argument:\nWe may begin from the premise, \"For all problems X, if human civilization puts a lot of effort into X and gets as far as W, no single agency can get significantly further than W on its own,\" and from this premise deduce that no single AGI will be able to build a new AGI shortly after the first AGI is built.\nHowever, this premise is obviously false, as even Deep Blue bore witness. Is there supposed to be some special case of this generalization which is true rather than false, and says something about the 'build an AGI' problem which it does not say about the 'win a chess game' problem? Then what is that special case and why should we believe it?\nAlso relevant: In the game of Kasparov vs. The World, the world's best player Garry Kasparov played a single game against thousands of other players coordinated in an online forum, led by four chess masters. Garry Kasparov's brain eventually won, against thousands of times as much brain matter. 
This tells us something about the inefficiency of human scaling with simple parallelism of the nodes, presumably due to the inefficiency and low bandwidth of human speech separating the would-be arrayed brains. It says that you do not need a thousand times as much processing power as one human brain to defeat the parallel work of a thousand human brains. It is the sort of thing that can be done even by one human who is a little more talented and practiced than the components of that parallel array. Humans often just don't agglomerate very efficiently.\n \nHowever, future AIs, much like humans and the other intelligent systems we've produced so far, will contribute to our civilization, and our civilization, in turn, will use them to keep expanding the capabilities of the AIs it produces.\nThis takes in the premise \"AIs can only output a small amount of cognitive improvement in AI abilities\" and reaches the conclusion \"increase in AI capability will be a civilizationally diffuse process.\" I'm not sure that the conclusion follows, but would mostly dispute that the premise has been established by previous arguments. To put it another way, this particular argument does not contribute anything new to support \"AI cannot output much AI\", it just tries to reason further from that as a premise.\n \nOur problem-solving abilities (in particular, our ability to design AI) are already constantly improving, because these abilities do not reside primarily in our biological brains, but in our external, collective tools. The recursive loop has been in action for a long time, and the rise of \"better brains\" will not qualitatively affect it — no more than any previous intelligence-enhancing technology.\nFrom Arbital's Harmless supernova fallacy page:\n\nPrecedented, therefore harmless: \"Really, we've already had supernovas around for a while: there are already devices that produce 'super' amounts of heat by fusing elements low in the periodic table, and they're called thermonuclear weapons. Society has proven well able to regulate existing thermonuclear weapons and prevent them from being acquired by terrorists; there's no reason the same shouldn't be true of supernovas.\" (Noncentral fallacy / continuum fallacy: putting supernovas on a continuum with hydrogen bombs doesn't make them able to be handled by similar strategies, nor does finding a category such that it contains both supernovas and hydrogen bombs.)\n\n \nOur brains themselves were never a significant bottleneck in the AI-design process.\nA startling assertion. Let's say we could speed up AI-researcher brains by a factor of 1000 within some virtual uploaded environment, not permitting them to do new physics or biology experiments, but still giving them access to computers within the virtual world. Are we to suppose that AI development would take the same amount of sidereal time? I for one would expect the next version of Tensorflow to come out much sooner, even taking into account that most individual AI experiments would be less grandiose because the sped-up researchers would need those experiments to complete faster and use less computing power. The scaling loss would be less than total, just like adding CPUs a thousand times as fast to the current research environment would probably speed up progress by at most a factor of 5, not a factor of 1000. 
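One back-of-the-envelope way to see how a 1000x speedup of a single input can come out as something like 5x overall is an Amdahl's-law-style calculation; the 20% and 2% "everything else" fractions below are purely illustrative assumptions for this sketch, not figures taken from this discussion.

```python
def overall_speedup(fraction_not_sped_up, speedup_factor):
    # If a fraction s of the work cannot be sped up and the remaining (1 - s)
    # is accelerated by a factor k, the whole process speeds up by 1 / (s + (1 - s) / k).
    s = fraction_not_sped_up
    k = speedup_factor
    return 1 / (s + (1 - s) / k)

print(overall_speedup(0.20, 1000))  # ~5x overall if 20% of the work is not sped up
print(overall_speedup(0.02, 1000))  # ~48x overall if only 2% of the work is not sped up
```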
Similarly, with all those sped-up brains we might see progress increase only by a factor of 50 instead of 1000, but I'd still expect it to go a lot faster.\nThen in what sense are we not bottlenecked on the speed of human brains in order to build up our understanding of AI?\n \nCrucially, the civilization-level intelligence-improving loop has only resulted in measurably linear progress in our problem-solving abilities over time.\nI obviously don't consider myself a Kurzweilian, but even I have to object that this seems like an odd assertion to make about the past 10,000 years.\n \nWouldn't recursively improving X mathematically result in X growing exponentially? No — in short, because no complex real-world system can be modeled as `X(t + 1) = X(t) * a, a > 1)`. \nThis seems like a really odd assertion, refuted by a single glance at world GDP. Note that this can't be an isolated observation, because it also implies that every necessary input into world GDP is managing to keep up, and that every input which isn't managing to keep up has been economically bypassed at least with respect to recent history.\n \nWe don't have to speculate about whether an \"explosion\" would happen the moment an intelligent system starts optimizing its own intelligence. As it happens, most systems are recursively self-improving. We're surrounded with them… Mechatronics is recursively self-improving — better manufacturing robots can manufacture better manufacturing robots. Military empires are recursively self-expanding — the larger your empire, the greater your military means to expand it further. Personal investing is recursively self-improving — the more money you have, the more money you can make.\nIf we define \"recursive self-improvement\" to mean merely \"causal process containing at least one positive loop\" then the world abounds with such, that is true. It could still be worth distinguishing some feedback loops as going much faster than others: e.g., the cascade of neutrons in a nuclear weapon, or the cascade of information inside the transistors of a hypothetical seed AI. This seems like another instance of \"precedented therefore harmless\" within the harmless supernova fallacy.\n \nSoftware is just one cog in a bigger process — our economies, our lives — just like your brain is just one cog in a bigger process — human culture. This context puts a hard limit on the maximum potential usefulness of software, much like our environment puts a hard limit on how intelligent any individual can be — even if gifted with a superhuman brain.\n\"A chimpanzee is just one cog in a bigger process—the ecology. Why postulate some kind of weird superchimp that can expand its superchimp economy at vastly greater rates than the amount of chimp-food produced by the current ecology?\"\nConcretely, suppose an agent is smart enough to crack inverse protein structure prediction, i.e., it can build its own biology and whatever amount of post-biological molecular machinery is permitted by the laws of physics. In what sense is it still dependent on most of the economic outputs of the rest of human culture? Why wouldn't it just start building von Neumann machines?\n \nBeyond contextual hard limits, even if one part of a system has the ability to recursively self-improve, other parts of the system will inevitably start acting as bottlenecks. 
Antagonistic processes will arise in response to recursive self-improvement and squash it .\nSmart agents will try to deliberately bypass these bottlenecks and often succeed, which is why the world economy continues to grow at an exponential pace instead of having run out of wheat in 1200 CE. It continues to grow at an exponential pace despite even the antagonistic processes of… but I'd rather not divert this conversation into politics.\nNow to be sure, the smartest mind can't expand faster than light, and its exponential growth will bottleneck on running out of atoms and negentropy if we're remotely correct about the character of physical law. But to say that this is therefore no reason to worry would be the \"bounded, therefore harmless\" variant of the harmless supernova fallacy. A supernova isn't infinitely hot, but it's pretty darned hot and you can't survive one just by wearing a Nomex jumpsuit.\n \nWhen it comes to intelligence, inter-system communication arises as a brake on any improvement of underlying modules — a brain with smarter parts will have more trouble coordinating them;\nWhy doesn't this prove that humans can't be much smarter than chimps?\nWhat we can infer about the scaling laws that were governing human brains from the evolutionary record is a complicated topic. On this particular point I'd refer you to section 3.1, \"Returns on brain size\", pp. 35–39, in my semitechnical discussion of returns on cognitive investment. The conclusion there is that we can infer from the increase in equilibrium brain size over the last few million years of hominid history, plus the basic logic of population genetics, that over this time period there were increasing marginal returns to brain size with increasing time and presumably increasingly sophisticated neural 'software'. I also remark that human brains are not the only possible cognitive computing fabrics.\n \nIt is perhaps not a coincidence that very high-IQ people are more likely to suffer from certain mental illnesses.\nI'd expect very-high-IQ chimps to be more likely to suffer from some neurological disorders than typical chimps. This doesn't tell us that chimps are approaching the ultimate hard limit of intelligence, beyond which you can't scale without going insane. It tells us that if you take any biological system and try to operate under conditions outside the typical ancestral case, it is more likely to break down. Very-high-IQ humans are not the typical humans that natural selection has selected-for as normal operating conditions.\n \nYet, modern scientific progress is measurably linear. I wrote about this phenomenon at length in a 2012 essay titled \"The Singularity is not coming\". We didn't make greater progress in physics over the 1950–2000 period than we did over 1900–1950 — we did, arguably, about as well. Mathematics is not advancing significantly faster today than it did in 1920. Medical science has been making linear progress on essentially all of its metrics, for decades.\nI broadly agree with respect to recent history. I tend to see this as an artifact of human bureaucracies shooting themselves in the foot in a way that I would not expect to apply within a single unified agent.\nIt's possible we're reaching the end of available fruit in our finite supply of physics. 
This doesn't mean our present material technology could compete with the limits of possible material technology, which would at the very least include whatever biology-machine hybrid systems could be rapidly manufactured given the limits of mastery of biochemistry.\n \nAs scientific knowledge expands, the time and effort that have to be invested in education and training grows, and the field of inquiry of individual researchers gets increasingly narrow.\nOur brains don't scale to hold it all, and every time a new human is born you have to start over from scratch instead of copying and pasting the knowledge. It does not seem to me like a slam-dunk to generalize from the squishy little brains yelling at each other to infer the scaling laws of arbitrary cognitive computing fabrics.\n \nIntelligence is situational — there is no such thing as general intelligence. Your brain is one piece in a broader system which includes your body, your environment, other humans, and culture as a whole.\nTrue of chimps; didn't stop humans from being much smarter than chimps.\n \nNo system exists in a vacuum; any individual intelligence will always be both defined and limited by the context of its existence, by its environment.\nTrue of mice; didn't stop humans from being much smarter than mice.\nPart of the argument above was, as I would perhaps unfairly summarize it, \"There is no sense in which a human is absolutely smarter than an octopus.\" Okay, but pragmatically speaking, we have nuclear weapons and octopodes don't. A similar pragmatic capability gap between humans and unaligned AGIs seems like a matter of legitimate concern. If you don't want to call that an intelligence gap then call it what you like.\n \nCurrently, our environment, not our brain, is acting as the bottleneck to our intelligence.\nI don't see what observation about our present world licenses the conclusion that speeding up brains tenfold would produce no change in the rate of technological advancement.\n \nHuman intelligence is largely externalized, contained not in our brain but in our civilization. We are our tools — our brains are modules in a cognitive system much larger than ourselves.\nWhat about this fact is supposed to imply slower progress by an AGI that has a continuous, high-bandwidth interaction with its own onboard cognitive tools?\n \nA system that is already self-improving, and has been for a long time.\nTrue if we redefine \"self-improving\" as \"any positive feedback loop whatsoever\". A nuclear fission weapon is also a positive feedback loop in neutrons triggering the release of more neutrons. The elements of this system interact on a much faster timescale than human neurons fire, and thus the overall process goes pretty fast on our own subjective timescale. I don't recommend standing next to one when it goes off.\n \nRecursively self-improving systems, because of contingent bottlenecks, diminishing returns, and counter-reactions arising from the broader context in which they exist, cannot achieve exponential progress in practice. 
Empirically, they tend to display linear or sigmoidal improvement.\nFalsified by a graph of world GDP on almost any timescale.\n \nIn particular, this is the case for scientific progress — science being possibly the closest system to a recursively self-improving AI that we can observe.\nI think we're mostly just doing science wrong, but that would be a much longer discussion.\nFits-on-a-T-Shirt rejoinders would include \"Why think we're at the upper bound of being-good-at-science any more than chimps were?\"\n \nRecursive intelligence expansion is already happening — at the level of our civilization. It will keep happening in the age of AI, and it progresses at a roughly linear pace.\nIf this were to be true, I don't think it would be established by the arguments given.\nMuch of this debate has previously been reprised by myself and Robin Hanson in the \"AI Foom Debate.\" I expect that even Robin Hanson, who was broadly opposing my side of this debate, would have a coughing fit over the idea that progress within all systems is confined to a roughly linear pace.\nFor more reading I recommend my own semitechnical essay on what our current observations can tell us about the scaling of cognitive systems with increasing resources and increasing optimization, \"Intelligence Explosion Microeconomics.\"\nThe post A reply to Francois Chollet on intelligence explosion appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "A reply to Francois Chollet on intelligence explosion", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=18", "id": "52851f6216e103ff9e470c3f33bc4b3f"} {"text": "December 2017 Newsletter\n\n \nOur annual fundraiser is live. Discussed in the fundraiser post:\n\nNews  — What MIRI's researchers have been working on lately, and more.\nGoals — We plan to grow our research team 2x in 2018–2019. If we raise $850k this month, we think we can do that without dipping below a 1.5-year runway.\nActual goals — A bigger-picture outline of what we think is the likeliest sequence of events that could lead to good global outcomes.\n\nOur funding drive will be running until December 31st.\n \nResearch updates\n\nNew at IAFF: Reward Learning Summary; Reflective Oracles as a Solution to the Converse Lawvere Problem; Policy Selection Solves Most Problems\nWe ran a workshop on Paul Christiano's research agenda.\nWe've hired the first members of our new engineering team, including math PhD Jesse Liptrap and former Quixey lead architect Nick Tarleton! If you'd like to join the team, apply here!\nI'm also happy to announce that Blake Borgeson has come on in an advisory role to help establish our engineering program. Blake is a Nature-published computational biologist who co-founded Recursion Pharmaceuticals, where he leads the biotech company's machine learning work.\n\n \nGeneral updates\n\n\"Security Mindset and Ordinary Paranoia\": a new dialogue from Eliezer Yudkowsky. See also part 2.\nThe Open Philanthropy Project has awarded MIRI a three-year, $3.75 million grant!\nWe received $48,132 in donations during Facebook's Giving Tuesday event, of which $11,371 — from speedy donors who made the event's 85-second cutoff! — will be matched by the Bill and Melinda Gates Foundation.\nMIRI Staff Writer Matthew Graves has published an article in Skeptic magazine: \"Why We Should Be Concerned About Artificial Superintelligence.\"\nYudkowsky's new book Inadequate Equilibria is out! 
See other recent discussion of modest epistemology and inadequacy analysis by Scott Aaronson, Robin Hanson, Abram Demski, Gregory Lewis, and Scott Alexander.\n\n \nThe post December 2017 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "December 2017 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=19", "id": "55ab4b0954d47e51583ef53111792062"} {"text": "MIRI's 2017 Fundraiser\n\nUpdate 2017-12-27: We've blown past our 3rd and final target, and reached the matching cap of $300,000 for the $2 million Matching Challenge! Thanks so much to everyone who supported us!\nAll donations made before 23:59 PST on Dec 31st will continue to be counted towards our fundraiser total. The fundraiser total includes projected matching funds from the Challenge.\n\n \nMIRI's 2017 fundraiser is live through the end of December! Our progress so far (updated live):\n \n\n\n\n\nTarget 1$625,000CompletedTarget 2$850,000CompletedTarget 3$1,250,000Completed\n\n$2,504,625 raised in total!\n358 donors contributed\n\n\n\n×\nTarget Descriptions\n\n\n\nTarget 1\nTarget 2\nTarget 3\n\n\n\n$625k: Basic target\nAt this funding level, we'll be in a good position to pursue our mainline hiring goal in 2018, although we will likely need to halt or slow our growth in 2019.\n\n\n$850k: Mainline-growth target\nAt this level, we'll be on track to fully fund our planned expansion over the next few years, allowing us to roughly double the number of research staff over the course of 2018 and 2019.\n\n\n$1.25M: Rapid-growth target\nAt this funding level, we will be on track to maintain a 1.5-year runway even if our hiring proceeds a fair amount faster than our mainline prediction. We'll also have greater freedom to pay higher salaries to top-tier candidates as needed.\n\n\n\n\n\n\n\n\nDonate Now\n\n\n\n \nMIRI is a research nonprofit based in Berkeley, California with a mission of ensuring that smarter-than-human AI technology has a positive impact on the world. You can learn more about our work at \"Why AI Safety?\" or via MIRI Executive Director Nate Soares' Google talk on AI alignment.\nIn 2015, we discussed our interest in potentially branching out to explore multiple research programs simultaneously once we could support a larger team. Following recent changes to our overall picture of the strategic landscape, we're now moving ahead on that goal and starting to explore new research directions while also continuing to push on our agent foundations agenda. For more on our new views, see \"There's No Fire Alarm for Artificial General Intelligence\" and our 2017 strategic update. We plan to expand on our relevant strategic thinking more in the coming weeks.\nOur expanded research focus means that our research team can potentially grow big, and grow fast. Our current goal is to hire around ten new research staff over the next two years, mostly software engineers. 
If we succeed, our point estimate is that our 2018 budget will be $2.8M and our 2019 budget will be $3.5M, up from roughly $1.9M in 2017.1\nWe've set our fundraiser targets by estimating how quickly we could grow while maintaining a 1.5-year runway, on the simplifying assumption that about 1/3 of the donations we receive between now and the beginning of 2019 will come during our current fundraiser.2\nHitting Target 1 ($625k) then lets us act on our growth plans in 2018 (but not in 2019); Target 2 ($850k) lets us act on our full two-year growth plan; and in the case where our hiring goes better than expected, Target 3 ($1.25M) would allow us to add new members to our team about twice as quickly, or pay higher salaries for new research staff as needed.\nWe discuss more details below, both in terms of our current organizational activities and how we see our work fitting into the larger strategy space.\n \n\n            What's new at MIRI              |              Fundraising goals              |              Strategic background\n \nWhat's new at MIRI\nNew developments this year have included:\n\n\nThe release of Eliezer Yudkowsky's Inadequate Equilibria: Where and How Civilizations Get Stuck, a book on systemic failure, outperformance, and epistemology.\n\n\nNew introductory material on decision theory: \"Functional Decision Theory,\" \"Cheating Death in Damascus,\" and \"Decisions Are For Making Bad Outcomes Inconsistent.\"\n\n\nExtremely generous new support for our research in the form of a one-time $1.01 million donation from a cryptocurrency investor and a three-year $3.75 million grant from the Open Philanthropy Project.3\n\n\nThanks in part to this major support, we're currently in a position to scale up the research team quickly if we can find suitable hires. We intend to explore a variety of new research avenues going forward, including making a stronger push to experiment and explore some ideas in implementation.4 This means that we're currently interested in hiring exceptional software engineers, particularly ones with machine learning experience.\nThe two primary things we're looking for in software engineers are programming ability and value alignment. Since we're a nonprofit, it's also worth noting explicitly that we're generally happy to pay excellent research team applicants with the relevant skills whatever salary they would need to work at MIRI. If you think you'd like to work with us, apply here!\nIn that vein, I'm pleased to announce that we've made our first round of hires for our engineer positions, including:\n\nJesse Liptrap, who previously worked on the Knowledge Graph at Google for four years, and as a bioinformatician at UC Berkeley. Jesse holds a PhD in mathematics from UC Santa Barbara, where he studied category-theoretic underpinnings of topological quantum computing.\nNick Tarleton, former lead architect at the search startup Quixey. He previously studied computer science and decision science at Carnegie Mellon University, and Nick worked with us at the first iteration of our summer fellows program, studying consequences of proposed AI goal systems.\n\nOn the whole, our initial hiring efforts have gone quite well, and I've been very impressed with the high caliber of our hires and of our pool of candidates.\nOn the research side, our recent work has focused heavily on open problems in decision theory, and on other questions related to naturalized agency. 
Scott Garrabrant divides our recent work on the agent foundations agenda into four categories, tackling different AI alignment subproblems:\n\n\n\nDecision theory — Traditional models of decision-making assume a sharp Cartesian boundary between agents and their environment. In a naturalized setting in which agents are embedded in their environment, however, traditional approaches break down, forcing us to formalize concepts like \"counterfactuals\" that can be left implicit in AIXI-like frameworks.\n\n  More\n\n\nRecent focus areas:\n\nAs Rob noted in April, \"a common thread in our recent work is that we're using probability and topological fixed points in settings where we used to use provability. This means working with (and improving) logical inductors and reflective oracles.\" Examples of applications of logical induction to decision theory include logical inductor evidential decision theory (\"Prediction Based Robust Cooperation,\" \"Two Major Obstacles for Logical Inductor Decision Theory\") and asymptotic decision theory (\"An Approach to Logically Updateless Decisions,\" \"Where Does ADT Go Wrong?\").\nUnpacking the notion of updatelessness into pieces that we can better understand, e.g., in \"Conditioning on Conditionals,\" \"Logical Updatelessness as a Robust Delegation Problem,\" \"The Happy Dance Problem.\"\nThe relationship between decision theories that rely on Bayesian conditionalization on the one hand (e.g., evidential decision theory and Wei Dai's updateless decision theory), and ones that rely on counterfactuals on the other (e.g., causal decision theory, timeless decision theory, and the version of functional decision theory discussed in Yudkowsky and Soares (2017)): \"Smoking Lesion Steelman,\" \"Comparing LICDT and LIEDT.\"\nLines of research relating to correlated equilibria, such as \"A Correlated Analogue of Reflective Oracles\" and \"Smoking Lesion Steelman II.\"\nThe Converse Lawvere Problem (1, 2, 3): \"Does there exist a topological space X (in some convenient category of topological spaces) such that there exists a continuous surjection from X to the space [0,1]X (of continuous functions from X to [0,1])?\"\nMulti-agent coordination problems, often using the \"Cooperative Oracles\" framework.\n\n\n\n\n\n\n\nNaturalized world-models — Similar issues arise for formalizing how systems model the world in the absence of a sharp agent/environment boundary. Traditional models leave implicit aspects of \"good reasoning\" such as causal and multi-level world-modeling, reasoning under deductive limitations, and agents modeling themselves.\n\n  More\n\n\nRecent focus areas:\n\nKakutani's fixed-point theorem and reflective oracles: \"Hyperreal Brouwer.\"\nTransparency and merging of opinions in logical inductors.\nOntology merging, a possible approach to reasoning about ontological crises and transparency.\nAttempting to devise a variant of logical induction that is \"Bayesian\" in the sense that its belief states can be readily understood as conditionalized prior probability distributions.\n\n\n\n\n\n\n\nSubsystem alignment — A key reason that agent/environment boundaries are unhelpful for thinking about AGI is that a given AGI system may consist of many different subprocesses optimizing many different goals or subgoals. The boundary between different \"agents\" may be ill-defined, and a given optimization process is likely to construct subprocesses that pursue many different goals. 
Addressing this risk requires limiting the ways in which new optimization subprocesses arise in the system.\n\n  More\n\n\nRecent focus areas:\n\nBenign induction: \"Maximally Efficient Agents Will Probably Have an Anti-Daemon Immune System.\"\nWork related to KWIK learning: \"Some Problems with Making Induction Benign, and Approaches to Them\" and \"How Likely Is A Random AGI To Be Honest?\"\n\n\n\n\n\n\n\nRobust delegation — In cases where it's desirable to delegate to another agent (e.g. an AI system or a successor), it's critical that the agent be well-aligned and trusted to perform specified tasks. The value learning problem and most of the AAMLS agenda fall in this category.\n\n  More\n\n\nRecent focus areas:\n\nGoodhart's Curse, \"the combination of the Optimizer's Curse and Goodhart's Law\" stating that \"a powerful agent neutrally optimizing a proxy measure U that we hoped to align with true values V, will implicitly seek out upward divergences of U from V\": \"The Three Levels of Goodhart's Curse.\"\nCorrigibility: \"Corrigibility Thoughts,\" \"All the Indifference Designs.\"\nValue learning and inverse reinforcement learning: \"Incorrigibility in the CIRL Framework,\" \"Reward Learning Summary.\"\nThe reward hacking problem: \"Stable Pointers to Value: An Agent Embedded In Its Own Utility Function.\"\n\n\n\n\n\n\nAdditionally, we ran several research workshops, including one focused on Paul Christiano's research agenda.\n \nFundraising goals\nTo a first approximation, we view our ability to make productive use of additional dollars in the near future as linear in research personnel additions. We don't expect to run out of additional top-priority work we can assign to highly motivated and skilled researchers and engineers. This represents an important shift from our past budget and team size goals.5\nGrowing our team as much as we hope to is by no means an easy hiring problem, but it's made significantly easier by the fact that we're now looking for top software engineers who can help implement experiments we want to run, and not just productive pure researchers who can work with a high degree of independence. (In whom we are, of course, still very interested!) We therefore think we can expand relatively quickly over the next two years (productively!), funds allowing.\nIn our mainline growth scenario, our reserves plus next year's $1.25M installment of the Open Philanthropy Project's 3-year grant would leave us with around 9 months of runway going into 2019. However, we have substantial uncertainty about exactly how quickly we'll be able to hire additional researchers and engineers, and therefore about our 2018–2019 budgets.\nOur 2018 budget breakdown in the mainline success case looks roughly like this:\n\n2018 Budget Estimate (Mainline Growth)\n\n\nTo determine our fundraising targets this year, we estimated the support levels (above the Open Philanthropy Project's support) that would make us reasonably confident that we can maintain a 1.5-year runway going into 2019 in different growth scenarios, assuming that our 2017 fundraiser looks similar to next year's fundraiser and that our off-fundraiser donor support looks similar to our on-fundraiser support:\n\nBasic target — $625,000. At this funding level, we'll be in a good position to pursue our mainline hiring goal in 2018, although we will likely need to halt or slow our growth in 2019.\n\nMainline-growth target — $850,000. 
At this level, we'll be on track to fully fund our planned expansion over the next few years, allowing us to roughly double the number of research staff over the course of 2018 and 2019.\n\nRapid-growth target — $1,250,000. At this funding level, we will be on track to maintain a 1.5-year runway even if our hiring proceeds a fair amount faster than our mainline prediction. We'll also have greater freedom to pay higher salaries to top-tier candidates as needed.\n\nBeyond these growth targets: if we saw an order-of-magnitude increase in MIRI's funding in the near future, we have several ways we believe we can significantly accelerate our recruitment efforts to grow the team faster. These include competitively paid trial periods and increased hiring outreach across venues and communities where we expect to find high-caliber candidates. Funding increases beyond the point where we could usefully use the money to hire faster would likely cause us to spin off new initiatives to address the problem of AI x-risk from other angles; we wouldn't expect them to go to MIRI's current programs.\nOn the whole, we're in a very good position to continue expanding, and we're enormously grateful for the generous support we've already received this year. Relative to our present size, MIRI's reserves are much more solid than they have been in the past, putting us in a strong position going into 2018.\nGiven our longer runway, this may be a better year than usual for long-time MIRI supporters to consider supporting other projects that have been waiting in the wings. That said, we don't personally know of marginal places to put additional dollars that we currently view as higher-value than MIRI, and we do expect our fundraiser performance to affect our growth over the next two years, particularly if we succeed in growing the MIRI team as fast as we're hoping to.\n\n\nDonate Now\n\n\n \nStrategic background\nTaking a step back from our immediate organizational plans: how does MIRI see the work we're doing as tying into positive long-term, large-scale outcomes?\nA lot of our thinking on these issues hasn't yet been written up in any detail, and many of the issues involved are topics of active discussion among people working on existential risk from AGI. In very broad terms, however, our approach to global risk mitigation is to think in terms of desired outcomes, and to ask: \"What is the likeliest way that the outcome in question might occur?\" We then repeat this process until we backchain to interventions that actors can take today.\nIgnoring a large number of subtleties, our view of the world's strategic situation currently breaks down as follows:\n\n\n\n\n1. Long-run good outcomes. Ultimately, we want humanity to figure out the best possible long-run future and enact that kind of future, factoring in good outcomes for all sentient beings. However, there is currently very little we can say with confidence about what desirable long-term outcomes look like, or how best to achieve them; and if someone rushes to lock in a particular conception of \"the best possible long-run future,\" they're likely to make catastrophic mistakes both in how they envision that goal and in how they implement it.\n\tIn order to avoid making critical decisions in haste and locking in flawed conclusions, humanity needs:\n\n\n2. A stable period during which relevant actors can accumulate whatever capabilities and knowledge are required to reach robustly good conclusions about long-run outcomes. 
This might involve decisionmakers developing better judgment, insight, and reasoning skills in the future, solving the full alignment problem for fully autonomous AGI systems, and so on.\n\tGiven the difficulty of the task, we expect a successful stable period to require:\n\n\n3. A preceding end to the acute risk period. If AGI carries a significant chance of causing an existential catastrophe over the next few decades, this forces a response under time pressure; but if actors attempt to make irreversible decisions about the long-term future under strong time pressure, we expect the result to be catastrophically bad. Conditioning on good outcomes, we therefore expect a two-step process where addressing acute existential risks takes temporal priority.\n\tTo end the acute risk period, we expect it to be necessary for actors to make use of:\n\n\n4. A risk-mitigating technology. On our current view of the technological landscape, there are a number of plausible future technologies that could be leveraged to end the acute risk period.\n\tWe believe that the likeliest way to achieve a technology in this category sufficiently soon is through:\n\n\n5. AGI-empowered technological development carried out by task-directed AGI systems. Depending on early AGI systems' level of capital-intensiveness, on whether AGI is a late-paradigm or early-paradigm invention, and on a number of other factors, AGI might be developed by anything from a small Silicon Valley startup to a large-scale multinational collaboration. Regardless, we expect AGI to be developed before any other (meta)technology that can be employed to end the acute risk period, and if early AGI systems can be used safely at all, then we expect it to be possible for an AI-empowered project to safely automate a reasonably small set of concrete science and engineering tasks that are sufficient for ending the risk period. This requires:\n\n6. Construction of minimal aligned AGI. We specify \"minimal\" because we consider success much more likely if developers attempt to build systems with the bare minimum of capabilities for ending the acute risk period. We expect AGI alignment to be highly difficult, and we expect additional capabilities to add substantially to this difficulty.\n Added: \"Minimal aligned AGI\" means \"aligned AGI that has the minimal necessary capabilities\"; be sure not to misread it as \"minimally aligned AGI\". Rob Bensinger adds: \"The MIRI view isn't 'rather than making alignment your top priority and working really hard to over-engineer your system for safety, try to build a system with the bare minimum of capabilities'. It's: 'in addition to making alignment your top priority and working really hard to over-engineer your system for safety, also build the system to have the bare minimum of capabilities'.\"\n\tIf an aligned system of this kind were developed, we would expect two factors to be responsible:\n\n\n\n\n\n\n7a. A technological edge in AGI by an operationally adequate project. By \"operationally adequate\" we mean a project with strong opsec, research closure, trustworthy command, a commitment to the common good, security mindset, requisite resource levels, and heavy prioritization of alignment work. A project like this needs to have a large enough lead to be able to afford to spend a substantial amount of time on safety measures, as discussed at FLI's Asilomar conference.\n\n\n\n7b. A strong white-boxed system understanding on the part of the operationally adequate project during late AGI development. 
By this we mean that developers go into building AGI systems with a good understanding of how their systems decompose and solve particular cognitive problems, of the kinds of problems different parts of the system are working on, and of how all of the parts of the system interact.\n\tOn our current understanding of the alignment problem, developers need to be able to give a reasonable account of how all of the AGI-grade computation in their system is being allocated, similar to how secure software systems are built to allow security professionals to give a simple accounting of why the system has no unforeseen vulnerabilities. See \"Security Mindset and Ordinary Paranoia\" for more details.\nDevelopers must be able to explicitly state and check all of the basic assumptions required for their account of the system's alignment and effectiveness to hold. Additionally, they need to design and modify AGI systems only in ways that preserve understandability — that is, only allow system modifications that preserve developers' ability to generate full accounts of what cognitive problems any given slice of the system is solving, and why the interaction of all of the system's parts is both safe and effective.\nOur view is that this kind of system understandability will in turn require:\n\n\n8. Steering toward alignment-conducive AGI approaches. Leading AGI researchers and developers need to deliberately direct research efforts toward ensuring that the earliest AGI designs are relatively easy to understand and align.\n\tWe expect this to be a critical step, as we do not expect most approaches to AGI to be alignable after the fact without long, multi-year delays.\n\n\n\n \nWe plan to say more in the future about the criteria for operationally adequate projects in 7a. We do not believe that any project meeting all of these conditions currently exists, though we see various ways that projects could reach this threshold.\nThe above breakdown only discusses what we view as the \"mainline\" success scenario.6 If we condition on good long-run outcomes, the most plausible explanation we can come up with cites an operationally adequate AI-empowered project ending the acute risk period, and appeals to the fact that those future AGI developers maintained a strong understanding of their system's problem-solving work over the course of development, made use of advance knowledge about which AGI approaches conduce to that kind of understanding, and filtered on those approaches.\nFor that reason, MIRI does research to intervene on 8 from various angles, such as by examining holes and anomalies in the field's current understanding of real-world reasoning and decision-making. We hope to thereby reduce our own confusion about alignment-conducive AGI approaches and ultimately help make it feasible for developers to construct adequate \"safety-stories\" in an alignment setting. As we improve our understanding of the alignment problem, our aim is to share new insights and techniques with leading or up-and-coming developer groups, who we're generally on good terms with.\nA number of the points above require further explanation and motivation, and we'll be providing more details on our view of the strategic landscape in the near future.\nFurther questions are always welcome at , regarding our current organizational activities and plans as well as the long-term role we hope to play in giving AGI developers an easier and clearer shot at making the first AGI systems robust and safe. 
For more details on our fundraiser, including corporate matching, see our Donate page.\n \nNote that this $1.9M is significantly below the $2.1–2.5M we predicted for the year in April. Personnel costs are MIRI's most significant expense, and higher research staff turnover in 2017 meant that we had fewer net additions to the team this year than we'd budgeted for. We went under budget by a relatively small margin in 2016, spending $1.73M versus a predicted $1.83M.\nOur 2018–2019 budget estimates are highly uncertain, with most of the uncertainty coming from substantial uncertainty about how quickly we'll be able to take on new research staff.\nThis is roughly in line with our experience in previous years, when excluding expected grants and large surprise one-time donations. We've accounted for the former in our targets but not the latter, since we think it unwise to bank on unpredictable windfalls.\nNote that in previous years, we've set targets based on maintaining a 1-year runway. Given the increase in our size, I now think that a 1.5-year runway is more appropriate.\nIncluding the $1.01 million donation and the first $1.25 million from the Open Philanthropy Project, we have so far raised around $3.16 million this year, overshooting the $3 million goal we set earlier this year!\nWe emphasize that, as always, \"experiment\" means \"most things tried don't work.\" We'd like to avoid setting expectations of immediate success for this exploratory push.\nOur previous goal was to slowly ramp up to the $3–4 million level and then hold steady with around 13–17 research staff. We now expect to be able to reach (and surpass) that level much more quickly.\nThere are other paths to good outcomes that we view as lower-probability, but still sufficiently high-probability that the global community should allocate marginal resources to their pursuit.\nThe post MIRI's 2017 Fundraiser appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "MIRI’s 2017 Fundraiser", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=19", "id": "1210ecbb9ec1e3b2d9383e9dfbbc42e8"} {"text": "Security Mindset and the Logistic Success Curve\n\nFollow-up to:   Security Mindset and Ordinary Paranoia\n\n\n \n(Two days later, Amber returns with another question.)\n \nAMBER:  Uh, say, Coral. How important is security mindset when you're building a whole new kind of system—say, one subject to potentially adverse optimization pressures, where you want it to have some sort of robustness property?\nCORAL:  How novel is the system?\nAMBER:  Very novel.\nCORAL:  Novel enough that you'd have to invent your own new best practices instead of looking them up?\nAMBER:  Right.\nCORAL:  That's serious business. If you're building a very simple Internet-connected system, maybe a smart ordinary paranoid could look up how we usually guard against adversaries, use as much off-the-shelf software as possible that was checked over by real security professionals, and not do too horribly. But if you're doing something qualitatively new and complicated that has to be robust against adverse optimization, well… mostly I'd think you were operating in almost impossibly dangerous territory, and I'd advise you to figure out what to do after your first try failed.
But if you wanted to actually succeed, ordinary paranoia absolutely would not do it.\nAMBER:  In other words, projects to build novel mission-critical systems ought to have advisors with the full security mindset, so that the advisor can say what the system builders really need to do to ensure security.\nCORAL:  (laughs sadly)  No.\nAMBER:  No?\n\nCORAL:  Let's say for the sake of concreteness that you want to build a new kind of secure operating system. That is not the sort of thing you can do by attaching one advisor with security mindset, who has limited political capital to use to try to argue people into doing things. \"Building a house when you're only allowed to touch the bricks using tweezers\" comes to mind as a metaphor. You're going to need experienced security professionals working full-time with high authority. Three of them, one of whom is a cofounder. Although even then, we might still be operating in the territory of Paul Graham's Design Paradox.\nAMBER:  Design Paradox? What's that?\nCORAL:  Paul Graham's Design Paradox is that people who have good taste in UIs can tell when other people are designing good UIs, but most CEOs of big companies lack the good taste to tell who else has good taste. And that's why big companies can't just hire other people as talented as Steve Jobs to build nice things for them, even though Steve Jobs certainly wasn't the best possible designer on the planet. Apple existed because of a lucky history where Steve Jobs ended up in charge. There's no way for Samsung to hire somebody else with equal talents, because Samsung would just end up with some guy in a suit who was good at pretending to be Steve Jobs in front of a CEO who couldn't tell the difference.\nSimilarly, people with security mindset can notice when other people lack it, but I'd worry that an ordinary paranoid would have a hard time telling the difference, which would make it hard for them to hire a truly competent advisor. And of course lots of the people in the larger social system behind technology projects lack even the ordinary paranoia that many good programmers possess, and they just end up with empty suits talking a lot about \"risk\" and \"safety\". In other words, if we're talking about something as hard as building a secure operating system, and your project hasn't started up already headed up by someone with the full security mindset, you are in trouble. Where by \"in trouble\" I mean \"totally, irretrievably doomed\".\nAMBER:  Look, uh, there's a certain project I'm invested in which has raised a hundred million dollars to create merchant drones.\nCORAL:  Merchant drones?\nAMBER:  So there are a lot of countries that have poor market infrastructure, and the idea is, we're going to make drones that fly around buying and selling things, and they'll use machine learning to figure out what prices to pay and so on. We're not just in it for the money; we think it could be a huge economic boost to those countries, really help them move forwards.\nCORAL:  Dear God. Okay. There are exactly two things your company is about: system security, and regulatory compliance. Well, and also marketing, but that doesn't count because every company is about marketing. 
It would be a severe error to imagine that your company is about anything else, such as drone hardware or machine learning.\nAMBER:  Well, the sentiment inside the company is that the time to begin thinking about legalities and security will be after we've proven we can build a prototype and have at least a small pilot market in progress. I mean, until we know how people are using the system and how the software ends up working, it's hard to see how we could do any productive thinking about security or compliance that wouldn't just be pure speculation.\nCORAL:  Ha! Ha, hahaha… oh my god you're not joking.\nAMBER:  What?\nCORAL:  Please tell me that what you actually mean is that you have a security and regulatory roadmap which calls for you to do some of your work later, but clearly lays out what work needs to be done, when you are to start doing it, and when each milestone needs to be complete. Surely you don't literally mean that you intend to start thinking about it later?\nAMBER:  A lot of times at lunch we talk about how annoying it is that we'll have to deal with regulations and how much better it would be if governments were more libertarian. That counts as thinking about it, right?\nCORAL:  Oh my god.\nAMBER:  I don't see how we could have a security plan when we don't know exactly what we'll be securing. Wouldn't the plan just turn out to be wrong?\nCORAL:  All business plans for startups turn out to be wrong, but you still need them—and not just as works of fiction. They represent the written form of your current beliefs about your key assumptions. Writing down your business plan checks whether your current beliefs can possibly be coherent, and suggests which critical beliefs to test first, and which results should set off alarms, and when you are falling behind key survival thresholds. The idea isn't that you stick to the business plan; it's that having a business plan (a) checks that it seems possible to succeed in any way whatsoever, and (b) tells you when one of your beliefs is being falsified so you can explicitly change the plan and adapt. Having a written plan that you intend to rapidly revise in the face of new information is one thing. NOT HAVING A PLAN is another.\nAMBER:  The thing is, I am a little worried that the head of the project, Mr. Topaz, isn't concerned enough about the possibility of somebody fooling the drones into giving out money when they shouldn't. I mean, I've tried to raise that concern, but he says that of course we're not going to program the drones to give out money to just anyone. Can you maybe give him a few tips? For when it comes time to start thinking about security, I mean.\nCORAL:  Oh. Oh, my dear, sweet summer child, I'm sorry. There's nothing I can do for you.\nAMBER:  Huh? But you haven't even looked at our beautiful business model!\nCORAL:  I thought maybe your company merely had a hopeless case of underestimated difficulties and misplaced priorities. But now it sounds like your leader is not even using ordinary paranoia, and reacts with skepticism to it. Calling a case like that \"hopeless\" would be an understatement.\nAMBER:  But a security failure would be very bad for the countries we're trying to help! They need secure merchant drones!\nCORAL:  Then they will need drones built by some project that is not led by Mr. Topaz.\nAMBER:  But that seems very hard to arrange!\nCORAL:  …I don't understand what you are saying that is supposed to contradict anything I am saying.\nAMBER:  Look, aren't you judging Mr. Topaz a little too quickly? 
Seriously.\nCORAL:  I haven't met him, so it's possible you misrepresented him to me. But if you've accurately represented his attitude? Then, yes, I did judge quickly, but it's a hell of a good guess. Security mindset is already rare on priors. \"I don't plan to make my drones give away money to random people\" means he's imagining how his system could work as he intends, instead of imagining how it might not work as he intends. If somebody doesn't even exhibit ordinary paranoia, spontaneously on their own cognizance without external prompting, then they cannot do security, period. Reacting indignantly to the suggestion that something might go wrong is even beyond that level of hopelessness, but the base level was hopeless enough already.\nAMBER:  Look… can you just go to Mr. Topaz and try to tell him what he needs to do to add some security onto his drones? Just try? Because it's super important.\nCORAL:  I could try, yes. I can't succeed, but I could try.\nAMBER:  Oh, but please be careful to not be harsh with him. Don't put the focus on what he's doing wrong—and try to make it clear that these problems aren't too serious. He's been put off by the media alarmism surrounding apocalyptic scenarios with armies of evil drones filling the sky, and it took me some trouble to convince him that I wasn't just another alarmist full of fanciful catastrophe scenarios of drones defying their own programming.\nCORAL:  …\nAMBER:  And maybe try to keep your opening conversation away from what might sound like crazy edge cases, like somebody forgetting to check the end of a buffer and an adversary throwing in a huge string of characters that overwrite the end of the stack with a return address that jumps to a section of code somewhere else in the system that does something the adversary wants. I mean, you've convinced me that these far-fetched scenarios are worth worrying about, if only because they might be canaries in the coal mine for more realistic failure modes. But Mr. Topaz thinks that's all a bit silly, and I don't think you should open by trying to explain to him on a meta level why it isn't. He'd probably think you were being condescending, telling him how to think. Especially when you're just an operating-systems guy and you have no experience building drones and seeing what actually makes them crash. I mean, that's what I think he'd say to you.\nCORAL:  …\nAMBER:  Also, start with the cheaper interventions when you're giving advice. I don't think Mr. Topaz is going to react well if you tell him that he needs to start all over in another programming language, or establish a review board for all code changes, or whatever. He's worried about competitors reaching the market first, so he doesn't want to do anything that will slow him down.\nCORAL:  …\nAMBER:  Uh, Coral?\nCORAL:  … on his novel project, entering new territory, doing things not exactly like what has been done before, carrying out novel mission-critical subtasks for which there are no standardized best security practices, nor any known understanding of what makes the system robust or not-robust.\nAMBER:  Right!\nCORAL:  And Mr. Topaz himself does not seem much terrified of this terrifying task before him.\nAMBER:  Well, he's worried about somebody else making merchant drones first and misusing this key economic infrastructure for bad purposes. That's the same basic thing, right? Like, it demonstrates that he can worry about things?\nCORAL:  It is utterly different. 
Monkeys who can be afraid of other monkeys getting to the bananas first are far, far more common than monkeys who worry about whether the bananas will exhibit weird system behaviors in the face of adverse optimization.\nAMBER:  Oh.\nCORAL:  I'm afraid it is only slightly more probable that Mr. Topaz will oversee the creation of robust software than that the Moon will spontaneously transform into organically farmed goat cheese.\nAMBER:  I think you're being too harsh on him. I've met Mr. Topaz, and he seemed pretty bright to me.\nCORAL:  Again, assuming you're representing him accurately, Mr. Topaz seems to lack what I called ordinary paranoia. If he does have that ability as a cognitive capacity, which many bright programmers do, then he obviously doesn't feel passionate about applying that paranoia to his drone project along key dimensions. It also sounds like Mr. Topaz doesn't realize there's a skill that he is missing, and would be insulted by the suggestion. I am put in mind of the story of the farmer who was asked by a passing driver for directions to get to Point B, to which the farmer replied, \"If I was trying to get to Point B, I sure wouldn't start from here.\"\nAMBER:  Mr. Topaz has made some significant advances in drone technology, so he can't be stupid, right?\nCORAL:  \"Security mindset\" seems to be a distinct cognitive talent from g factor or even programming ability. In fact, there doesn't seem to be a level of human genius that even guarantees you'll be skilled at ordinary paranoia. Which does make some security professionals feel a bit weird, myself included—the same way a lot of programmers have trouble understanding why not everyone can learn to program. But it seems to be an observational fact that both ordinary paranoia and security mindset are things that can decouple from g factor and programming ability—and if this were not the case, the Internet would be far more secure than it is.\nAMBER:  Do you think it would help if we talked to the other VCs funding this project and got them to ask Mr. Topaz to appoint a Special Advisor on Robustness reporting directly to the CTO? That sounds politically difficult to me, but it's possible we could swing it. Once the press started speculating about drones going rogue and maybe aggregating into larger Voltron-like robots that could acquire laser eyes, Mr. Topaz did tell the VCs that he was very concerned about the ethics of drone safety and that he'd had many long conversations about it over lunch hours.\nCORAL:  I'm venturing slightly outside my own expertise here, which isn't corporate politics per se. But on a project like this one that's trying to enter novel territory, I'd guess the person with security mindset needs at least cofounder status, and must be personally trusted by any cofounders who don't have the skill. It can't be an outsider who was brought in by VCs, who is operating on limited political capital and needs to win an argument every time she wants to not have all the services conveniently turned on by default. I suspect you just have the wrong person in charge of this startup, and that this problem is not repairable.\nAMBER:  Please don't just give up! Even if things are as bad as you say, just increasing our project's probability of being secure from 0% to 10% would be very valuable in expectation to all those people in other countries who need merchant drones.\nCORAL:  …look, at some point in life we have to try to triage our efforts and give up on what can't be salvaged. 
There's often a logistic curve for success probabilities, you know? The distances are measured in multiplicative odds, not additive percentage points. You can't take a project like this and assume that by putting in some more hard work, you can increase the absolute chance of success by 10%. More like, the odds of this project's failure versus success start out as 1,000,000:1, and if we're very polite and navigate around Mr. Topaz's sense that he is higher-status than us and manage to explain a few tips to him without ever sounding like we think we know something he doesn't, we can quintuple his chances of success and send the odds to 200,000:1. Which is to say that in the world of percentage points, the odds go from 0.0% to 0.0%. That's one way to look at the \"law of continued failure\".\nIf you had the kind of project where the fundamentals implied, say, a 15% chance of success, you'd then be on the right part of the logistic curve, and in that case it could make a lot of sense to hunt for ways to bump that up to a 30% or 80% chance.\nAMBER:  Look, I'm worried that it will really be very bad if Mr. Topaz reaches the market first with insecure drones. Like, I think that merchant drones could be very beneficial to countries without much existing market backbone, and if there's a grand failure—especially if some of the would-be customers have their money or items stolen—then it could poison the potential market for years. It will be terrible! Really, genuinely terrible!\nCORAL:  Wow. That sure does sound like an unpleasant scenario to have wedged yourself into.\nAMBER:  But what do we do now?\nCORAL:  Damned if I know. I do suspect you're screwed so long as you can only win if somebody like Mr. Topaz creates a robust system. I guess you could try to have some other drone project come into existence, headed up by somebody that, say, Bruce Schneier assures everyone is unusually good at security-mindset thinking and hence can hire people like me and listen to all the harsh things we have to say. Though I have to admit, the part where you think it's drastically important that you beat an insecure system to market with a secure system—well, that sounds positively nightmarish. You're going to need a lot more resources than Mr. Topaz has, or some other kind of very major advantage. Security takes time.\nAMBER:  Is it really that hard to add security to the drone system?\nCORAL:  You keep talking about \"adding\" security. System robustness isn't the kind of property you can bolt onto software as an afterthought.\nAMBER:  I guess I'm having trouble seeing why it's so much more expensive. Like, if somebody foolishly builds an OS that gives access to just anyone, you could instead put a password lock on it, using your clever system where the OS keeps the hashes of the passwords instead of the passwords. You just spend a couple of days rewriting all the services exposed to the Internet to ask for passwords before granting access. And then the OS has security on it! Right?\nCORAL:  NO. Everything inside your system that is potentially subject to adverse selection in its probability of weird behavior is a liability! Everything exposed to an attacker, and everything those subsystems interact with, and everything those parts interact with! You have to build all of it robustly! If you want to build a secure OS you need a whole special project that is \"building a secure operating system instead of an insecure operating system\". 
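As a quick illustration of the odds arithmetic at the start of Coral's reply above, here is a small Python sketch using the dialogue's toy figures; the 15%-fundamentals case at the end is likewise only illustrative.

```python
# The odds arithmetic behind the "logistic success curve" point: odds are
# multiplicative, so quintupling very bad odds barely moves the absolute
# probability, while quintupling decent odds moves it a lot.

def success_probability(odds_against: float) -> float:
    """Convert odds of 'odds_against : 1' against success into a probability of success."""
    return 1.0 / (odds_against + 1.0)

before = success_probability(1_000_000)  # 1,000,000:1 against  -> ~0.0001%
after = success_probability(200_000)     # quintupled odds      -> ~0.0005%
print(f"{before:.4%} -> {after:.4%}")    # both still round to 0% in whole percentage points

# Contrast with a project whose fundamentals put it on the steep part of the
# logistic curve: quintupling the odds there makes a visible difference.
print(f"{success_probability(17/3):.0%} -> {success_probability(17/15):.0%}")  # 15% -> ~47%
```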
And you also need to restrict the scope of your ambitions, and not do everything you want to do, and obey other commandments that will feel like big unpleasant sacrifices to somebody who doesn't have the full security mindset. OpenBSD can't do a tenth of what Ubuntu does. They can't afford to! It would be too large of an attack surface! They can't review that much code using the special process that they use to develop secure software! They can't hold that many assumptions in their minds!\nAMBER:  Does that effort have to take a significant amount of extra time? Are you sure it can't just be done in a couple more weeks if we hurry?\nCORAL:  YES. Given that this is a novel project entering new territory, expect it to take at least two years more time, or 50% more development time—whichever is less—compared to a security-incautious project that otherwise has identical tools, insights, people, and resources. And that is a very, very optimistic lower bound.\nAMBER:  This story seems to be heading in a worrying direction.\nCORAL:  Well, I'm sorry, but creating robust systems takes longer than creating non-robust systems even in cases where it would be really, extraordinarily bad if creating robust systems took longer than creating non-robust systems.\nAMBER:  Couldn't it be the case that, like, projects which are implementing good security practices do everything so much cleaner and better that they can come to market faster than any insecure competitors could?\nCORAL:  … I honestly have trouble seeing why you're privileging that hypothesis for consideration. Robustness involves assurance processes that take additional time. OpenBSD does not go through lines of code faster than Ubuntu.\nBut more importantly, if everyone has access to the same tools and insights and resources, then an unusually fast method of doing something cautiously can always be degenerated into an even faster method of doing the thing incautiously. There is not now, nor will there ever be, a programming language in which it is the least bit difficult to write bad programs. There is not now, nor will there ever be, a methodology that makes writing insecure software inherently slower than writing secure software. Any security professional who heard about your bright hopes would just laugh. Ask them too if you don't believe me.\nAMBER:  But shouldn't engineers who aren't cautious just be unable to make software at all, because of ordinary bugs?\nCORAL:  I am afraid that it is both possible, and extremely common in practice, for people to fix all the bugs that are crashing their systems in ordinary testing today, using methodologies that are indeed adequate to fixing ordinary bugs that show up often enough to afflict a significant fraction of users, and then ship the product. They get everything working today, and they don't feel like they have the slack to delay any longer than that before shipping because the product is already behind schedule. They don't hire exceptional people to do ten times as much work in order to prevent the product from having holes that only show up under adverse optimization pressure, that somebody else finds first and that they learn about after it's too late.\nIt's not even the wrong decision, for products that aren't connected to the Internet, don't have enough users for one to go rogue, don't handle money, don't contain any valuable data, and don't do anything that could injure people if something goes wrong. 
If your software doesn't destroy anything important when it explodes, it's probably a better use of limited resources to plan on fixing bugs as they show up.\n… Of course, you need some amount of security mindset to realize which software can in fact destroy the company if it silently corrupts data and nobody notices this until a month later. I don't suppose it's the case that your drones only carry a limited amount of the full corporate budget in cash over the course of a day, and you always have more than enough money to reimburse all the customers if all items in transit over a day were lost, taking into account that the drones might make many more purchases or sales than usual? And that the systems are generating internal paper receipts that are clearly shown to the customer and non-electronically reconciled once per day, thereby enabling you to notice a problem before it's too late?\nAMBER:  Nope!\nCORAL:  Then as you say, it would be better for the world if your company didn't exist and wasn't about to charge into this new territory and poison it with a spectacular screwup.\nAMBER:  If I believed that… well, Mr. Topaz certainly isn't going to stop his project or let somebody else take over. It seems the logical implication of what you say you believe is that I should try to persuade the venture capitalists I know to launch a safer drone project with even more funding.\nCORAL:  Uh, I'm sorry to be blunt about this, but I'm not sure you have a high enough level of security mindset to identify an executive who's sufficiently better than you at it. Trying to get enough of a resource advantage to beat the insecure product to market is only half of your problem in launching a competing project. The other half of your problem is surpassing the prior rarity of people with truly deep security mindset, and getting somebody like that in charge and fully committed. Or at least get them in as a highly trusted, fully committed cofounder who isn't on a short budget of political capital. I'll say it again: an advisor appointed by VCs isn't nearly enough for a project like yours. Even if the advisor is a genuinely good security professional—\nAMBER:  This all seems like an unreasonably difficult requirement! Can't you back down on it a little?\nCORAL:  —the person in charge will probably try to bargain down reality, as represented by the unwelcome voice of the security professional, who won't have enough social capital to badger them into \"unreasonable\" measures. Which means you fail on full automatic.\nAMBER:  … Then what am I to do?\nCORAL:  I don't know, actually. But there's no point in launching another drone project with even more funding, if it just ends up with another Mr. Topaz put in charge. Which, by default, is exactly what your venture capitalist friends are going to do. Then you've just set an even higher competitive bar for anyone actually trying to be first to market with a secure solution, may God have mercy on their souls. \nBesides, if Mr. Topaz thinks he has a competitor breathing down his neck and rushes his product to market, his chance of creating a secure system could drop by a factor of ten and go all the way from 0.0% to 0.0%.\nAMBER:  Surely my VC friends have faced this kind of problem before and know how to identify and hire executives who can do security well?\nCORAL:  … If one of your VC friends is Paul Graham, then maybe yes. 
But in the average case, NO.\nIf average VCs always made sure that projects which needed security had a founder or cofounder with strong security mindset—if they had the ability to do that even in cases where they decided they wanted to—the Internet would again look like a very different place. By default, your VC friends will be fooled by somebody who looks very sober and talks a lot about how terribly concerned he is with cybersecurity and how the system is going to be ultra-secure and reject over nine thousand common passwords, including the thirty-six passwords listed on this slide here, and the VCs will ooh and ah over it, especially as one of them realizes that their own password is on the slide. That project leader is absolutely not going to want to hear from me—even less so than Mr. Topaz. To him, I'm a political threat who might damage his line of patter to the VCs.\nAMBER:  I have trouble believing all these smart people are really that stupid.\nCORAL:  You're compressing your innate sense of social status and your estimated level of how good particular groups are at this particular ability into a single dimension. That is not a good idea.\nAMBER:  I'm not saying that I think everyone with high status already knows the deep security skill. I'm just having trouble believing that they can't learn it quickly once told, or could be stuck not being able to identify good advisors who have it. That would mean they couldn't know something you know, something that seems important, and that just… feels off to me, somehow. Like, there are all these successful and important people out there, and you're saying you're better than them, even with all their influence, their skills, their resources—\nCORAL:  Look, you don't have to take my word for it. Think of all the websites you've been on, with snazzy-looking design, maybe with millions of dollars in sales passing through them, that want your password to be a mixture of uppercase and lowercase letters and numbers. In other words, they want you to enter \"Password1!\" instead of \"correct horse battery staple\". Every one of those websites is doing a thing that looks humorously silly to someone with a full security mindset or even just somebody who regularly reads XKCD. It says that the security system was set up by somebody who didn't know what they were doing and was blindly imitating impressive-looking mistakes they saw elsewhere.\nDo you think that makes a good impression on their customers? That's right, it does! Because the customers don't know any better. Do you think that login system makes a good impression on the company's investors, including professional VCs and probably some angels with their own startup experience? That's right, it does! Because the VCs don't know any better, and even the angel doesn't know any better, and they don't realize they're missing a vital skill, and they aren't consulting anyone who knows more. An innocent is impressed if a website requires a mix of uppercase and lowercase letters and numbers and punctuation. They think the people running the website must really care to impose a security measure that unusual and inconvenient. The people running the website think that's what they're doing too.\nPeople with deep security mindset are both rare and rarely appreciated. 
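For readers who haven't seen the XKCD comparison Coral is gesturing at, here is a rough entropy calculation in Python; the generation models are illustrative assumptions, not a claim about any particular site's policy.

```python
# Rough entropy comparison: a random four-word passphrase vs. a
# "complex"-looking password built the way people usually build them.
import math

# Four words drawn uniformly at random from a 2,048-word list
# ("correct horse battery staple" style):
passphrase_bits = 4 * math.log2(2048)                             # 44 bits

# One common word (say, drawn from ~65,536 a person might think of),
# capitalized, with a digit and a punctuation mark appended:
complex_bits = math.log2(65_536) + math.log2(10) + math.log2(32)  # ~24 bits

print(f"passphrase ~{passphrase_bits:.0f} bits vs. 'Password1!' pattern ~{complex_bits:.0f} bits")
```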
You can see just from the login system that none of the VCs and none of the C-level executives at that startup thought they needed to consult a real professional, or managed to find a real professional rather than an empty suit if they went consulting. There was, visibly, nobody in the neighboring system with the combined knowledge and status to walk over to the CEO and say, \"Your login system is embarrassing and you need to hire a real security professional.\" Or if anybody did say that to the CEO, the CEO was offended and shot the messenger for not phrasing it ever-so-politely enough, or the CTO saw the outsider as a political threat and bad-mouthed them out of the game.\nYour wishful should-universe hypothesis that people who can touch the full security mindset are more common than that within the venture capital and angel investing ecosystem is just flat wrong. Ordinary paranoia directed at widely-known adversarial cases is dense enough within the larger ecosystem to exert widespread social influence, albeit still comically absent in many individuals and regions. People with the full security mindset are too rare to have the same level of presence. That's the easily visible truth. You can see the login systems that want a punctuation mark in your password. You are not hallucinating them.\nAMBER:  If that's all true, then I just don't see how I can win. Maybe I should just condition on everything you say being false, since, if it's true, my winning seems unlikely—in which case all victories on my part would come in worlds with other background assumptions.\nCORAL:  … is that something you say often?\nAMBER:  Well, I say it whenever my victory starts to seem sufficiently unlikely.\nCORAL:  Goodness. I could maybe, maybe see somebody saying that once over the course of their entire lifetime, for a single unlikely conditional, but doing it more than once is sheer madness. I'd expect the unlikely conditionals to build up very fast and drop the probability of your mental world to effectively zero. It's tempting, but it's usually a bad idea to slip sideways into your own private hallucinatory universe when you feel you're under emotional pressure. I tend to believe that no matter what the difficulties, we are most likely to come up with good plans when we are mentally living in reality as opposed to somewhere else. If things seem difficult, we must face the difficulty squarely to succeed, to come up with some solution that faces down how bad the situation really is, rather than deciding to condition on things not being difficult because then it's too hard.\nAMBER:  Can you at least try talking to Mr. Topaz and advise him how to make things be secure?\nCORAL:  Sure. Trying things is easy, and I'm a character in a dialogue, so my opportunity costs are low. I'm sure Mr. Topaz is trying to build secure merchant drones, too. It's succeeding at things that is the hard part.\nAMBER:  Great, I'll see if I can get Mr. Topaz to talk to you. But do please be polite! If you think he's doing something wrong, try to point it out more gently than the way you've talked to me. I think I have enough political capital to get you in the door, but that won't last if you're rude.\nCORAL:  You know, back in mainstream computer security, when you propose a new way of securing a system, it's considered traditional and wise for everyone to gather around and try to come up with reasons why your idea might not work. 
It's understood that no matter how smart you are, most seemingly bright ideas turn out to be flawed, and that you shouldn't be touchy about people trying to shoot them down. Does Mr. Topaz have no acquaintance at all with the practices in computer security? A lot of programmers do.\nAMBER:  I think he'd say he respects computer security as its own field, but he doesn't believe that building secure operating systems is the same problem as building merchant drones.\nCORAL:  And if I suggested that this case might be similar to the problem of building a secure operating system, and that this case creates a similar need for more effortful and cautious development, requiring both (a) additional development time and (b) a special need for caution supplied by people with unusual mindsets above and beyond ordinary paranoia, who have an unusual skill that identifies shaky assumptions in a safety story before an ordinary paranoid would judge a fire as being urgent enough to need putting out, who can remedy the problem using deeper solutions than an ordinary paranoid would generate as parries against imagined attacks?\nIf I suggested, indeed, that this scenario might hold generally wherever we demand robustness of a complex system that is being subjected to strong external or internal optimization pressures? Pressures that strongly promote the probabilities of particular states of affairs via optimization that searches across a large and complex state space? Pressures which therefore in turn subject other subparts of the system to selection for weird states and previously unenvisioned execution paths? Especially if some of these pressures may be in some sense creative and find states of the system or environment that surprise us or violate our surface generalizations?\nAMBER:  I think he'd probably think you were trying to look smart by using overly abstract language at him. Or he'd reply that he didn't see why this took any more caution than he was already using just by testing the drones to make sure they didn't crash or give out too much money.\nCORAL:  I see.\nAMBER:  So, shall we be off?\nCORAL:  Of course! No problem! I'll just go meet with Mr. Topaz and use verbal persuasion to turn him into Bruce Schneier.\nAMBER:  That's the spirit!\nCORAL:  God, how I wish I lived in the territory that corresponds to your map.\nAMBER:  Hey, come on. Is it seriously that hard to bestow exceptionally rare mental skills on people by talking at them? I agree it's a bad sign that Mr. Topaz shows no sign of wanting to acquire those skills, and doesn't think we have enough relative status to continue listening if we say something he doesn't want to hear. But that just means we have to phrase our advice cleverly so that he will want to hear it!\nCORAL:  I suppose you could modify your message into something Mr. Topaz doesn't find so unpleasant to hear. Something that sounds related to the topic of drone security, but which doesn't cost him much, and of course does not actually cause his drones to end up secure because that would be all unpleasant and expensive. You could slip a little sideways in reality, and convince yourself that you've gotten Mr. Topaz to ally with you, because he sounds agreeable now. Your instinctive desire for the high-status monkey to be on your political side will feel like its problem has been solved. 
You can substitute the feeling of having solved that problem for the unpleasant sense of not having secured the actual drones; you can tell yourself that the bigger monkey will take care of everything now that he seems to be on your pleasantly-modified political side. And so you will be happy. Until the merchant drones hit the market, of course, but that unpleasant experience should be brief.\nAMBER:  Come on, we can do this! You've just got to think positively!\nCORAL:  … Well, if nothing else, this should be an interesting experience. I've never tried to do anything quite this doomed before.\n\n \nThe post Security Mindset and the Logistic Success Curve appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Security Mindset and the Logistic Success Curve", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=19", "id": "2630b18e417971eb777bb1d3498dffc4"} {"text": "Security Mindset and Ordinary Paranoia\n\nThe following is a fictional dialogue building off of AI Alignment: Why It's Hard, and Where to Start.\n\n\n \n(AMBER, a philanthropist interested in a more reliable Internet, and CORAL, a computer security professional, are at a conference hotel together discussing what Coral insists is a difficult and important issue: the difficulty of building \"secure\" software.)\n \nAMBER:  So, Coral, I understand that you believe it is very important, when creating software, to make that software be what you call \"secure\".\nCORAL:  Especially if it's connected to the Internet, or if it controls money or other valuables. But yes, that's right.\nAMBER:  I find it hard to believe that this needs to be a separate topic in computer science. In general, programmers need to figure out how to make computers do what they want. The people building operating systems surely won't want them to give access to unauthorized users, just like they won't want those computers to crash. Why is one problem so much more difficult than the other?\nCORAL:  That's a deep question, but to give a partial deep answer: When you expose a device to the Internet, you're potentially exposing it to intelligent adversaries who can find special, weird interactions with the system that make the pieces behave in weird ways that the programmers did not think of. When you're dealing with that kind of problem, you'll use a different set of methods and tools.\nAMBER:  Any system that crashes is behaving in a way the programmer didn't expect, and programmers already need to stop that from happening. How is this case different?\nCORAL:  Okay, so… imagine that your system is going to take in one kilobyte of input per session. (Although that itself is the sort of assumption we'd question and ask what happens if it gets a megabyte of input instead—but never mind.) If the input is one kilobyte, then there are 2^8,000 possible inputs, or about 10^2,400 or so. Again, for the sake of extending the simple visualization, imagine that a computer gets a billion inputs per second. Suppose that only a googol, 10^100, out of the 10^2,400 possible inputs, cause the system to behave a certain way the original designer didn't intend.\nIf the system is getting inputs in a way that's uncorrelated with whether the input is a misbehaving one, it won't hit on a misbehaving state before the end of the universe.
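To make that claim concrete, here is the arithmetic as a short Python sketch, using the dialogue's toy figures (Python integers handle numbers this size exactly).

```python
# Expected waiting time for a uniformly random input stream to hit one of the
# rare misbehaving inputs, using the toy numbers from the dialogue.

possible_inputs = 10 ** 2400     # roughly 2**8000 distinct one-kilobyte inputs
misbehaving_inputs = 10 ** 100   # a googol of inputs that trigger unintended behavior
inputs_per_second = 10 ** 9      # a billion random inputs tried per second

# Expected seconds before a random stream happens to hit a misbehaving input:
expected_seconds = possible_inputs // (misbehaving_inputs * inputs_per_second)

age_of_universe_seconds = 4 * 10 ** 17   # ~13.8 billion years

print(expected_seconds > age_of_universe_seconds)  # True, by a huge margin
print(len(str(expected_seconds)) - 1)              # the wait is on the order of 10**2291 seconds
```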
If there's an intelligent adversary who understands the system, on the other hand, they may be able to find one of the very rare inputs that makes the system misbehave. So a piece of the system that would literally never in a million years misbehave on random inputs, may break when an intelligent adversary tries deliberately to break it.\nAMBER:  So you're saying that it's more difficult because the programmer is pitting their wits against an adversary who may be more intelligent than themselves.\nCORAL:  That's an almost-right way of putting it. What matters isn't so much the \"adversary\" part as the optimization part. There are systematic, nonrandom forces strongly selecting for particular outcomes, causing pieces of the system to go down weird execution paths and occupy unexpected states. If your system literally has no misbehavior modes at all, it doesn't matter if you have IQ 140 and the enemy has IQ 160—it's not an arm-wrestling contest. It's just very much harder to build a system that doesn't enter weird states when the weird states are being selected-for in a correlated way, rather than happening only by accident. The weirdness-selecting forces can search through parts of the larger state space that you yourself failed to imagine. Beating that does indeed require new skills and a different mode of thinking, what Bruce Schneier called \"security mindset\".\nAMBER:  Ah, and what is this security mindset?\nCORAL:  I can say one or two things about it, but keep in mind we are dealing with a quality of thinking that is not entirely effable. If I could give you a handful of platitudes about security mindset, and that would actually cause you to be able to design secure software, the Internet would look very different from how it presently does. That said, it seems to me that what has been called \"security mindset\" can be divided into two components, one of which is much less difficult than the other. And this can fool people into overestimating their own safety, because they can get the easier half of security mindset and overlook the other half. The less difficult component, I will call by the term \"ordinary paranoia\".\nAMBER:  Ordinary paranoia?\nCORAL:  Lots of programmers have the ability to imagine adversaries trying to threaten them. They imagine how likely it is that the adversaries are able to attack them a particular way, and then they try to block off the adversaries from threatening that way. Imagining attacks, including weird or clever attacks, and parrying them with measures you imagine will stop the attack; that is ordinary paranoia.\nAMBER:  Isn't that what security is all about? What do you claim is the other half?\nCORAL:  To put it as a platitude, I might say… defending against mistakes in your own assumptions rather than against external adversaries.\n\nAMBER:  Can you give me an example of a difference?\nCORAL:  An ordinary paranoid programmer imagines that an adversary might try to read the file containing all the usernames and passwords. They might try to store the file in a special, secure area of the disk or a special subpart of the operating system that's supposed to be harder to read. Conversely, somebody with security mindset thinks, \"No matter what kind of special system I put around this file, I'm disturbed by needing to make the assumption that this file can't be read. Maybe the special code I write, because it's used less often, is more likely to contain bugs. 
Or maybe there's a way to fish data out of the disk that doesn't go through the code I wrote.\"\nAMBER:  And they imagine more and more ways that the adversary might be able to get at the information, and block those avenues off too! Because they have better imaginations.\nCORAL:  Well, we kind of do, but that's not the key difference. What we'll really want to do is come up with a way for the computer to check passwords that doesn't rely on the computer storing the password at all, anywhere.\nAMBER:  Ah, like encrypting the password file!\nCORAL:  No, that just duplicates the problem at one remove. If the computer can decrypt the password file to check it, it's stored the decryption key somewhere, and the attacker may be able to steal that key too.\nAMBER:  But then the attacker has to steal two things instead of one; doesn't that make the system more secure? Especially if you write two different sections of special filesystem code for hiding the encryption key and hiding the encrypted password file?\nCORAL:  That's exactly what I mean by distinguishing \"ordinary paranoia\" that doesn't capture the full security mindset. So long as the system is capable of reconstructing the password, we'll always worry that the adversary might be able to trick the system into doing just that. What somebody with security mindset will recognize as a deeper solution is to store a one-way hash of the password, rather than storing the plaintext password. Then even if the attacker reads off the password file, they still can't give what the system will recognize as a password.\nAMBER:  Ah, that's quite clever! But I don't see what's so qualitatively different between that measure, and my measure for hiding the key and the encrypted password file separately. I agree that your measure is more clever and elegant, but of course you'll know better standard solutions than I do, since you work in this area professionally. I don't see the qualitative line dividing your solution from my solution.\nCORAL:  Um, it's hard to say this without offending some people, but… it's possible that even after I try to explain the difference, which I'm about to do, you won't get it. Like I said, if I could give you some handy platitudes and transform you into somebody capable of doing truly good work in computer security, the Internet would look very different from its present form. I can try to describe one aspect of the difference, but that may put me in the position of a mathematician trying to explain what looks more promising about one proof avenue than another; you can listen to everything they say and nod along and still not be transformed into a mathematician. So I am going to try to explain the difference, but again, I don't know of any simple instruction manuals for becoming Bruce Schneier.\nAMBER:  I confess to feeling slightly skeptical at this supposedly ineffable ability that some people possess and others don't—\nCORAL:  There are things like that in many professions. Some people pick up programming at age five by glancing through a page of BASIC programs written for a TRS-80, and some people struggle really hard to grasp basic Python at age twenty-five. That's not because there's some mysterious truth the five-year-old knows that you can verbally transmit to the twenty-five-year-old.\nAnd, yes, the five-year-old will become far better with practice; it's not like we're talking about untrainable genius. And there may be platitudes you can tell the 25-year-old that will help them struggle a little less. 
But sometimes a profession requires thinking in an unusual way and some people's minds more easily turn sideways in that particular dimension.\nAMBER:  Fine, go on.\nCORAL:  Okay, so… you thought of putting the encrypted password file in one special place in the filesystem, and the key in another special place. Why not encrypt the key too, write a third special section of code, and store the key to the encrypted key there? Wouldn't that make the system even more secure? How about seven keys hidden in different places, wouldn't that be extremely secure? Practically unbreakable, even?\nAMBER:  Well, that version of the idea does feel a little silly. If you're trying to secure a door, a lock that takes two keys might be more secure than a lock that only needs one key, but seven keys doesn't feel like it makes the door that much more secure than two.\nCORAL:  Why not?\nAMBER:  It just seems silly. You'd probably have a better way of saying it than I would.\nCORAL:  Well, a fancy way of describing the silliness is that the chance of obtaining the seventh key is not conditionally independent of the chance of obtaining the first two keys. If I can read the encrypted password file, and read your encrypted encryption key, then I've probably come up with something that just bypasses your filesystem and reads directly from the disk. And the more complicated you make your filesystem, the more likely it is that I can find a weird system state that will let me do just that. Maybe the special section of filesystem code you wrote to hide your fourth key is the one with the bug that lets me read the disk directly.\nAMBER:  So the difference is that the person with a true security mindset found a defense that makes the system simpler rather than more complicated.\nCORAL:  Again, that's almost right. By hashing the passwords, the security professional has made their reasoning about the system less complicated. They've eliminated the need for an assumption that might be put under a lot of pressure. If you put the key in one special place and the encrypted password file in another special place, the system as a whole is still able to decrypt the user's password. An adversary probing the state space might be able to trigger that password-decrypting state because the system is designed to do that on at least some occasions. By hashing the password file we eliminate that whole internal debate from the reasoning on which the system's security rests.\nAMBER:  But even after you've come up with that clever trick, something could still go wrong. You're still not absolutely secure. What if somebody uses \"password\" as their password?\nCORAL:  Or what if somebody comes up a way to read off the password after the user has entered it and while it's still stored in RAM, because something got access to RAM? The point of eliminating the extra assumption from the reasoning about the system's security is not that we are then absolutely secure and safe and can relax. Somebody with security mindset is never going to be that relaxed about the edifice of reasoning saying the system is secure.\nFor that matter, while there are some normal programmers doing normal programming who might put in a bunch of debugging effort and then feel satisfied, like they'd done all they could reasonably do, programmers with decent levels of ordinary paranoia about ordinary programs will go on chewing ideas in the shower and coming up with more function tests for the system to pass. 
So the distinction between security mindset and ordinary paranoia isn't that ordinary paranoids will relax.\nIt's that… again to put it as a platitude, the ordinary paranoid is running around putting out fires in the form of ways they imagine an adversary might attack, and somebody with security mindset is defending against something closer to \"what if an element of this reasoning is mistaken\". Instead of trying really hard to ensure nobody can read a disk, we are going to build a system that's secure even if somebody does read the disk, and that is our first line of defense. And then we are also going to build a filesystem that doesn't let adversaries read the password file, as a second line of defense in case our one-way hash is secretly broken, and because there's no positive need to let adversaries read the disk so why let them. And then we're going to salt the hash in case somebody snuck a low-entropy password through our system and the adversary manages to read the password anyway.\nAMBER:  So rather than trying to outwit adversaries, somebody with true security mindset tries to make fewer assumptions.\nCORAL:  Well, we think in terms of adversaries too! Adversarial reasoning is easier to teach than security mindset, but it's still (a) mandatory and (b) hard to teach in an absolute sense. A lot of people can't master it, which is why a description of \"security mindset\" often opens with a story about somebody failing at adversarial reasoning and somebody else launching a clever attack to penetrate their defense.\nYou need to master two ways of thinking, and there are a lot of people going around who have the first way of thinking but not the second. One way I'd describe the deeper skill is seeing a system's security as resting on a story about why that system is safe. We want that safety-story to be as solid as possible. One of the implications is resting the story on as few assumptions as possible; as the saying goes, the only gear that never fails is one that has been designed out of the machine.\nAMBER:  But can't you also get better security by adding more lines of defense? Wouldn't that be more complexity in the story, and also better security?\nCORAL:  There's also something to be said for preferring disjunctive reasoning over conjunctive reasoning in the safety-story. But it's important to realize that you do want a primary line of defense that is supposed to just work and be unassailable, not a series of weaker fences that you think might maybe work. Somebody who doesn't understand cryptography might devise twenty clever-seeming amateur codes and apply them all in sequence, thinking that, even if one of the codes turns out to be breakable, surely they won't all be breakable. The NSA will assign that mighty edifice of amateur encryption to an intern, and the intern will crack it in an afternoon.\nThere's something to be said for redundancy, and having fallbacks in case the unassailable wall falls; it can be wise to have additional lines of defense, so long as the added complexity does not make the larger system harder to understand or increase its vulnerable surfaces. 
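Here is a minimal sketch of the salted one-way hash Coral keeps referring to, using only Python's standard library; this is illustrative only, since the dialogue doesn't prescribe a construction and a real system should use a vetted password-hashing library.

```python
# Store a salted one-way hash, never the password. Even an attacker who reads
# the stored (salt, digest) pair cannot hand the system something it will
# recognize as the password.
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    """Return (salt, digest); the plaintext password is never stored anywhere."""
    salt = salt if salt is not None else os.urandom(16)   # fresh per-user random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def check_password(candidate: str, salt: bytes, stored_digest: bytes) -> bool:
    """Recompute the hash from the candidate and compare in constant time."""
    _, digest = hash_password(candidate, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, stored))  # True
print(check_password("Password1!", salt, stored))                    # False
```

The per-user salt is the "low-entropy password" fallback from the dialogue: identical passwords no longer produce identical stored digests, so a precomputed table of common-password hashes stops working.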
But at the core you need a simple, solid story about why the system is secure, and a good security thinker will be trying to eliminate whole assumptions from that story and strengthening its core pillars, not only scurrying around parrying expected attacks and putting out risk-fires.\nThat said, it's better to use two true assumptions than one false assumption, so simplicity isn't everything.\nAMBER:  I wonder if that way of thinking has applications beyond computer security?\nCORAL:  I'd rather think so, as the proverb about gears suggests.\nFor example, stepping out of character for a moment, the author of this dialogue has sometimes been known to discuss the alignment problem for Artificial General Intelligence. He was talking at one point about trying to measure rates of improvement inside a growing AI system, so that it would not do too much thinking with humans out of the loop if a breakthrough occurred while the system was running overnight. The person he was talking to replied that, to him, it seemed unlikely that an AGI would gain in power that fast. To which the author replied, more or less:\nIt shouldn't be your job to guess how fast the AGI might improve! If you write a system that will hurt you if a certain speed of self-improvement turns out to be possible, then you've written the wrong code. The code should just never hurt you regardless of the true value of that background parameter.\nA better way to set up the AGI would be to measure how much improvement is taking place, and if more than X improvement takes place, suspend the system until a programmer validates the progress that's already occurred. That way even if the improvement takes place over the course of a millisecond, you're still fine, so long as the system works as intended. Maybe the system doesn't work as intended because of some other mistake, but that's a better problem to worry about than a system that hurts you even if it works as intended.\nSimilarly, you want to design the system so that if it discovers amazing new capabilities, it waits for an operator to validate use of those capabilities—not rely on the operator to watch what's happening and press a suspend button. You shouldn't rely on the speed of discovery or the speed of disaster being less than the operator's reaction time. There's no need to bake in an assumption like that if you can find a design that's safe regardless. For example, by operating on a paradigm of allowing operator-whitelisted methods rather than avoiding operator-blacklisted methods; you require the operator to say \"Yes\" before proceeding, rather than assuming they're present and attentive and can say \"No\" fast enough.\nAMBER:  Well, okay, but if we're guarding against an AI system discovering cosmic powers in a millisecond, that does seem to me like an unreasonable thing to worry about. I guess that marks me as a merely ordinary paranoid.\nCORAL:  Indeed, one of the hallmarks of security professionals is that they spend a lot of time worrying about edge cases that would fail to alarm an ordinary paranoid because the edge case doesn't sound like something an adversary is likely to do. Here's an example from the Freedom to Tinker blog:\nThis interest in \"harmless failures\" – cases where an adversary can cause an anomalous but not directly harmful outcome – is another hallmark of the security mindset. 
Not all \"harmless failures\" lead to big trouble, but it's surprising how often a clever adversary can pile up a stack of seemingly harmless failures into a dangerous tower of trouble. Harmless failures are bad hygiene. We try to stamp them out when we can…\nTo see why, consider the donotreply.com email story that hit the press recently. When companies send out commercial email (e.g., an airline notifying a passenger of a flight delay) and they don't want the recipient to reply to the email, they often put in a bogus From address like donotreply@donotreply.com. A clever guy registered the domain donotreply.com, thereby receiving all email addressed to donotreply.com. This included \"bounce\" replies to misaddressed emails, some of which contained copies of the original email, with information such as bank account statements, site information about military bases in Iraq, and so on…\nThe people who put donotreply.com email addresses into their outgoing email must have known that they didn't control the donotreply.com domain, so they must have thought of any reply messages directed there as harmless failures. Having gotten that far, there are two ways to avoid trouble. The first way is to think carefully about the traffic that might go to donotreply.com, and realize that some of it is actually dangerous. The second way is to think, \"This looks like a harmless failure, but we should avoid it anyway. No good can come of this.\" The first way protects you if you're clever; the second way always protects you.\n\"The first way protects you if you're clever; the second way always protects you.\" That's very much the other half of the security mindset. It's what this essay's author was doing by talking about AGI alignment that runs on whitelisting rather than blacklisting: you shouldn't assume you'll be clever about how fast the AGI system could discover capabilities, you should have a system that doesn't use not-yet-whitelisted capabilities even if they are discovered very suddenly.\nIf your AGI would hurt you if it gained total cosmic powers in one millisecond, that means you built a cognitive process that is in some sense trying to hurt you and failing only due to what you think is a lack of capability. This is very bad and you should be designing some other AGI system instead. AGI systems should never be running a search that will hurt you if the search comes up non-empty. You should not be trying to fix that by making sure the search comes up empty thanks to your clever shallow defenses closing off all the AGI's clever avenues for hurting you. You should fix that by making sure no search like that ever runs. It's a silly thing to do with computing power, and you should do something else with computing power instead.\nGoing back to ordinary computer security, if you try building a lock with seven keys hidden in different places, you are in some dimension pitting your cleverness against an adversary trying to read the keys. The person with security mindset doesn't want to rely on having to win the cleverness contest. An ordinary paranoid, somebody who can master the kind of default paranoia that lots of intelligent programmers have, will look at the Reply-To field saying donotreply@donotreply.com and think about the possibility of an adversary registering the donotreply.com domain. Somebody with security mindset thinks in assumptions rather than adversaries.
\"Well, I'm assuming that this reply email goes nowhere,\" they'll think, \"but maybe I should design the system so that I don't need to fret about whether that assumption is true.\"\nAMBER:  Because as the truly great paranoid knows, what seems like a ridiculously improbable way for the adversary to attack sometimes turns out to not be so ridiculous after all.\nCORAL:  Again, that's a not-exactly-right way of putting it. When I don't set up an email to originate from donotreply@donotreply.com, it's not just because I've appreciated that an adversary registering donotreply.com is more probable than the novice imagines. For all I know, when a bounce email is sent to nowhere, there's all kinds of things that might happen! Maybe the way a bounced email works is that the email gets routed around to weird places looking for that address. I don't know, and I don't want to have to study it. Instead I'll ask: Can I make it so that a bounced email doesn't generate a reply? Can I make it so that a bounced email doesn't contain the text of the original message? Maybe I can query the email server to make sure it still has a user by that name before I try sending the message?—though there may still be \"vacation\" autoresponses that mean I'd better control the replied-to address myself. If it would be very bad for somebody unauthorized to read this, maybe I shouldn't be sending it in plaintext by email.\nAMBER:  So the person with true security mindset understands that where there's one problem, demonstrated by what seems like a very unlikely thought experiment, there are likely to be more realistic problems that an adversary can in fact exploit. What I think of as weird improbable failure scenarios are canaries in the coal mine, that would warn a truly paranoid person of bigger problems on the way.\nCORAL:  Again that's not exactly right. The person with ordinary paranoia hears about donotreply@donotreply.com and may think something like, \"Oh, well, it's not very likely that an attacker will actually try to register that domain, I have more urgent issues to worry about,\" because in that mode of thinking, they're running around putting out things that might be fires, and they have to prioritize the things that are most likely to be fires.\nIf you demonstrate a weird edge-case thought experiment to somebody with security mindset, they don't see something that's more likely to be a fire. They think, \"Oh no, my belief that those bounce emails go nowhere was FALSE!\" The OpenBSD project to build a secure operating system has also, in passing, built an extremely robust operating system, because from their perspective any bug that potentially crashes the system is considered a critical security hole. An ordinary paranoid sees an input that crashes the system and thinks, \"A crash isn't as bad as somebody stealing my data. Until you demonstrate to me that this bug can be used by the adversary to steal data, it's not extremely critical.\" Somebody with security mindset thinks, \"Nothing inside this subsystem is supposed to behave in a way that crashes the OS. Some section of code is behaving in a way that does not work like my model of that code. Who knows what it might do? The system isn't supposed to crash, so by making it crash, you have demonstrated that my beliefs about how this system works are false.\"\nAMBER:  I'll be honest: It has sometimes struck me that people who call themselves security professionals seem overly concerned with what, to me, seem like very improbable scenarios.
Like somebody forgetting to check the end of a buffer and an adversary throwing in a huge string of characters that overwrite the end of the stack with a return address that jumps to a section of code somewhere else in the system that does something the adversary wants. How likely is that really to be a problem? I suspect that in the real world, what's more likely is somebody making their password \"password\". Shouldn't you be mainly guarding against that instead?\nCORAL:  You have to do both. This game is short on consolation prizes. If you want your system to resist attack by major governments, you need it to actually be pretty darned secure, gosh darn it. The fact that some users may try to make their password be \"password\" does not change the fact that you also have to protect against buffer overflows.\nAMBER:  But even when somebody with security mindset designs an operating system, it often still ends up with successful attacks against it, right? So if this deeper paranoia doesn't eliminate all chance of bugs, is it really worth the extra effort?\nCORAL:  If you don't have somebody who thinks this way in charge of building your operating system, it has no chance of not failing immediately. People with security mindset sometimes fail to build secure systems. People without security mindset always fail at security if the system is at all complex. What this way of thinking buys you is a chance that your system takes longer than 24 hours to break.\nAMBER:  That sounds a little extreme.\nCORAL:  History shows that reality has not cared what you consider \"extreme\" in this regard, and that is why your Wi-Fi-enabled lightbulb is part of a Russian botnet.\nAMBER:  Look, I understand that you want to get all the fiddly tiny bits of the system exactly right. I like tidy neat things too. But let's be reasonable; we can't always get everything we want in life.\nCORAL:  You think you're negotiating with me, but you're really negotiating with Murphy's Law. I'm afraid that Mr. Murphy has historically been quite unreasonable in his demands, and rather unforgiving of those who refuse to meet them. I'm not advocating a policy to you, just telling you what happens if you don't follow that policy. Maybe you think it's not particularly bad if your lightbulb is doing denial-of-service attacks on a mattress store in Estonia. But if you do want a system to be secure, you need to do certain things, and that part is more of a law of nature than a negotiable demand.\nAMBER:  Non-negotiable, eh? I bet you'd change your tune if somebody offered you twenty thousand dollars. But anyway, one thing I'm surprised you're not mentioning more is the part where people with security mindset always submit their idea to peer scrutiny and then accept what other people vote about it. I do like the sound of that; it sounds very communitarian and modest.\nCORAL:  I'd say that's part of the ordinary paranoia that lots of programmers have. The point of submitting ideas to others' scrutiny isn't that hard to understand, though certainly there are plenty of people who don't even do that. If I had any original remarks to contribute to that well-worn topic in computer security, I'd remark that it's framed as advice to wise paranoids, but of course the people who need it even more are the happy innocents.\nAMBER:  Happy innocents?\nCORAL:  People who lack even ordinary paranoia. 
Happy innocents tend to envision ways that their system works, but not ask at all how their system might fail, until somebody prompts them into that, and even then they can't do it. Or at least that's been my experience, and that of many others in the profession.\nThere's a certain incredibly terrible cryptographic system, the equivalent of the Fool's Mate in chess, which is sometimes converged on by the most total sort of amateur, namely Fast XOR. That's picking a password, repeating the password, and XORing the data with the repeated password string. The person who invents this system may not be able to take the perspective of an adversary at all. He wants his marvelous cipher to be unbreakable, and he is not able to truly enter the frame of mind of somebody who wants his cipher to be breakable. If you ask him, \"Please, try to imagine what could possibly go wrong,\" he may say, \"Well, if the password is lost, the data will be forever unrecoverable because my encryption algorithm is too strong; I guess that's something that could go wrong.\" Or, \"Maybe somebody sabotages my code,\" or, \"If you really insist that I invent far-fetched scenarios, maybe the computer spontaneously decides to disobey my programming.\" Of course any competent ordinary paranoid asks the most skilled people they can find to look at a bright idea and try to shoot it down, because other minds may come in at a different angle or know other standard techniques. But the other reason why we say \"Don't roll your own crypto!\" and \"Have a security expert look at your bright idea!\" is in hopes of reaching the many people who can't at all invert the polarity of their goals—they don't think that way spontaneously, and if you try to force them to do it, their thoughts go in unproductive directions.\nAMBER:  Like… the same way many people on the Right/Left seem utterly incapable of stepping outside their own treasured perspectives to pass the Ideological Turing Test of the Left/Right.\nCORAL:  I don't know if it's exactly the same mental gear or capability, but there's a definite similarity. Somebody who lacks ordinary paranoia can't take on the viewpoint of somebody who wants Fast XOR to be breakable, and pass that adversary's Ideological Turing Test for attempts to break Fast XOR.\nAMBER:  Can't, or won't? You seem to be talking like these are innate, untrainable abilities.\nCORAL:  Well, at the least, there will be different levels of talent, as usual in a profession. And also as usual, talent vastly benefits from training and practice. But yes, it has sometimes seemed to me that there is a kind of qualitative step or gear here, where some people can shift perspective to imagine an adversary that truly wants to break their code… or a reality that isn't cheering for their plan to work, or aliens who evolved different emotions, or an AI that doesn't want to conclude its reasoning with \"And therefore the humans should live happily ever after\", or a fictional character who believes in Sith ideology and yet doesn't believe they're the bad guy.\nIt does sometimes seem to me like some people simply can't shift perspective in that way. Maybe it's not that they truly lack the wiring, but that there's an instinctive political off-switch for the ability. Maybe they're scared to let go of their mental anchors. But from the outside it looks like the same result: some people do it, some people don't. 
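(To make the "Fast XOR" scheme above concrete: a minimal reconstruction in Python of the repeating-key cipher Coral describes, together with the sort of two-line attack its inventor typically cannot imagine. The sample message and password are invented for illustration.)

```python
from itertools import cycle

def fast_xor(data: bytes, password: bytes) -> bytes:
    """The 'Fool's Mate' cipher: XOR the data against the password repeated end to end.
    The same call both encrypts and decrypts."""
    return bytes(b ^ k for b, k in zip(data, cycle(password)))

message = b"PAY $100 TO ALICE. PAY $900 TO BOB."
ciphertext = fast_xor(message, b"hunter2")

# The failure of perspective: anyone who knows (or guesses) any stretch of plaintext
# recovers that stretch of the keystream, i.e. the password itself, by XORing it back out.
recovered = bytes(c ^ p for c, p in zip(ciphertext, b"PAY $10"))
print(recovered)  # b'hunter2'
```

The inventor of such a scheme generally cannot produce that second snippet themselves; being able to want the cipher to break, and to search accordingly, is the ordinary-paranoia gear the dialogue is pointing at.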
Some people spontaneously invert the polarity of their internal goals and spontaneously ask how their cipher might be broken and come up with productive angles of attack. Other people wait until prompted to look for flaws in their cipher, or they demand that you argue with them and wait for you to come up with an argument that satisfies them. If you ask them to predict themselves what you might suggest as a flaw, they say weird things that don't begin to pass your Ideological Turing Test.\nAMBER:  You do seem to like your qualitative distinctions. Are there better or worse ordinary paranoids? Like, is there a spectrum in the space between \"happy innocent\" and \"true deep security mindset\"?\nCORAL:  One obvious quantitative talent level within ordinary paranoia would be in how far you can twist your perspective to look sideways at things—the creativity and workability of the attacks you invent. Like these examples Bruce Schneier gave:\nUncle Milton Industries has been selling ant farms to children since 1956. Some years ago, I remember opening one up with a friend. There were no actual ants included in the box. Instead, there was a card that you filled in with your address, and the company would mail you some ants. My friend expressed surprise that you could get ants sent to you in the mail.\nI replied: \"What's really interesting is that these people will send a tube of live ants to anyone you tell them to.\"\nSecurity requires a particular mindset. Security professionals—at least the good ones—see the world differently. They can't walk into a store without noticing how they might shoplift. They can't use a computer without wondering about the security vulnerabilities. They can't vote without trying to figure out how to vote twice. They just can't help it.\nSmartWater is a liquid with a unique identifier linked to a particular owner. \"The idea is for me to paint this stuff on my valuables as proof of ownership,\" I wrote when I first learned about the idea. \"I think a better idea would be for me to paint it on your valuables, and then call the police.\"\nReally, we can't help it.\nThis kind of thinking is not natural for most people. It's not natural for engineers. Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail…\nI've often speculated about how much of this is innate, and how much is teachable. 
In general, I think it's a particular way of looking at the world, and that it's far easier to teach someone domain expertise—cryptography or software security or safecracking or document forgery—than it is to teach someone a security mindset.\nTo be clear, the distinction between \"just ordinary paranoia\" and \"all of security mindset\" is my own; I think it's worth dividing the spectrum above the happy innocents into two levels rather than one, and saying, \"This business of looking at the world from weird angles is only half of what you need to learn, and it's the easier half.\"\nAMBER:  Maybe Bruce Schneier himself doesn't grasp what you mean when you say \"security mindset\", and you've simply stolen his term to refer to a whole new idea of your own!\nCORAL:  No, the thing with not wanting to have to reason about whether somebody might someday register \"donotreply.com\" and just fixing it regardless—a methodology that doesn't trust you to be clever about which problems will blow up—that's definitely part of what existing security professionals mean by \"security mindset\", and it's definitely part of the second and deeper half. The only unconventional thing in my presentation is that I'm factoring out an intermediate skill of \"ordinary paranoia\", where you try to parry an imagined attack by encrypting your password file and hiding the encryption key in a separate section of filesystem code. Coming up with the idea of hashing the password file is, I suspect, a qualitatively distinct skill, invoking a world whose dimensions are your own reasoning processes and not just object-level systems and attackers. Though it's not polite to say, and the usual suspects will interpret it as a status grab, my experience with other reflectivity-laden skills suggests this may mean that many people, possibly including you, will prove unable to think in this way.\nAMBER:  I indeed find that terribly impolite.\nCORAL:  It may indeed be impolite; I don't deny that. Whether it's untrue is a different question. The reason I say it is because, as much as I want ordinary paranoids to try to reach up to a deeper level of paranoia, I want them to be aware that it might not prove to be their thing, in which case they should get help and then listen to that help. They shouldn't assume that because they can notice the chance to have ants mailed to people, they can also pick up on the awfulness of donotreply@donotreply.com.\nAMBER:  Maybe you could call that \"deep security\" to distinguish it from what Bruce Schneier and other security professionals call \"security mindset\".\nCORAL:  \"Security mindset\" equals \"ordinary paranoia\" plus \"deep security\"? I'm not sure that's very good terminology, but I won't mind if you use the term that way.\nAMBER:  Suppose I take that at face value. Earlier, you described what might go wrong when a happy innocent tries and fails to be an ordinary paranoid. What happens when an ordinary paranoid tries to do something that requires the deep security skill?\nCORAL:  They believe they have wisely identified bad passwords as the real fire in need of putting out, and spend all their time writing more and more clever checks for bad passwords. They are very impressed with how much effort they have put into detecting bad passwords, and how much concern they have shown for system security.
They fall prey to the standard cognitive bias whose name I can't remember, where people want to solve a problem using one big effort or a couple of big efforts and then be done and not try anymore, and that's why people don't put up hurricane shutters once they're finished buying bottled water. Pay them to \"try harder\", and they'll hide seven encryption keys to the password file in seven different places, or build towers higher and higher in places where a successful adversary is obviously just walking around the tower if they've gotten through at all. What these ideas have in common is that they are in a certain sense \"shallow\". They are mentally straightforward as attempted parries against a particular kind of envisioned attack. They give you a satisfying sense of fighting hard against the imagined problem—and then they fail.\nAMBER:  Are you saying it's not a good idea to check that the user's password isn't \"password\"?\nCORAL:  No, shallow defenses are often good ideas too! But even there, somebody with the higher skill will try to look at things in a more systematic way; they know that there are often deeper ways of looking at the problem to be found, and they'll try to find those deep views. For example, it's extremely important that your password checker does not rule out the password \"correct horse battery staple\" by demanding the password contain at least one uppercase letter, lowercase letter, number, and punctuation mark. What you really want to do is measure password entropy. Not envision a failure mode of somebody guessing \"rainbow\", which you will cleverly balk by forcing the user to make their password be \"rA1nbow!\" instead.\nYou want the password entry field to have a checkbox that allows showing the typed password in plaintext, because your attempt to parry the imagined failure mode of some evildoer reading over the user's shoulder may get in the way of the user entering a long or high-entropy password. And the user is perfectly capable of typing their password into that convenient text field in the address bar above the web page, so they can copy and paste it—thereby sending your password to whoever tries to do smart lookups on the address bar. If you're really that worried about some evildoer reading over somebody's shoulder, maybe you should be sending a confirmation text to their phone, rather than forcing the user to enter their password into a nearby text field that they can actually read. Obscuring one text field, with no off-switch for the obscuration, to guard against this one bad thing that you imagined happening, while managing to step on your own feet in other ways and not even really guard against the bad thing; that's the peril of shallow defenses.\nAn archetypal character for \"ordinary paranoid who thinks he's trying really hard but is actually just piling on a lot of shallow precautions\" is Mad-Eye Moody from the Harry Potter series, who has a whole room full of Dark Detectors, and who also ends up locked in the bottom of somebody's trunk. It seems Mad-Eye Moody was too busy buying one more Dark Detector for his existing room full of Dark Detectors, and he didn't invent precautions deep enough and general enough to cover the unforeseen attack vector \"somebody tries to replace me using Polyjuice\".\nAnd the solution isn't to add on a special anti-Polyjuice potion. I mean, if you happen to have one, great, but that's not where most of your trust in the system should be coming from. 
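(A toy illustration of "measure password entropy" rather than demanding character classes; this is a deliberately naive estimator, assuming each character is drawn independently from the largest pool it could have come from, which a real checker would refine with dictionary and pattern awareness.)

```python
import math

def naive_entropy_bits(password: str) -> float:
    """Crude estimate: length times log2 of the apparent alphabet size."""
    alphabet = 0
    if any(c.islower() for c in password):
        alphabet += 26
    if any(c.isupper() for c in password):
        alphabet += 26
    if any(c.isdigit() for c in password):
        alphabet += 10
    if any(not c.isalnum() for c in password):
        alphabet += 33  # printable punctuation and space, roughly
    return len(password) * math.log2(max(alphabet, 1))

print(round(naive_entropy_bits("rA1nbow!")))                      # ~53 by this count; far weaker in practice
print(round(naive_entropy_bits("correct horse battery staple")))  # ~165 by this count; still ~44 modeled as four random common words
```

The deeper point is that the estimator is aimed at the quantity that actually matters, guess-resistance, instead of at the one imagined attacker who guesses "rainbow".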
The first lines of defense should have a sense about them of depth, of generality. Hashing password files, rather than hiding keys; thinking of how to measure password entropy, rather than requiring at least one uppercase character.\nAMBER:  Again this seems to me more like a quantitative difference in the cleverness of clever ideas, rather than two different modes of thinking.\nCORAL:  Real-world categories are often fuzzy, but to me these seem like the product of two different kinds of thinking. My guess is that the person who popularized demanding a mixture of letters, cases, and numbers was reasoning in a different way than the person who thought of measuring password entropy. But whether you call the distinction qualitative or quantitative, the distinction remains. Deep and general ideas—the kind that actually simplify and strengthen the edifice of reasoning supporting the system's safety—are invented more rarely and by rarer people. To build a system that can resist or even slow down an attack by multiple adversaries, some of whom may be smarter or more experienced than ourselves, requires a level of professionally specialized thinking that isn't reasonable to expect from every programmer—not even those who can shift their minds to take on the perspective of a single equally-smart adversary. What you should ask from an ordinary paranoid is that they appreciate that deeper ideas exist, and that they try to learn the standard deeper ideas that are already known; that they know their own skill is not the upper limit of what's possible, and that they ask a professional to come in and check their reasoning. And then actually listen.\nAMBER:  But if it's possible for people to think they have higher skills and be mistaken, how do you know that you are one of these rare people who truly has a deep security mindset? Might your high opinion of yourself just be due to the Dunning-Kruger effect?\nCORAL:  … Okay, that reminds me to give another caution.\nYes, there will be some innocents who can't believe that there's a talent called \"paranoia\" that they lack, who'll come up with weird imitations of paranoia if you ask them to be more worried about flaws in their brilliant encryption ideas. There will also be some people reading this with severe cases of social anxiety and underconfidence. Readers who are capable of ordinary paranoia and even security mindset, who might not try to develop these talents, because they are terribly worried that they might just be one of the people who only imagine themselves to have talent. Well, if you think you can feel the distinction between deep security ideas and shallow ones, you should at least try now and then to generate your own thoughts that resonate in you the same way.\nAMBER:  But won't that attitude encourage overconfident people to think they can be paranoid when they actually can't be, with the result that they end up too impressed with their own reasoning and ideas?\nCORAL:  I strongly suspect that they'll do that regardless. You're not actually promoting some kind of collective good practice that benefits everyone, just by personally agreeing to be modest. The overconfident don't care what you decide. 
And if you're not just as worried about underestimating yourself as overestimating yourself, if your fears about exceeding your proper place are asymmetric with your fears about lost potential and foregone opportunities, then you're probably dealing with an emotional issue rather than a strict concern with good epistemology.\nAMBER:  If somebody does have the talent for deep security, then, how can they train it?\nCORAL:  … That's a hell of a good question. Some interesting training methods have been developed for ordinary paranoia, like classes whose students have to figure out how to attack everyday systems outside of a computer-science context. One professor gave a test in which one of the questions was \"What are the first 100 digits of pi?\"—the point being that you need to find some way to cheat in order to pass the test. You should train that kind of ordinary paranoia first, if you haven't done that already.\nAMBER:  And then what? How do you graduate to deep security from ordinary paranoia?\nCORAL:  … Try to find more general defenses instead of parrying particular attacks? Appreciate the extent to which you're building ever-taller versions of towers that an adversary might just walk around? Ugh, no, that's too much like ordinary paranoia—especially if you're starting out with just ordinary paranoia. Let me think about this.\n…\nOkay, I have a screwy piece of advice that's probably not going to work. Write down the safety-story on which your belief in a system's security rests. Then ask yourself whether you actually included all the empirical assumptions. Then ask yourself whether you actually believe those empirical assumptions.\nAMBER:  So, like, if I'm building an operating system, I write down, \"Safety assumption: The login system works to keep out attackers\"—\nCORAL:  No!\nUh, no, sorry. As usual, it seems that what I think is \"advice\" has left out all the important parts anyone would need to actually do it.\nThat's not what I was trying to handwave at by saying \"empirical assumption\". You don't want to assume that parts of the system \"succeed\" or \"fail\"—that's not language that should appear in what you write down. You want the elements of the story to be strictly factual, not… value-laden, goal-laden? There shouldn't be reasoning that explicitly mentions what you want to have happen or not happen, just language neutrally describing the background facts of the universe. For brainstorming purposes you might write down \"Nobody can guess the password of any user with dangerous privileges\", but that's just a proto-statement which needs to be refined into more basic statements.\nAMBER:  I don't think I understood.\nCORAL:  \"Nobody can guess the password\" says, \"I believe the adversary will fail to guess the password.\" Why do you believe that?\nAMBER:  I see, so you want me to refine complex assumptions into systems of simpler assumptions. But if you keep asking \"why do you believe that\" you'll eventually end up back at the Big Bang and the laws of physics. How do I know when to stop?\nCORAL:  What you're trying to do is reduce the story past the point where you talk about a goal-laden event, \"the adversary fails\", and instead talk about neutral facts underlying that event. For now, just answer me: Why do you believe the adversary fails to guess the password?\nAMBER:  Because the password is too hard to guess.\nCORAL:  The phrase \"too hard\" is goal-laden language; it's your own desires for the system that determine what is \"too hard\". 
Without using concepts or language that refer to what you want, what is a neutral, factual description of what makes a password too hard to guess?\nAMBER:  The password has high-enough entropy that the attacker can't try enough attempts to guess it.\nCORAL:  We're making progress, but again, the term \"enough\" is goal-laden language. It's your own wants and desires that determine what is \"enough\". Can you say something else instead of \"enough\"?\nAMBER:  The password has sufficient entropy that—\nCORAL:  I don't mean find a synonym for \"enough\". I mean, use different concepts that aren't goal-laden. This will involve changing the meaning of what you write down.\nAMBER:  I'm sorry, I guess I'm not good enough at this.\nCORAL:  Not yet, anyway. Maybe not ever, but that isn't known, and you shouldn't assume it based on one failure.\nAnyway, what I was hoping for was a pair of statements like, \"I believe every password has at least 50 bits of entropy\" and \"I believe no attacker can make more than a trillion tries total at guessing any password\". Where the point of writing \"I believe\" is to make yourself pause and question whether you actually believe it.\nAMBER:  Isn't saying no attacker \"can\" make a trillion tries itself goal-laden language?\nCORAL:  Indeed, that assumption might need to be refined further via why-do-I-believe-that into, \"I believe the system rejects password attempts closer than 1 second together, I believe the attacker keeps this up for less than a month, and I believe the attacker launches fewer than 300,000 simultaneous connections.\" Where again, the point is that you then look at what you've written and say, \"Do I really believe that?\" To be clear, sometimes the answer will be \"Yes, I sure do believe that!\" This isn't a social modesty exercise where you show off your ability to have agonizing doubts and then you go ahead and do the same thing anyway. The point is to find out what you believe, or what you'd need to believe, and check that it's believable.\nAMBER:  And this trains a deep security mindset?\nCORAL:  … Maaaybe? I'm wildly guessing it might? It may get you to think in terms of stories and reasoning and assumptions alongside passwords and adversaries, and that puts your mind into a space that I think is at least part of the skill.\nIn point of fact, the real reason the author is listing out this methodology is that he's currently trying to do something similar on the problem of aligning Artificial General Intelligence, and he would like to move past \"I believe my AGI won't want to kill anyone\" and into a headspace more like writing down statements such as \"Although the space of potential weightings for this recurrent neural net does contain weight combinations that would figure out how to kill the programmers, I believe that gradient descent on loss function L will only access a result inside subspace Q with properties P, and I believe a space with properties P does not include any weight combinations that figure out how to kill the programmer.\"\nThough this itself is not really a reduced statement and still has too much goal-laden language in it. A realistic example would take us right out of the main essay here. 
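(For the password version of the exercise, the arithmetic behind those sample beliefs can be written out and checked; a quick sketch, assuming a generic 30-day month on top of the numbers Coral just gave.)

```python
entropy_bits = 50
guess_space = 2 ** entropy_bits        # ~1.1e15 possibilities, if every password really has 50 bits

seconds_per_attempt = 1                # "rejects password attempts closer than 1 second together"
attack_seconds = 30 * 24 * 3600        # "keeps this up for less than a month"
connections = 300_000                  # "fewer than 300,000 simultaneous connections"

max_tries = (attack_seconds // seconds_per_attempt) * connections
print(f"{max_tries:.1e} tries")                         # ~7.8e11, under the trillion-try belief
print(f"{max_tries / guess_space:.1e} success chance")  # ~6.9e-4 in the worst case
```

Writing it out is what lets you notice which assumption is doing the most work (here, the connection count) and ask whether you actually believe it.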
But the author does hope that practicing this way of thinking can help lead people into building more solid stories about robust systems, if they already have good ordinary paranoia and some fairly mysterious innate talents.\n\n\n \nContinued in: Security Mindset and the Logistic Success Curve.\n \nThe post Security Mindset and Ordinary Paranoia appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Security Mindset and Ordinary Paranoia", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=19", "id": "44cde415aabed795acf34a3e3ddf72b4"} {"text": "Announcing \"Inadequate Equilibria\"\n\n\nMIRI Senior Research Fellow Eliezer Yudkowsky has a new book out today: Inadequate Equilibria: Where and How Civilizations Get Stuck, a discussion of societal dysfunction, exploitability, and self-evaluation. From the preface:\n\nInadequate Equilibria is a book about a generalized notion of efficient markets, and how we can use this notion to guess where society will or won't be effective at pursuing some widely desired goal.An efficient market is one where smart individuals should generally doubt that they can spot overpriced or underpriced assets. We can ask an analogous question, however, about the \"efficiency\" of other human endeavors.\nSuppose, for example, that someone thinks they can easily build a much better and more profitable social network than Facebook, or easily come up with a new treatment for a widespread medical condition. Should they question whatever clever reasoning led them to that conclusion, in the same way that most smart individuals should question any clever reasoning that causes them to think AAPL stock is underpriced? Should they question whether they can \"beat the market\" in these areas, or whether they can even spot major in-principle improvements to the status quo? How \"efficient,\" or adequate, should we expect civilization to be at various tasks?\nThere will be, as always, good ways and bad ways to reason about these questions; this book is about both.\n\nThe book is available from Amazon (in print and Kindle), on iBooks, as a pay-what-you-want digital download, and as a web book at equilibriabook.com. The book has also been posted to Less Wrong 2.0.\nThe book's contents are:\n\n\n1.  Inadequacy and Modesty\nA comparison of two \"wildly different, nearly cognitively nonoverlapping\" approaches to thinking about outperformance: modest epistemology, and inadequacy analysis.\n2.  An Equilibrium of No Free Energy\nHow, in principle, can society end up neglecting obvious low-hanging fruit?\n3.  Moloch's Toolbox\nWhy does our civilization actually end up neglecting low-hanging fruit?\n4.  Living in an Inadequate World\nHow can we best take into account civilizational inadequacy in our decision-making?\n5.  Blind Empiricism\nThree examples of modesty in practical settings.\n6.  Against Modest Epistemology\nAn argument against the \"epistemological core\" of modesty: that we shouldn't take our own reasoning and meta-reasoning at face value in cases in the face of disagreements or novelties.\n7.  
Status Regulation and Anxious Underconfidence\nOn causal accounts of modesty.\n\n\nAlthough Inadequate Equilibria isn't about AI, I consider it one of MIRI's most important nontechnical publications to date, as it helps explain some of the most basic tools and background models we use when we evaluate how promising a potential project, research program, or general strategy is.\nThe post Announcing \"Inadequate Equilibria\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Announcing “Inadequate Equilibria”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=19", "id": "6d7cc4d2c3b6fb1eeebf61230f550ab7"} {"text": "A major grant from the Open Philanthropy Project\n\nI'm thrilled to announce that the Open Philanthropy Project has awarded MIRI a three-year $3.75 million general support grant ($1.25 million per year). This grant is, by far, the largest contribution MIRI has received to date, and will have a major effect on our plans going forward.\nThis grant follows a $500,000 grant we received from the Open Philanthropy Project in 2016. The Open Philanthropy Project's announcement for the new grant notes that they are \"now aiming to support about half of MIRI's annual budget\".1 The annual $1.25 million represents 50% of a conservative estimate we provided to the Open Philanthropy Project of the amount of funds we expect to be able to usefully spend in 2018–2020.\nThis expansion in support was also conditional on our ability to raise the other 50% from other supporters. For that reason, I sincerely thank all of the past and current supporters who have helped us get to this point.\nThe Open Philanthropy Project has expressed openness to potentially increasing their support if MIRI is in a position to usefully spend more than our conservative estimate, if they believe that this increase in spending is sufficiently high-value, and if we are able to secure additional outside support to ensure that the Open Philanthropy Project isn't providing more than half of our total funding.\nWe'll be going into more details on our future organizational plans in a follow-up post December 1, where we'll also discuss our end-of-the-year fundraising goals.\nIn their write-up, the Open Philanthropy Project notes that they have updated favorably about our technical output since 2016, following our logical induction paper:\nWe received a very positive review of MIRI's work on \"logical induction\" by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of our close advisors, and (iii) is generally regarded as outstanding by the ML community. As mentioned above, we previously had difficulty evaluating the technical quality of MIRI's research, and we previously could find no one meeting criteria (i) – (iii) to a comparable extent who was comparably excited about MIRI's technical research. While we would not generally offer a comparable grant to any lab on the basis of this consideration alone, we consider this a significant update in the context of the original case for the [2016] grant (especially MIRI's thoughtfulness on this set of issues, value alignment with us, distinctive perspectives, and history of work in this area). 
While the balance of our technical advisors' opinions and arguments still leaves us skeptical of the value of MIRI's research, the case for the statement \"MIRI's research has a nontrivial chance of turning out to be extremely valuable (when taking into account how different it is from other research on AI safety)\" appears much more robust than it did before we received this review.\nThe announcement also states, \"In the time since our initial grant to MIRI, we have made several more grants within this focus area, and are therefore less concerned that a larger grant will signal an outsized endorsement of MIRI's approach.\"\nWe're enormously grateful for the Open Philanthropy Project's support, and for their deep engagement with the AI safety field as a whole. To learn more about our discussions with the Open Philanthropy Project and their active work in this space, see the group's previous AI safety grants, our conversation with Daniel Dewey on the Effective Altruism Forum, and the research problems outlined in the Open Philanthropy Project's recent AI fellows program description.\nThe Open Philanthropy Project usually prefers not to provide more than half of an organization's funding, to facilitate funder coordination and ensure that organizations it supports maintain their independence. From a March blog post: \"We typically avoid situations in which we provide >50% of an organization's funding, so as to avoid creating a situation in which an organization's total funding is 'fragile' as a result of being overly dependent on us.\"The post A major grant from the Open Philanthropy Project appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "A major grant from the Open Philanthropy Project", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=19", "id": "74fa0f2081af87a0a7a77af2869cf802"} {"text": "November 2017 Newsletter\n\nEliezer Yudkowsky has written a new book on civilizational dysfunction and outperformance: Inadequate Equilibria: Where and How Civilizations Get Stuck. The full book will be available in print and electronic formats November 16. To preorder the ebook or sign up for updates, visit equilibriabook.com.\nWe're posting the full contents online in stages over the next two weeks. The first two chapters are:\n\nInadequacy and Modesty (discussion: LessWrong, EA Forum, Hacker News)\nAn Equilibrium of No Free Energy (discussion: LessWrong, EA Forum)\n\n \nResearch updates\n\nA new paper: \"Functional Decision Theory: A New Theory of Instrumental Rationality\" (arXiv), by Eliezer Yudkowsky and Nate Soares.\nNew research write-ups and discussions: Comparing Logical Inductor CDT and Logical Inductor EDT; Logical Updatelessness as a Subagent Alignment Problem; Mixed-Strategy Ratifiability Implies CDT=EDT\nNew from AI Impacts: Computing Hardware Performance Data Collections\nThe Workshop on Reliable Artificial Intelligence took place at ETH Zürich, hosted by MIRIxZürich.\n\nGeneral updates\n\nDeepMind announces a new version of AlphaGo that achieves superhuman performance within three days, using 4 TPUs and no human training data. Eliezer Yudkowsky argues that AlphaGo Zero provides supporting evidence for his position in the AI foom debate; Robin Hanson responds. 
See also Paul Christiano on AlphaGo Zero and capability amplification.\nYudkowsky on AGI ethics: \"The ethics of bridge-building is to not have your bridge fall down and kill people and there is a frame of mind in which this obviousness is obvious enough. How not to have the bridge fall down is hard.\"\nNate Soares gave his ensuring smarter-than-human AI has a positive outcome talk at the O'Reilly AI Conference (slides).\n\nNews and links\n\n\"Protecting Against AI's Existential Threat\": a Wall Street Journal op-ed by OpenAI's Ilya Sutskever and Dario Amodei.\nOpenAI announces \"a hierarchical reinforcement learning algorithm that learns high-level actions useful for solving a range of tasks\".\nDeepMind's Viktoriya Krakovna reports on the first Tokyo AI & Society Symposium.\nNick Bostrom speaks and CSER submits written evidence to the UK Parliament's Artificial Intelligence Committee.\nRob Wiblin interviews Nick Beckstead for the 80,000 Hours podcast.\n\nThe post November 2017 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "November 2017 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=20", "id": "2d2b5e6665c67a28f496a2ce86c7b6f2"} {"text": "New paper: \"Functional Decision Theory\"\n\n\nMIRI senior researcher Eliezer Yudkowsky and executive director Nate Soares have a new introductory paper out on decision theory: \"Functional decision theory: A new theory of instrumental rationality.\"\nAbstract:\nThis paper describes and motivates a new decision theory known as functional decision theory (FDT), as distinct from causal decision theory and evidential decision theory.\nFunctional decision theorists hold that the normative principle for action is to treat one's decision as the output of a fixed mathematical function that answers the question, \"Which output of this very function would yield the best outcome?\" Adhering to this principle delivers a number of benefits, including the ability to maximize wealth in an array of traditional decision-theoretic and game-theoretic problems where CDT and EDT perform poorly. Using one simple and coherent decision rule, functional decision theorists (for example) achieve more utility than CDT on Newcomb's problem, more utility than EDT on the smoking lesion problem, and more utility than both in Parfit's hitchhiker problem.\nIn this paper, we define FDT, explore its prescriptions in a number of different decision problems, compare it to CDT and EDT, and give philosophical justifications for FDT as a normative theory of decision-making.\nOur previous introductory paper on FDT, \"Cheating Death in Damascus,\" focused on comparing FDT's performance to that of CDT and EDT in fairly high-level terms. Yudkowsky and Soares' new paper puts a much larger focus on FDT's mechanics and motivations, making \"Functional Decision Theory\" the most complete stand-alone introduction to the theory.1\n\nContents:\n1. Overview.\n2. Newcomb's Problem and the Smoking Lesion Problem. In terms of utility gained, conventional EDT outperforms CDT in Newcomb's problem, while underperforming CDT in the smoking lesion problem. Both CDT and EDT have therefore appeared unsatisfactory as expected utility theories, and the debate between the two has remained at an impasse.
FDT, however, offers an elegant criterion for matching EDT's performance in the former class of dilemmas, while also matching CDT's performance in the latter class of dilemmas.\n3. Subjunctive Dependence. FDT can be thought of as a modification of CDT that relies, not on causal dependencies, but on a wider class of subjunctive dependencies that includes causal dependencies as a special case.\n4. Parfit's Hitchhiker. FDT's novel properties can be more readily seen in Parfit's hitchhiker problem, where both CDT and EDT underperform FDT. Yudkowsky and Soares note three considerations favoring FDT over traditional theories: an argument from precommitment, an argument from information value, and an argument from utility.\n5. Formalizing EDT, CDT, and FDT. To lend precision to the claim that a given decision theory prescribes a given action, Yudkowsky and Soares define algorithms implementing each theory.\n6. Comparing the Three Decision Algorithms' Behavior. Yudkowsky and Soares then revisit Newcomb's problem, the smoking lesion problem, and Parfit's hitchhiker problem, algorithms in hand.\n7. Diagnosing EDT: Conditionals as Counterfactuals. The core problem with EDT and CDT is that the hypothetical scenarios that they consider are malformed. EDT works by conditioning on joint probability distributions, which causes problems when correlations are spurious.\n8. Diagnosing CDT: Impossible Interventions. CDT, meanwhile, works by considering strictly causal counterfactuals, which causes problems when it wrongly treats unavoidable correlations as though they can be broken.\n9: The Global Perspective. FDT's form of counterpossible reasoning allows agents to respect a broader set of real-world dependencies than CDT can, while excluding EDT's spurious dependencies. We can understand FDT as reflecting a \"global perspective\" on which decision-theoretic agents should seek to have the most desirable decision type, as opposed to the most desirable decision token.\n10. Conclusion.\nWe use the term \"functional decision theory\" because FDT invokes the idea that decision-theoretic agents can be thought of as implementing deterministic functions from goals and observation histories to actions.2 We can see this feature clearly in Newcomb's problem, where an FDT agent—let's call her Fiona, as in the paper—will reason as follows:\nOmega knows the decision I will reach—they are somehow computing the same decision function I am on the same inputs, and using that function's output to decide how many boxes to fill. Suppose, then, that the decision function I'm implementing outputs \"one-box.\" The same decision function, implemented in Omega, must then also output \"one-box.\" In that case, Omega will fill the opaque box, and I'll get its contents. (+$1,000,000.)\nOr suppose that instead I take both boxes. In that case, my decision function outputs \"two-box,\" Omega will leave the opaque box empty, and I'll get the contents of both boxes. (+$1,000.)\nThe first scenario has higher expected utility; therefore my decision function hereby outputs \"one-box.\"\nUnlike a CDT agent that restricts itself to purely causal dependencies, Fiona's decision-making is able to take into account the dependencies between Omega's actions and her reasoning process itself. As a consequence, Fiona will tend to come away with far more money than CDT agents.\nAt the same time, FDT avoids the standard pitfalls EDT runs into, e.g., in the smoking lesion problem. 
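(As a toy rendering of the comparison Fiona runs, here is a short sketch of the reasoning described above; it is an illustration of the idea, not the formal algorithms defined in the paper.)

```python
def newcomb_payoff(decision_type: str) -> int:
    """Payoff when Omega, computing the same decision function, fills the opaque box iff that function one-boxes."""
    opaque = 1_000_000 if decision_type == "one-box" else 0
    transparent = 1_000
    return opaque if decision_type == "one-box" else opaque + transparent

# FDT-style choice: pick the output of the decision function with the best payoff,
# letting Omega's prediction covary with that output rather than holding it fixed.
best = max(["one-box", "two-box"], key=newcomb_payoff)
print(best, newcomb_payoff(best))  # one-box 1000000
```

A CDT agent evaluates the same table while holding the box contents fixed, which is exactly the subjunctive dependence FDT declines to ignore.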
The smoking lesion problem has a few peculiarities, such as the potential for agents to appeal to the \"tickle defense\" of Ellery Eells; but we can more clearly illustrate EDT's limitations with the XOR blackmail problem, where tickle defenses are of no help to EDT.\nIn the XOR blackmail problem, an agent hears a rumor that their house has been infested by termites, at a repair cost of $1,000,000. The next day, the agent receives a letter from the trustworthy predictor Omega saying:\nI know whether or not you have termites, and I have sent you this letter iff exactly one of the following is true: (i) the rumor is false, and you are going to pay me $1,000 upon receiving this letter; or (ii) the rumor is true, and you will not pay me upon receiving this letter.\nIn this dilemma, EDT agents pay up, reasoning that it would be bad news to learn that they have termites—in spite of the fact that their termite-riddenness does not depend, either causally or otherwise, on whether they pay.\nIn contrast, Fiona the FDT agent reasons in a similar fashion to how she does in Newcomb's problem:\nSince Omega's decision to send the letter is based on a reliable prediction of whether I'll pay, Omega and I must both be computing the same decision function. Suppose, then, that my decision function outputs \"don't pay\" on input \"letter.\" In the cases where I have termites, Omega will then send me this letter and I won't pay (−$1,000,000); while if I don't have termites, Omega won't send the letter (−$0).\nOn the other hand, suppose that my decision function outputs \"pay\" on input \"letter.\" Then, in the case where I have termites, Omega doesn't send the letter (−$1,000,000), and in the case where I don't have termites, Omega sends the letter and I pay (−$1,000).\nMy decision function determines whether I conditionally pay and whether Omega conditionally sends a letter. But the termites aren't predicting me, aren't computing my decision function at all. So if my decision function's output is \"pay,\" that doesn't change the termites' behavior and doesn't benefit me at all; so I don't pay.\nUnlike the EDT agent, Fiona correctly takes into account that paying won't increase her utility in the XOR blackmail dilemma; and unlike the CDT agent, Fiona takes into account that one-boxing will increase her utility in Newcomb's problem.\nFDT, then, provides an elegant alternative to both traditional theories, simultaneously offering us a simpler and more general rule for expected utility maximization in practice, and a more satisfying philosophical account of rational decision-making in principle.\nFor additional discussion of FDT, I recommend \"Decisions Are For Making Bad Outcomes Inconsistent,\" a conversation exploring the counter-intuitive fact that in order to decide what action to output, a decision-theoretic agent must be able to consider hypothetical scenarios in which their deterministic decision function outputs something other than what it outputs in fact.3\n \n\"Functional Decision Theory\" was originally drafted prior to \"Cheating Death in Damascus,\" and was significantly longer before we received various rounds of feedback from the philosophical community.
\"Cheating Death in Damascus\" was produced from material that was cut from early drafts; other cut material included a discussion of proof-based decision theory, and some Death in Damascus variants left on the cutting room floor for being needlessly cruel to CDT.\nTo cover mixed strategies in this context, we can assume that one of the sensory inputs to the agent is a random number.\nMany of the hypotheticals an agent must consider are internally inconsistent: a deterministic function only has one possible output on a given input, but agents must base their decisions on the expected utility of many different \"possible\" actions in order to choose the best action. E.g., in Newcomb's problem, FDT and EDT agents must evaluate the expected utility of two-boxing in order to weigh their options and arrive at their final decision at all, even though it would be inconsistent for such an agent to two-box; and likewise, CDT must evaluate the expected utility of the impossible hypothetical where a CDT agent one-boxes.\nAlthough poorly-understood theoretically, this kind of counterpossible reasoning seems to be entirely feasible in practice. Even though a false conjecture classically implies all propositions, mathematicians routinely reason in a meaningful and nontrivial way with hypothetical scenarios in which a conjecture has different truth-values. The problem of how to best represent counterpossible reasoning in a formal setting, however, remains unsolved.\nThe post New paper: \"Functional Decision Theory\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New paper: “Functional Decision Theory”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=20", "id": "ba30cca912a119124fe977cd97684bcd"} {"text": "AlphaGo Zero and the Foom Debate\n\nAlphaGo Zero uses 4 TPUs, is built entirely out of neural nets with no handcrafted features, doesn't pretrain against expert games or anything else human, reaches a superhuman level after 3 days of self-play, and is the strongest version of AlphaGo yet.\nThe architecture has been simplified. Previous AlphaGo had a policy net that predicted good plays, and a value net that evaluated positions, both feeding into lookahead using MCTS (random probability-weighted plays out to the end of a game).
AlphaGo Zero has one neural net that selects moves and this net is trained by Paul-Christiano-style capability amplification, playing out games against itself to learn new probabilities for winning moves.\nAs others have also remarked, this seems to me to be an element of evidence that favors the Yudkowskian position over the Hansonian position in my and Robin Hanson's AI-foom debate.\nAs I recall and as I understood:\n\nHanson doubted that what he calls \"architecture\" is much of a big deal, compared to (Hanson said) elements like cumulative domain knowledge, or special-purpose components built by specialized companies in what he expects to be an ecology of companies serving an AI economy.\nWhen I remarked upon how it sure looked to me like humans had an architectural improvement over chimpanzees that counted for a lot, Hanson replied that this seemed to him like a one-time gain from allowing the cultural accumulation of knowledge.\n\nI emphasize how all the mighty human edifice of Go knowledge, the joseki and tactics developed over centuries of play, the experts teaching children from an early age, was entirely discarded by AlphaGo Zero with a subsequent performance improvement. These mighty edifices of human knowledge, as I understand the Hansonian thesis, are supposed to be the bulwark against rapid gains in AI capability across multiple domains at once. I said, \"Human intelligence is crap and our accumulated skills are crap,\" and this appears to have been borne out.\nSimilarly, single research labs like DeepMind are not supposed to pull far ahead of the general ecology, because adapting AI to any particular domain is supposed to require lots of components developed all over the place by a market ecology that makes those components available to other companies. AlphaGo Zero is much simpler than that. To the extent that nobody else can run out and build AlphaGo Zero, it's either because Google has Tensor Processing Units that aren't generally available, or because DeepMind has a silo of expertise for being able to actually make use of existing ideas like ResNets, or both.\nSheer speed of capability gain should also be highlighted here. Most of my argument for FOOM in the Yudkowsky-Hanson debate was about self-improvement and what happens when an optimization loop is folded in on itself. Though it wasn't necessary to my argument, the fact that Go play went from \"nobody has come close to winning against a professional\" to \"so strongly superhuman they're not really bothering any more\" over two years just because that's what happens when you improve and simplify the architecture, says you don't even need self-improvement to get things that look like FOOM.\nYes, Go is a closed system allowing for self-play. It still took humans centuries to learn how to play it. Perhaps the new Hansonian bulwark against rapid capability gain can be that the environment has lots of empirical bits that are supposed to be very hard to learn, even in the limit of AI thoughts fast enough to blow past centuries of human-style learning in 3 days; and that humans have learned these vital bits over centuries of cultural accumulation of knowledge, even though we know that humans take centuries to do 3 days of AI learning when humans have all the empirical bits they need; and that AIs cannot absorb this knowledge very quickly using \"architecture\", even though humans learn it from each other using architecture. 
If so, then let's write down this new world-wrecking assumption (that is, the world ends if the assumption is false) and be on the lookout for further evidence that this assumption might perhaps be wrong.\nAlphaGo clearly isn't a general AI. There's obviously stuff humans do that make us much more general than AlphaGo, and AlphaGo obviously doesn't do that. However, if even with the human special sauce we're to expect AGI capabilities to be slow, domain-specific, and requiring feed-in from a big market ecology, then the situation we see without human-equivalent generality special sauce should not look like this.\nTo put it another way, I put a lot of emphasis in my debate on recursive self-improvement and the remarkable jump in generality across the change from primate intelligence to human intelligence. It doesn't mean we can't get info about speed of capability gains without self-improvement. It doesn't mean we can't get info about the importance and generality of algorithms without the general intelligence trick. The debate can start to settle for fast capability gains before we even get to what I saw as the good parts; I wouldn't have predicted AlphaGo and lost money betting against the speed of its capability gains, because reality held a more extreme position than I did on the Yudkowsky-Hanson spectrum.\n(Reply from Robin Hanson.)\nThe post AlphaGo Zero and the Foom Debate appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "AlphaGo Zero and the Foom Debate", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=20", "id": "3ed64bdbbadd86e0281718b35e38ac74"} {"text": "October 2017 Newsletter\n\n\"So far as I can presently estimate, now that we've had AlphaGo and a couple of other maybe/maybe-not shots across the bow, and seen a huge explosion of effort invested into machine learning and an enormous flood of papers, we are probably going to occupy our present epistemic state until very near the end.\n\"[…I]t's hard to guess how many further insights are needed for AGI, or how long it will take to reach those insights. After the next breakthrough, we still won't know how many more breakthroughs are needed, leaving us in pretty much the same epistemic state as before. […] You can either act despite that, or not act. Not act until it's too late to help much, in the best case; not act at all until after it's essentially over, in the average case.\"\nRead more in a new blog post by Eliezer Yudkowsky: \"There's No Fire Alarm for Artificial General Intelligence.\" (Discussion on LessWrong 2.0, Hacker News.)\nResearch updates\n\nNew research write-ups and discussions: The Doomsday Argument in Anthropic Decision Theory; Smoking Lesion Steelman II\nNew from AI Impacts: What Do ML Researchers Think You Are Wrong About?, When Do ML Researchers Think Specific Tasks Will Be Automated?\n\nGeneral updates\n\n\"Is Tribalism a Natural Malfunction?\": Nautilus discusses MIRI's work on decision theory, superrationality, and the prisoner's dilemma.\nWe helped run the 2017 AI Summer Fellows Program with the Center for Applied Rationality, and taught at the European Summer Program on Rationality.\nWe're very happy to announce that we've received a $100,000 grant from the Berkeley Existential Risk Initiative and Jaan Tallinn, as well as over $30,000 from Raising for Effective Giving and a pledge of $55,000 from PokerStars through REG. 
We'll be providing more information on our funding situation in advance of our December fundraiser.\nLessWrong is currently hosting an open beta for a site redesign at lesswrong.com; see Oliver Habryka's strategy write-up.\n\nNews and links\n\nHillary Clinton and Vladimir Putin voice worries about the impacts of AI technology.\nThe Future of Life Institute discusses Dan Weld's work on explainable AI.\nResearchers at OpenAI and Oxford release Learning with Opponent-Learning Awareness, an RL algorithm that takes into account how its choice of policy can change other agents' strategy, enabling cooperative behavior in some simple multi-agent settings.\nFrom Carrick Flynn of the Future of Humanity Institute: Personal Thoughts on Careers in AI Policy and Strategy.\nGoodhart's Imperius: A discussion of Goodhart's Law and human psychology.\n\nThe post October 2017 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "October 2017 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=20", "id": "8acbaaf95a66160a6bbe8987ee3a2123"} {"text": "There's No Fire Alarm for Artificial General Intelligence\n\n\n \nWhat is the function of a fire alarm?\n \nOne might think that the function of a fire alarm is to provide you with important evidence about a fire existing, allowing you to change your policy accordingly and exit the building.\nIn the classic experiment by Latane and Darley in 1968, eight groups of three students each were asked to fill out a questionnaire in a room that shortly after began filling up with smoke. Five out of the eight groups didn't react or report the smoke, even as it became dense enough to make them start coughing. Subsequent manipulations showed that a lone student will respond 75% of the time; while a student accompanied by two actors told to feign apathy will respond only 10% of the time. This and other experiments seemed to pin down that what's happening is pluralistic ignorance. We don't want to look panicky by being afraid of what isn't an emergency, so we try to look calm while glancing out of the corners of our eyes to see how others are reacting, but of course they are also trying to look calm.\n(I've read a number of replications and variations on this research, and the effect size is blatant. I would not expect this to be one of the results that dies to the replication crisis, and I haven't yet heard about the replication crisis touching it. But we have to put a maybe-not marker on everything now.)\nA fire alarm creates common knowledge, in the you-know-I-know sense, that there is a fire; after which it is socially safe to react. When the fire alarm goes off, you know that everyone else knows there is a fire, you know you won't lose face if you proceed to exit the building.\nThe fire alarm doesn't tell us with certainty that a fire is there. In fact, I can't recall one time in my life when, exiting a building on a fire alarm, there was an actual fire. Really, a fire alarm is weaker evidence of fire than smoke coming from under a door.\nBut the fire alarm tells us that it's socially okay to react to the fire. It promises us with certainty that we won't be embarrassed if we now proceed to exit in an orderly fashion.\nIt seems to me that this is one of the cases where people have mistaken beliefs about what they believe, like when somebody loudly endorsing their city's team to win the big game will back down as soon as asked to bet. 
They haven't consciously distinguished the rewarding exhilaration of shouting that the team will win, from the feeling of anticipating the team will win.\nWhen people look at the smoke coming from under the door, I think they think their uncertain wobbling feeling comes from not assigning the fire a high-enough probability of really being there, and that they're reluctant to act for fear of wasting effort and time. If so, I think they're interpreting their own feelings mistakenly. If that was so, they'd get the same wobbly feeling on hearing the fire alarm, or even more so, because fire alarms correlate to fire less than does smoke coming from under a door. The uncertain wobbling feeling comes from the worry that others believe differently, not the worry that the fire isn't there. The reluctance to act is the reluctance to be seen looking foolish, not the reluctance to waste effort. That's why the student alone in the room does something about the fire 75% of the time, and why people have no trouble reacting to the much weaker evidence presented by fire alarms.\n \n\n \nIt's now and then proposed that we ought to start reacting later to the issues of Artificial General Intelligence (background here), because, it is said, we are so far away from it that it just isn't possible to do productive work on it today.\n(For direct argument about there being things doable today, see: Soares and Fallenstein (2014/2017); Amodei, Olah, Steinhardt, Christiano, Schulman, and Mané (2016); or Taylor, Yudkowsky, LaVictoire, and Critch (2016).)\n(If none of those papers existed or if you were an AI researcher who'd read them but thought they were all garbage, and you wished you could work on alignment but knew of nothing you could do, the wise next step would be to sit down and spend two hours by the clock sincerely trying to think of possible approaches. Preferably without self-sabotage that makes sure you don't come up with anything plausible; as might happen if, hypothetically speaking, you would actually find it much more comfortable to believe there was nothing you ought to be working on today, because e.g. then you could work on other things that interested you more.)\n(But never mind.)\nSo if AGI seems far-ish away, and you think the conclusion licensed by this is that you can't do any productive work on AGI alignment yet, then the implicit alternative strategy on offer is: Wait for some unspecified future event that tells us AGI is coming near; and then we'll all know that it's okay to start working on AGI alignment.\nThis seems to me to be wrong on a number of grounds. Here are some of them.\n\n \nOne: As Stuart Russell observed, if you get radio signals from space and spot a spaceship there with your telescopes and you know the aliens are landing in thirty years, you still start thinking about that today.\nYou're not like, \"Meh, that's thirty years off, whatever.\" You certainly don't casually say \"Well, there's nothing we can do until they're closer.\" Not without spending two hours, or at least five minutes by the clock, brainstorming about whether there is anything you ought to be starting now.\nIf you said the aliens were coming in thirty years and you were therefore going to do nothing today… well, if these were more effective times, somebody would ask for a schedule of what you thought ought to be done, starting when, how long before the aliens arrive. 
If you didn't have that schedule ready, they'd know that you weren't operating according to a worked table of timed responses, but just procrastinating and doing nothing; and they'd correctly infer that you probably hadn't searched very hard for things that could be done today.\nIn Bryan Caplan's terms, anyone who seems quite casual about the fact that \"nothing can be done now to prepare\" about the aliens is missing a mood; they should be much more alarmed at not being able to think of any way to prepare. And maybe ask if somebody else has come up with any ideas? But never mind.\n \nTwo: History shows that for the general public, and even for scientists not in a key inner circle, and even for scientists in that key circle, it is very often the case that key technological developments still seem decades away, five years before they show up.\nIn 1901, two years before helping build the first heavier-than-air flyer, Wilbur Wright told his brother that powered flight was fifty years away.\nIn 1939, three years before he personally oversaw the first critical chain reaction in a pile of uranium bricks, Enrico Fermi voiced 90% confidence that it was impossible to use uranium to sustain a fission chain reaction. I believe Fermi also said a year after that, aka two years before the denouement, that if net power from fission was even possible (as he then granted some greater plausibility) then it would be fifty years off; but for this I neglected to keep the citation.\nAnd of course if you're not the Wright Brothers or Enrico Fermi, you will be even more surprised. Most of the world learned that atomic weapons were now a thing when they woke up to the headlines about Hiroshima. There were esteemed intellectuals saying four years after the Wright Flyer that heavier-than-air flight was impossible, because knowledge propagated more slowly back then.\nWere there events that, in hindsight, today, we can see as signs that heavier-than-air flight or nuclear energy were nearing? Sure, but if you go back and read the actual newspapers from that time and see what people actually said about it then, you'll see that they did not know that these were signs, or that they were very uncertain that these might be signs. Some playing the part of Excited Futurists proclaimed that big changes were imminent, I expect, and others playing the part of Sober Scientists tried to pour cold water on all that childish enthusiasm; I expect that part was more or less exactly the same decades earlier. If somewhere in that din was a superforecaster who said \"decades\" when it was decades and \"5 years\" when it was five, good luck noticing them amid all the noise. More likely, the superforecasters were the ones who said \"Could be tomorrow, could be decades\" both when the big development was a day away and when it was decades away.\nOne of the major modes by which hindsight bias makes us feel that the past was more predictable than anyone was actually able to predict at the time, is that in hindsight we know what we ought to notice, and we fixate on only one thought as to what each piece of evidence indicates. If you look at what people actually say at the time, historically, they've usually got no clue what's about to happen three months before it happens, because they don't know which signs are which.\nI mean, you could say the words \"AGI is 50 years away\" and have those words happen to be true. 
People were also saying that powered flight was decades away when it was in fact decades away, and those people happened to be right. The problem is that everything looks the same to you either way, if you are actually living history instead of reading about it afterwards.\nIt's not that whenever somebody says \"fifty years\" the thing always happens in two years. It's that this confident prediction of things being far away corresponds to an epistemic state about the technology that feels the same way internally until you are very very close to the big development. It's the epistemic state of \"Well, I don't see how to do the thing\" and sometimes you say that fifty years off from the big development, and sometimes you say it two years away, and sometimes you say it while the Wright Flyer is flying somewhere out of your sight.\n \nThree: Progress is driven by peak knowledge, not average knowledge.\nIf Fermi and the Wrights couldn't see it coming three years out, imagine how hard it must be for anyone else to see it.\nIf you're not at the global peak of knowledge of how to do the thing, and looped in on all the progress being made at what will turn out to be the leading project, you aren't going to be able to see of your own knowledge at all that the big development is imminent. Unless you are very good at perspective-taking in a way that wasn't necessary in a hunter-gatherer tribe, and very good at realizing that other people may know techniques and ideas of which you have no inkling even that you do not know them. If you don't consciously compensate for the lessons of history in this regard; then you will promptly say the decades-off thing. Fermi wasn't still thinking that net nuclear energy was impossible or decades away by the time he got to 3 months before he built the first pile, because at that point Fermi was looped in on everything and saw how to do it. But anyone not looped in probably still felt like it was fifty years away while the actual pile was fizzing away in a squash court at the University of Chicago.\nPeople don't seem to automatically compensate for the fact that the timing of the big development is a function of the peak knowledge in the field, a threshold touched by the people who know the most and have the best ideas; while they themselves have average knowledge; and therefore what they themselves know is not strong evidence about when the big development happens. I think they aren't thinking about that at all, and they just eyeball it using their own sense of difficulty. If they are thinking anything more deliberate and reflective than that, and incorporating real work into correcting for the factors that might bias their lenses, they haven't bothered writing down their reasoning anywhere I can read it.\nTo know that AGI is decades away, we would need enough understanding of AGI to know what pieces of the puzzle are missing, and how hard these pieces are to obtain; and that kind of insight is unlikely to be available until the puzzle is complete. Which is also to say that to anyone outside the leading edge, the puzzle will look more incomplete than it looks on the edge. That project may publish their theories in advance of proving them, although I hope not. But there are unproven theories now too.\nAnd again, that's not to say that people saying \"fifty years\" is a certain sign that something is happening in a squash court; they were saying \"fifty years\" sixty years ago too. 
It's saying that anyone who thinks technological timelines are actually forecastable, in advance, by people who are not looped in to the leading project's progress reports and who don't share all the best ideas about exactly how to do the thing and how much effort is required for that, is learning the wrong lesson from history. In particular, from reading history books that neatly lay out lines of progress and their visible signs that we all know now were important and evidential. It's sometimes possible to say useful conditional things about the consequences of the big development whenever it happens, but it's rarely possible to make confident predictions about the timing of those developments, beyond a one- or two-year horizon. And if you are one of the rare people who can call the timing, if people like that even exist, nobody else knows to pay attention to you and not to the Excited Futurists or Sober Skeptics.\n \nFour: The future uses different tools, and can therefore easily do things that are very hard now, or do with difficulty things that are impossible now.\nWhy do we know that AGI is decades away? In popular articles penned by heads of AI research labs and the like, there are typically three prominent reasons given:\n(A) The author does not know how to build AGI using present technology. The author does not know where to start.\n(B) The author thinks it is really very hard to do the impressive things that modern AI technology does, they have to slave long hours over a hot GPU farm tweaking hyperparameters to get it done. They think that the public does not appreciate how hard it is to get anything done right now, and is panicking prematurely because the public thinks anyone can just fire up Tensorflow and build a robotic car.\n(C) The author spends a lot of time interacting with AI systems and therefore is able to personally appreciate all the ways in which they are still stupid and lack common sense.\nWe've now considered some aspects of argument A. Let's consider argument B for a moment.\nSuppose I say: \"It is now possible for one comp-sci grad to do in a week anything that N+ years ago the research community could do with neural networks at all.\" How large is N?\nI got some answers to this on Twitter from people whose credentials I don't know, but the most common answer was five, which sounds about right to me based on my own acquaintance with machine learning. (Though obviously not as a literal universal, because reality is never that neat.) If you could do something in 2012 period, you can probably do it fairly straightforwardly with modern GPUs, Tensorflow, Xavier initialization, batch normalization, ReLUs, and Adam or RMSprop or just stochastic gradient descent with momentum. The modern techniques are just that much better. To be sure, there are things we can't do now with just those simple methods, things that require tons more work, but those things were not possible at all in 2012.\nIn machine learning, when you can do something at all, you are probably at most a few years away from being able to do it easily using the future's much superior tools. From this standpoint, argument B, \"You don't understand how hard it is to do what we do,\" is something of a non-sequitur when it comes to timing.\nStatement B sounds to me like the same sentiment voiced by Rutherford in 1933 when he called net energy from atomic fission \"moonshine\". 
If you were a nuclear physicist in 1933 then you had to split all your atoms by hand, by bombarding them with other particles, and it was a laborious business. If somebody talked about getting net energy from atoms, maybe it made you feel that you were unappreciated, that people thought your job was easy.\nBut of course this will always be the lived experience for AI engineers on serious frontier projects. You don't get paid big bucks to do what a grad student can do in a week (unless you're working for a bureaucracy with no clue about AI; but that's not Google or FB). Your personal experience will always be that what you are paid to spend months doing is difficult. A change in this personal experience is therefore not something you can use as a fire alarm.\nThose playing the part of wiser sober skeptical scientists would obviously agree in the abstract that our tools will improve; but in the popular articles they pen, they just talk about the painstaking difficulty of this year's tools. I think that when they're in that mode they are not even trying to forecast what the tools will be like in 5 years; they haven't written down any such arguments as part of the articles I've read. I think that when they tell you that AGI is decades off, they are literally giving an estimate of how long it feels to them like it would take to build AGI using their current tools and knowledge. Which is why they emphasize how hard it is to stir the heap of linear algebra until it spits out good answers; I think they are not imagining, at all, into how this experience may change over considerably less than fifty years. If they've explicitly considered the bias of estimating future tech timelines based on their present subjective sense of difficulty, and tried to compensate for that bias, they haven't written that reasoning down anywhere I've read it. Nor have I ever heard of that forecasting method giving good results historically.\n \nFive: Okay, let's be blunt here. I don't think most of the discourse about AGI being far away (or that it's near) is being generated by models of future progress in machine learning. I don't think we're looking at wrong models; I think we're looking at no models.\nI was once at a conference where there was a panel full of famous AI luminaries, and most of the luminaries were nodding and agreeing with each other that of course AGI was very far off, except for two famous AI luminaries who stayed quiet and let others take the microphone.\nI got up in Q&A and said, \"Okay, you've all told us that progress won't be all that fast. But let's be more concrete and specific. I'd like to know what's the least impressive accomplishment that you are very confident cannot be done in the next two years.\"\nThere was a silence.\nEventually, two people on the panel ventured replies, spoken in a rather more tentative tone than they'd been using to pronounce that AGI was decades out. They named \"A robot puts away the dishes from a dishwasher without breaking them\", and Winograd schemas. Specifically, \"I feel quite confident that the Winograd schemas—where we recently had a result that was in the 50, 60% range—in the next two years, we will not get 80, 90% on that regardless of the techniques people use.\"\nA few months after that panel, there was unexpectedly a big breakthrough on Winograd schemas. The breakthrough didn't crack 80%, so three cheers for wide credibility intervals with error margin, but I expect the predictor might be feeling slightly more nervous now with one year left to go. 
(I don't think it was the breakthrough I remember reading about, but Rob turned up this paper as an example of one that could have been submitted at most 44 days after the above conference and gets up to 70%.)\nBut that's not the point. The point is the silence that fell after my question, and that eventually I only got two replies, spoken in tentative tones. When I asked for concrete feats that were impossible in the next two years, I think that that's when the luminaries on that panel switched to trying to build a mental model of future progress in machine learning, asking themselves what they could or couldn't predict, what they knew or didn't know. And to their credit, most of them did know their profession well enough to realize that forecasting future boundaries around a rapidly moving field is actually really hard, that nobody knows what will appear on arXiv next month, and that they needed to put wide credibility intervals with very generous upper bounds on how much progress might take place twenty-four months' worth of arXiv papers later.\n(Also, Demis Hassabis was present, so they all knew that if they named something insufficiently impossible, Demis would have DeepMind go and do it.)\nThe question I asked was in a completely different genre from the panel discussion, requiring a mental context switch: the assembled luminaries actually had to try to consult their rough, scarce-formed intuitive models of progress in machine learning and figure out what future experiences, if any, their model of the field definitely prohibited within a two-year time horizon. Instead of, well, emitting socially desirable verbal behavior meant to kill that darned hype about AGI and get some predictable applause from the audience.\nI'll be blunt: I don't think the confident long-termism has been thought out at all. If your model has the extraordinary power to say what will be impossible in ten years after another one hundred and twenty months of arXiv papers, then you ought to be able to say much weaker things that are impossible in two years, and you should have those predictions queued up and ready to go rather than falling into nervous silence after being asked.\nIn reality, the two-year problem is hard and the ten-year problem is laughably hard. The future is hard to predict in general, our predictive grasp on a rapidly changing and advancing field of science and engineering is very weak indeed, and it doesn't permit narrow credible intervals on what can't be done.\nGrace et al. (2017) surveyed the predictions of 352 presenters at ICML and NIPS 2015. Respondents' aggregate forecast was that the proposition \"all occupations are fully automatable\" (in the sense that \"for any occupation, machines could be built to carry out the task better and more cheaply than human workers\") will not reach 50% probability until 121 years hence. 
Except that a randomized subset of respondents were instead asked the slightly different question of \"when unaided machines can accomplish every task better and more cheaply than human workers\", and in this case held that this was 50% likely to occur within 44 years.\nThat's what happens when you ask people to produce an estimate they can't estimate, and there's a social sense of what the desirable verbal behavior is supposed to be.\n \n\n \nWhen I observe that there's no fire alarm for AGI, I'm not saying that there's no possible equivalent of smoke appearing from under a door.\nWhat I'm saying rather is that the smoke under the door is always going to be arguable; it is not going to be a clear and undeniable and absolute sign of fire; and so there is never going to be a fire alarm producing common knowledge that action is now due and socially acceptable.\nThere's an old trope saying that as soon as something is actually done, it ceases to be called AI. People who work in AI and are in a broad sense pro-accelerationist and techno-enthusiast, what you might call the Kurzweilian camp (of which I am not a member), will sometimes rail against this as unfairness in judgment, as moving goalposts.\nThis overlooks a real and important phenomenon of adverse selection against AI accomplishments: If you can do something impressive-sounding with AI in 1974, then that is because that thing turned out to be doable in some cheap cheaty way, not because 1974 was so amazingly great at AI. We are uncertain about how much cognitive effort it takes to perform tasks, and how easy it is to cheat at them, and the first \"impressive\" tasks to be accomplished will be those where we were most wrong about how much effort was required. There was a time when some people thought that a computer winning the world chess championship would require progress in the direction of AGI, and that this would count as a sign that AGI was getting closer. When Deep Blue beat Kasparov in 1997, in a Bayesian sense we did learn something about progress in AI, but we also learned something about chess being easy. Considering the techniques used to construct Deep Blue, most of what we learned was \"It is surprisingly possible to play chess without easy-to-generalize techniques\" and not much \"A surprising amount of progress has been made toward AGI.\"\nWas AlphaGo smoke under the door, a sign of AGI in 10 years or less? People had previously given Go as an example of What You See Before The End.\nLooking over the paper describing AlphaGo's architecture, it seemed to me that we were mostly learning that available AI techniques were likely to go further towards generality than expected, rather than about Go being surprisingly easy to achieve with fairly narrow and ad-hoc approaches. Not that the method scales to AGI, obviously; but AlphaGo did look like a product of relatively general insights and techniques being turned on the special case of Go, in a way that Deep Blue wasn't. 
I also updated significantly on \"The general learning capabilities of the human cortical algorithm are less impressive, less difficult to capture with a ton of gradient descent and a zillion GPUs, than I thought,\" because if there were anywhere we expected an impressive hard-to-match highly-natural-selected but-still-general cortical algorithm to come into play, it would be in humans playing Go.\nMaybe if we'd seen a thousand Earths undergoing similar events, we'd gather the statistics and find that a computer winning the planetary Go championship is a reliable ten-year-harbinger of AGI. But I don't actually know that. Neither do you. Certainly, anyone can publicly argue that we just learned Go was easier to achieve with strictly narrow techniques than expected, as was true many times in the past. There's no possible sign short of actual AGI, no case of smoke from under the door, for which we know that this is definitely serious fire and now AGI is 10, 5, or 2 years away. Let alone a sign where we know everyone else will believe it.\nAnd in any case, multiple leading scientists in machine learning have already published articles telling us their criterion for a fire alarm. They will believe Artificial General Intelligence is imminent:\n(A) When they personally see how to construct AGI using their current tools. This is what they are always saying is not currently true in order to castigate the folly of those who think AGI might be near.\n(B) When their personal jobs do not give them a sense of everything being difficult. This, they are at pains to say, is a key piece of knowledge not possessed by the ignorant layfolk who think AGI might be near, who only believe that because they have never stayed up until 2AM trying to get a generative adversarial network to stabilize.\n(C) When they are very impressed by how smart their AI is relative to a human being in respects that still feel magical to them; as opposed to the parts they do know how to engineer, which no longer seem magical to them; aka the AI seeming pretty smart in interaction and conversation; aka the AI actually being an AGI already.\nSo there isn't going to be a fire alarm. Period.\nThere is never going to be a time before the end when you can look around nervously, and see that it is now clearly common knowledge that you can talk about AGI being imminent, and take action and exit the building in an orderly fashion, without fear of looking stupid or frightened.\n \n\n \nSo far as I can presently estimate, now that we've had AlphaGo and a couple of other maybe/maybe-not shots across the bow, and seen a huge explosion of effort invested into machine learning and an enormous flood of papers, we are probably going to occupy our present epistemic state until very near the end.\nBy saying we're probably going to be in roughly this epistemic state until almost the end, I don't mean to say we know that AGI is imminent, or that there won't be important new breakthroughs in AI in the intervening time. I mean that it's hard to guess how many further insights are needed for AGI, or how long it will take to reach those insights. After the next breakthrough, we still won't know how many more breakthroughs are needed, leaving us in pretty much the same epistemic state as before. Whatever discoveries and milestones come next, it will probably continue to be hard to guess how many further insights are needed, and timelines will continue to be similarly murky. 
Maybe researcher enthusiasm and funding will rise further, and we'll be able to say that timelines are shortening; or maybe we'll hit another AI winter, and we'll know that's a sign indicating that things will take longer than they would otherwise; but we still won't know how long.\nAt some point we might see a sudden flood of arXiv papers in which really interesting and fundamental and scary cognitive challenges seem to be getting done at an increasing pace. Whereupon, as this flood accelerates, even some who imagine themselves sober and skeptical will be unnerved to the point that they venture that perhaps AGI is only 15 years away now, maybe, possibly. The signs might become so blatant, very soon before the end, that people start thinking it is socially acceptable to say that maybe AGI is 10 years off. Though the signs would have to be pretty darned blatant, if they're to overcome the social barrier posed by luminaries who are estimating arrival times to AGI using their personal knowledge and personal difficulties, as well as all the historical bad feelings about AI winters caused by hype.\nBut even if it becomes socially acceptable to say that AGI is 15 years out, in those last couple of years or months, I would still expect there to be disagreement. There will still be others protesting that, as much as associative memory and human-equivalent cerebellar coordination (or whatever) are now solved problems, they still don't know how to construct AGI. They will note that there are no AIs writing computer science papers, or holding a truly sensible conversation with a human, and castigate the senseless alarmism of those who talk as if we already knew how to do that. They will explain that foolish laypeople don't realize how much pain and tweaking it takes to get the current systems to work. (Although those modern methods can easily do almost anything that was possible in 2017, and any grad student knows how to roll a stable GAN on the first try using the tf.unsupervised module in Tensorflow 5.3.1.)\nWhen all the pieces are ready and in place, lacking only the last piece to be assembled by the very peak of knowledge and creativity across the whole world, it will still seem to the average ML person that AGI is an enormous challenge looming in the distance, because they still won't personally know how to construct an AGI system. Prestigious heads of major AI research groups will still be writing articles decrying the folly of fretting about the total destruction of all Earthly life and all future value it could have achieved, and saying that we should not let this distract us from real, respectable concerns like loan-approval systems accidentally absorbing human biases.\nOf course, the future is very hard to predict in detail. It's so hard that not only do I confess my own inability, I make the far stronger positive statement that nobody else can do it either. The \"flood of groundbreaking arXiv papers\" scenario is one way things could maybe possibly go, but it's an implausibly specific scenario that I made up for the sake of concreteness. It's certainly not based on my extensive experience watching other Earthlike civilizations develop AGI. I do put a significant chunk of probability mass on \"There's not much sign visible outside a Manhattan Project until Hiroshima,\" because that scenario is simple. 
Anything more complex is just one more story full of burdensome details that aren't likely to all be true.\nBut no matter how the details play out, I do predict in a very general sense that there will be no fire alarm that is not an actual running AGI—no unmistakable sign before then that everyone knows and agrees on, that lets people act without feeling nervous about whether they're worrying too early. That's just not how the history of technology has usually played out in much simpler cases like flight and nuclear engineering, let alone a case like this one where all the signs and models are disputed. We already know enough about the uncertainty and low quality of discussion surrounding this topic to be able to say with confidence that there will be no unarguable socially accepted sign of AGI arriving 10 years, 5 years, or 2 years beforehand. If there's any general social panic it will be by coincidence, based on terrible reasoning, uncorrelated with real timelines except by total coincidence, set off by a Hollywood movie, and focused on relatively trivial dangers.\nIt's no coincidence that nobody has given any actual account of such a fire alarm, and argued convincingly about how much time it means we have left, and what projects we should only then start. If anyone does write that proposal, the next person to write one will say something completely different. And probably neither of them will succeed at convincing me that they know anything prophetic about timelines, or that they've identified any sensible angle of attack that is (a) worth pursuing at all and (b) not worth starting to work on right now.\n \n\n \nIt seems to me that the decision to delay all action until a nebulous totally unspecified future alarm goes off, implies an order of recklessness great enough that the law of continued failure comes into play.\nThe law of continued failure is the rule that says that if your country is incompetent enough to use a plaintext 9-numeric-digit password on all of your bank accounts and credit applications, your country is not competent enough to correct course after the next disaster in which a hundred million passwords are revealed. A civilization competent enough to correct course in response to that prod, to react to it the way you'd want them to react, is competent enough not to make the mistake in the first place. When a system fails massively and obviously, rather than subtly and at the very edges of competence, the next prod is not going to cause the system to suddenly snap into doing things intelligently.\nThe law of continued failure is especially important to keep in mind when you are dealing with big powerful systems or high-status people that you might feel nervous about derogating, because you may be tempted to say, \"Well, it's flawed now, but as soon as a future prod comes along, everything will snap into place and everything will be all right.\" The systems about which this fond hope is actually warranted look like they are mostly doing all the important things right already, and only failing in one or two steps of cognition. 
The fond hope is almost never warranted when a person or organization or government or social subsystem is currently falling massively short.\nThe folly required to ignore the prospect of aliens landing in thirty years is already great enough that the other flawed elements of the debate should come as no surprise.\nAnd with all of that going wrong simultaneously today, we should predict that the same system and incentives won't produce correct outputs after receiving an uncertain sign that maybe the aliens are landing in five years instead. The law of continued failure suggests that if existing authorities failed in enough different ways at once to think that it makes sense to try to derail a conversation about existential risk by saying the real problem is the security on self-driving cars, the default expectation is that they will still be saying silly things later.\nPeople who make large numbers of simultaneous mistakes don't generally have all of the incorrect thoughts subconsciously labeled as \"incorrect\" in their heads. Even when motivated, they can't suddenly flip to skillfully executing all-correct reasoning steps instead. Yes, we have various experiments showing that monetary incentives can reduce overconfidence and political bias, but (a) that's reduction rather than elimination, (b) it's with extremely clear short-term direct incentives, not the nebulous and politicizable incentive of \"a lot being at stake\", and (c) that doesn't mean a switch is flipping all the way to \"carry out complicated correct reasoning\". If someone's brain contains a switch that can flip to enable complicated correct reasoning at all, it's got enough internal precision and skill to think mostly-correct thoughts now instead of later—at least to the degree that some conservatism and double-checking gets built into examining the conclusions that people know will get them killed if they're wrong about them.\nThere is no sign and portent, no threshold crossed, that suddenly causes people to wake up and start doing things systematically correctly. People who can react that competently to any sign at all, let alone a less-than-perfectly-certain not-totally-agreed item of evidence that is likely a wakeup call, have probably already done the timebinding thing. They've already imagined the future sign coming, and gone ahead and thought sensible thoughts earlier, like Stuart Russell saying, \"If you know the aliens are landing in thirty years, it's still a big deal now.\"\n \n\n \nBack in the funding-starved early days of what is now MIRI, I learned that people who donated last year were likely to donate this year, and people who last year were planning to donate \"next year\" would quite often this year be planning to donate \"next year\". Of course there were genuine transitions from zero to one; everything that happens needs to happen for a first time. There were college students who said \"later\" and gave nothing for a long time in a genuinely strategically wise way, and went on to get nice jobs and start donating. But I also learned well that, like many cheap and easy solaces, saying the word \"later\" is addictive; and that this luxury is available to the rich as well as the poor.\nI don't expect it to be any different with AGI alignment work. People who are trying to get what grasp they can on the alignment problem will, in the next year, be doing a little (or a lot) better with whatever they grasped in the previous year (plus, yes, any general-field advances that have taken place in the meantime). 
People who want to defer that until after there's a better understanding of AI and AGI will, after the next year's worth of advancements in AI and AGI, want to defer work until a better future understanding of AI and AGI.\nSome people really want alignment to get done and are therefore now trying to wrack their brains about how to get something like a reinforcement learner to reliably identify a utility function over particular elements in a model of the causal environment instead of a sensory reward term or defeat the seeming tautologicalness of updated (non-)deference. Others would rather be working on other things, and will therefore declare that there is no work that can possibly be done today, not spending two hours quietly thinking about it first before making that declaration. And this will not change tomorrow, unless perhaps tomorrow is when we wake up to some interesting newspaper headlines, and probably not even then. The luxury of saying \"later\" is not available only to the truly poor-in-available-options.\nAfter a while, I started telling effective altruists in college: \"If you're planning to earn-to-give later, then for now, give around $5 every three months. And never give exactly the same amount twice in a row, or give to the same organization twice in a row, so that you practice the mental habit of re-evaluating causes and re-evaluating your donation amounts on a regular basis. Don't learn the mental habit of just always saying 'later'.\"\nSimilarly, if somebody was actually going to work on AGI alignment \"later\", I'd tell them to, every six months, spend a couple of hours coming up with the best current scheme they can devise for aligning AGI and doing useful work on that scheme. Assuming, if they must, that AGI were somehow done with technology resembling current technology. And publishing their best-current-scheme-that-isn't-good-enough, at least in the sense of posting it to Facebook; so that they will have a sense of embarrassment about naming a scheme that does not look like somebody actually spent two hours trying to think of the best bad approach.\nThere are things we'll better understand about AI in the future, and things we'll learn that might give us more confidence that particular research approaches will be relevant to AGI. There may be more future sociological developments akin to Nick Bostrom publishing Superintelligence, Elon Musk tweeting about it and thereby heaving a rock through the Overton Window, or more respectable luminaries like Stuart Russell openly coming on board. The future will hold more AlphaGo-like events to publicly and privately highlight new ground-level advances in ML technique; and it may somehow be that this does not leave us in the same epistemic state as having already seen AlphaGo and GANs and the like. It could happen! I can't see exactly how, but the future does have the capacity to pull surprises in that regard.\nBut before waiting on that surprise, you should ask whether your uncertainty about AGI timelines is really uncertainty at all. If it feels to you that guessing AGI might have a 50% probability in N years is not enough knowledge to act upon, if that feels scarily uncertain and you want to wait for more evidence before making any decisions… then ask yourself how you'd feel if you believed the probability was 50% in N years, and everyone else on Earth also believed it was 50% in N years, and everyone believed it was right and proper to carry out policy P when AGI has a 50% probability of arriving in N years. 
If that visualization feels very different, then any nervous \"uncertainty\" you feel about doing P is not really about whether AGI takes much longer than N years to arrive.\nAnd you are almost surely going to be stuck with that feeling of \"uncertainty\" no matter how close AGI gets; because no matter how close AGI gets, whatever signs appear will almost surely not produce common, shared, agreed-on public knowledge that AGI has a 50% chance of arriving in N years, nor any agreement that it is therefore right and proper to react by doing P.\nAnd if all that did become common knowledge, then P is unlikely to still be a neglected intervention, or AI alignment a neglected issue; so you will have waited until sadly late to help.\nBut far more likely is that the common knowledge just isn't going to be there, and so it will always feel nervously \"uncertain\" to consider acting.\nYou can either act despite that, or not act. Not act until it's too late to help much, in the best case; not act at all until after it's essentially over, in the average case.\nI don't think it's wise to wait on an unspecified epistemic miracle to change how we feel. In all probability, you're going to be in this mental state for a while—including any nervous-feeling \"uncertainty\". If you handle this mental state by saying \"later\", that general policy is not likely to have good results for Earth.\n \n\n \nFurther resources:\n\nMIRI's research guide and research forum\nFLI's collection of introductory resources\nCHAI's alignment bibliography at http://humancompatible.ai/bibliography\n80,000 Hours' AI job postings on https://80000hours.org/job-board/\nThe Open Philanthropy Project's AI fellowship and general call for research proposals\nMy brain-dumps on AI alignment\nIf you're arriving here for the first time, my long-standing work on rationality, and CFAR's workshops\nAnd some general tips from Ray Arnold for effective altruists considering AI alignment as a cause area.\n\n \n\nThe post There's No Fire Alarm for Artificial General Intelligence appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "There’s No Fire Alarm for Artificial General Intelligence", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=20", "id": "230e62a8fd912ddd03d8a548567217dd"} {"text": "September 2017 Newsletter\n\nResearch updates\n\n\"Incorrigibility in the CIRL Framework\": a new paper by MIRI assistant researcher Ryan Carey responds to Hadfield-Menell et al.'s \"The Off-Switch Game\".\nNew at IAFF: The Three Levels of Goodhart's Curse; Conditioning on Conditionals; Stable Pointers to Value: An Agent Embedded in Its Own Utility Function; Density Zero Exploration; Autopoietic Systems and the Difficulty of AGI Alignment\nRyan Carey is leaving MIRI to collaborate with the Future of Humanity Institute's Owain Evans on AI safety work.\n\nGeneral updates\n\nAs part of his engineering internship at MIRI, Max Harms assisted in the construction and extension of RL-Teacher, an open-source tool for training AI systems with human feedback based on the \"Deep RL from Human Preferences\" OpenAI / DeepMind research collaboration. See OpenAI's announcement.\nMIRI COO Malo Bourgon participated in panel discussions on getting things done (video) and working in AI (video) at the Effective Altruism Global conference in San Francisco. AI Impacts researcher Katja Grace also spoke on AI safety (video). 
Other EAG talks on AI included Daniel Dewey's (video) and Owen Cotton-Barratt's (video), and a larger panel discussion (video).\nAnnouncing two winners of the Intelligence in Literature prize: Laurence Raphael Brothers' \"Houseproud\" and Shane Halbach's \"Human in the Loop\".\nRAISE, a project to develop online AI alignment course material, is seeking volunteers. \n\nNews and links\n\nThe Open Philanthropy Project is accepting applicants to an AI Fellows Program \"to fully support a small group of the most promising PhD students in artificial intelligence and machine learning\". See also Open Phil's partial list of key research topics in AI alignment.\nCall for papers: AAAI and ACM are running a new Conference on AI, Ethics, and Society, with submissions due by the end of October.\nDeepMind's Viktoriya Krakovna argues for a portfolio approach to AI safety research.\n\"Teaching AI Systems to Behave Themselves\": a solid article from the New York Times on the growing field of AI safety research. The Times also has an opening for an investigative reporter in AI.\nUC Berkeley's Center for Long-term Cybersecurity is hiring for several roles, including researcher, assistant to the director, and program manager.\nLife 3.0: Max Tegmark releases a new book on the future of AI (podcast discussion).\n\nThe post September 2017 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "September 2017 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=20", "id": "f5a780f0fcd88d5afef889a32c98c401"} {"text": "New paper: \"Incorrigibility in the CIRL Framework\"\n\n\nMIRI assistant research fellow Ryan Carey has a new paper out discussing situations where good performance in Cooperative Inverse Reinforcement Learning (CIRL) tasks fails to imply that software agents will assist or cooperate with programmers.\nThe paper, titled \"Incorrigibility in the CIRL Framework,\" lays out four scenarios in which CIRL violates the four conditions for corrigibility defined in Soares et al. (2015). Abstract:\nA value learning system has incentives to follow shutdown instructions, assuming the shutdown instruction provides information (in the technical sense) about which actions lead to valuable outcomes. However, this assumption is not robust to model mis-specification (e.g., in the case of programmer errors). We demonstrate this by presenting some Supervised POMDP scenarios in which errors in the parameterized reward function remove the incentive to follow shutdown commands. These difficulties parallel those discussed by Soares et al. (2015) in their paper on corrigibility.\nWe argue that it is important to consider systems that follow shutdown commands under some weaker set of assumptions (e.g., that one small verified module is correctly implemented; as opposed to an entire prior probability distribution and/or parameterized reward function). We discuss some difficulties with simple ways to attempt to attain these sorts of guarantees in a value learning framework.\nThe paper is a response to a paper by Hadfield-Menell, Dragan, Abbeel, and Russell, \"The Off-Switch Game.\" Hadfield-Menell et al. show that an AI system will be more responsive to human inputs when it is uncertain about its reward function and thinks that its human operator has more information about this reward function. 
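To make that result concrete, here is a toy numeric sketch, my own illustration rather than code from either paper. It assumes a robot that is uncertain about the utility u of its planned action (notation mine) and a human overseer who is fully rational and fully informed, and who therefore hits the off switch exactly when u is negative; under those idealized assumptions, deferring to the human weakly dominates both acting unilaterally and shutting down, because deferring earns max(u, 0) in every possible world. The discussion that follows turns on what happens when such idealized assumptions fail.

def off_switch_values(belief):
    # belief: list of (utility, probability) pairs encoding the robot's uncertainty about u.
    act_now = sum(u * p for u, p in belief)            # act without consulting the human
    shut_down = 0.0                                     # switch itself off
    # Defer: the idealized human allows the action only when u >= 0,
    # so deferring earns max(u, 0) in each possible world.
    defer = sum(max(u, 0.0) * p for u, p in belief)
    return act_now, shut_down, defer

# The robot thinks its plan is probably good but might be seriously harmful.
belief = [(+10.0, 0.75), (-40.0, 0.25)]
print(off_switch_values(belief))  # (-2.5, 0.0, 7.5): deferring beats both alternatives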
Carey shows that the CIRL framework can be used to formalize the problem of corrigibility, and that the known assurances for CIRL systems, given in \"The Off-Switch Game\", rely on strong assumptions about having an error-free CIRL system. With less idealized assumptions, a value learning agent may have beliefs that cause it to evade redirection from the human.\n[T]he purpose of a shutdown button is to shut the AI system down in the event that all other assurances failed, e.g., in the event that the AI system is ignoring (for one reason or another) the instructions of the operators. If the designers of [the AI system] R have programmed the system so perfectly that the prior and [reward function] R are completely free of bugs, then the theorems of Hadfield-Menell et al. (2017) do apply. In practice, this means that in order to be corrigible, it would be necessary to have an AI system that was uncertain about all things that could possibly matter. The problem is that performing Bayesian reasoning over all possible worlds and all possible value functions is quite intractable. Realistically, humans will likely have to use a large number of heuristics and approximations in order to implement the system's belief system and updating rules. […]\nSoares et al. (2015) seem to want a shutdown button that works as a mechanism of last resort, to shut an AI system down in cases where it has observed and refused a programmer suggestion (and the programmers believe that the system is malfunctioning). Clearly, some part of the system must be working correctly in order for us to expect the shutdown button to work at all. However, it seems undesirable for the working of the button to depend on there being zero critical errors in the specification of the system's prior, the specification of the reward function, the way it categorizes different types of actions, and so on. Instead, it is desirable to develop a shutdown module that is small and simple, with code that could ideally be rigorously verified, and which ideally works to shut the system down even in the event of large programmer errors in the specification of the rest of the system.\nIn order to do this in a value learning framework, we require a value learning system that (i) is capable of having its actions overridden by a small verified module that watches for shutdown commands; (ii) has no incentive to remove, damage, or ignore the shutdown module; and (iii) has some small incentive to keep its shutdown module around; even under a broad range of cases where R, the prior, the set of available actions, etc. are misspecified.\nEven if the utility function is learned, there is still a need for additional lines of defense against unintended failures. The hope is that this can be achieved by modularizing the AI system. 
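Requirement (i) above is the easy part to write down. Here is a deliberately naive sketch in Python (the class and method names are hypothetical, and this is my illustration rather than anything from the paper) of an agent whose actions pass through a small, separately auditable override module; it says nothing about requirements (ii) and (iii), which are where the difficulty actually lies:

class ShutdownModule:
    """Small enough to audit line by line."""
    def __init__(self):
        self.triggered = False

    def command_shutdown(self):
        self.triggered = True

class OverridableAgent:
    def __init__(self, policy, shutdown_module):
        self.policy = policy              # arbitrarily complex learned component
        self.shutdown = shutdown_module   # small, verifiable component

    def act(self, observation):
        if self.shutdown.triggered:       # override path bypasses the policy entirely
            return "no-op"
        return self.policy(observation)

agent = OverridableAgent(policy=lambda obs: "fill cauldron", shutdown_module=ShutdownModule())
agent.shutdown.command_shutdown()
assert agent.act(observation=None) == "no-op"

Nothing in this sketch gives the learned policy any incentive to leave the module alone or to route its behavior through it, which is the part the paper argues still needs to be solved.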
For that purpose, we would need a model of an agent that will behave corrigibly in a way that is robust to misspecification of other system components.\n \nThe post New paper: \"Incorrigibility in the CIRL Framework\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New paper: “Incorrigibility in the CIRL Framework”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=21", "id": "c4a75da75edb2f80f6c7d7e3c9e6fd85"} {"text": "August 2017 Newsletter\n\nResearch updates\n\n\"A Formal Approach to the Problem of Logical Non-Omniscience\": We presented our work on logical induction at the 16th Conference on Theoretical Aspects of Rationality and Knowledge.\nNew at IAFF: Smoking Lesion Steelman; \"Like This World, But…\"; Jessica Taylor's Current Thoughts on Paul Christiano's Research Agenda; Open Problems Regarding Counterfactuals: An Introduction For Beginners\n\"A Game-Theoretic Analysis of The Off-Switch Game\": researchers from Australian National University and Linköping University release a new paper on corrigibility, spun off from a MIRIx workshop.\n\nGeneral updates\n\nDaniel Dewey of the Open Philanthropy Project writes up his current thoughts on MIRI's highly reliable agent design work, with discussion from Nate Soares and others in the comments section.\nSarah Marquart of the Future of Life Institute discusses MIRI's work on logical inductors, corrigibility, and other topics.\nWe attended the Workshop on Decision Theory & the Future of Artificial Intelligence and the 5th International Workshop on Strategic Reasoning.\n\n\nNews and links\n\nOpen Phil awards a four-year $2.4 million grant to Yoshua Bengio's group at the Montreal Institute for Learning Algorithms \"to support technical research on potential risks from advanced artificial intelligence\".\nA new IARPA-commissioned report discusses the potential for AI to accelerate technological innovation and lead to \"a self-reinforcing technological and economic edge\". 
The report suggests that AI \"has the potential to be a worst-case scenario\" in combining high destructive potential, military/civil dual use, and difficulty of monitoring with potentially low production difficulty.\nElon Musk and Mark Zuckerberg criticize each other's statements on AI risk.\nChina makes plans for major investments in AI (full text, translation note).\nMicrosoft opens a new AI lab with a goal of building \"more general artificial intelligence\".\nNew from the Future of Humanity Institute: \"Trial without Error: Towards Safe Reinforcement Learning via Human Intervention.\"\nFHI is seeking two research fellows to study AI macrostrategy.\nDaniel Selsam and others release certigrad (arXiv, github), a system for creating formally verified machine learning systems; see discussion on Hacker News (1, 2).\nApplications are open for the Center for Applied Rationality's AI Summer Fellows Program, which runs September 8–25.\n\n\n\n\n\n\nThe post August 2017 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "August 2017 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=21", "id": "84e6b60b593ede8e33f6fc0eb65b3547"} {"text": "July 2017 Newsletter\n\n\n\n\n\nA number of major mid-year MIRI updates: we received our largest donation to date, $1.01 million from an Ethereum investor! Our research priorities have also shifted somewhat, reflecting the addition of four new full-time researchers (Marcello Herreshoff, Sam Eisenstat, Tsvi Benson-Tilsen, and Abram Demski) and the departure of Patrick LaVictoire and Jessica Taylor.\nResearch updates\n\nNew at IAFF: Futarchy Fix, Cooperative Oracles: Stratified Pareto Optima and Almost Stratified Pareto Optima\nNew at AI Impacts: Some Survey Results!, AI Hopes and Fears in Numbers\n\nGeneral updates\n\nWe attended the Effective Altruism Global Boston event. Speakers included Allan Dafoe on \"The AI Revolution and International Politics\" (video) and Jason Matheny on \"Effective Altruism in Government\" (video).\nMIRI COO Malo Bourgon moderated an IEEE workshop revising a section from Ethically Aligned Design. \n\n\nNews and links\n\nNew from DeepMind researchers: \"Interpreting Deep Neural Networks Using Cognitive Psychology\"\nNew from OpenAI researchers: \"Corrigibility\"\nA collaboration between DeepMind and OpenAI: \"Learning from Human Preferences\"\nRecent progress in deep learning: \"Self-Normalizing Neural Networks\"\nFrom Ian Goodfellow and Nicolas Papernot: \"The Challenge of Verification and Testing of Machine Learning\"\nFrom 80,000 Hours: a guide to working in AI policy and strategy and a related interview with Miles Brundage of the Future of Humanity Institute.\n\n\n\n\n\nThe post July 2017 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "July 2017 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=21", "id": "65713555eaa5095bb416e45d0fdb70f6"} {"text": "Updates to the research team, and a major donation\n\nWe have several major announcements to make, covering new developments in the two months since our 2017 strategy update:\n1. On May 30th, we received a surprise $1.01 million donation from an Ethereum cryptocurrency investor. 
This is the single largest contribution we have received to date by a large margin, and will have a substantial effect on our plans over the coming year.\n2. Two new full-time researchers are joining MIRI: Tsvi Benson-Tilsen and Abram Demski. This comes in the wake of Sam Eisenstat and Marcello Herreshoff's addition to the team in May. We've also begun working with engineers on a trial basis for our new slate of software engineer job openings.\n3. Two of our researchers have recently left: Patrick LaVictoire and Jessica Taylor, researchers previously heading work on our \"Alignment for Advanced Machine Learning Systems\" research agenda.\nFor more details, see below.\n\n\n1. Fundraising\nThe major donation we received at the end of May, totaling $1,006,549, comes from a long-time supporter who had donated roughly $50k to our research programs over many years. This supporter has asked to continue to remain anonymous.\nThe first half of this year has been the most successful in MIRI's fundraising history, with other notable contributions including Ethereum donations from investor Eric Rogstad totalling ~$22k, and a ~$67k donation from Octane AI co-founder Leif K-Brooks as part of a Facebook Reactions challenge. In total we've raised about $1.45M in the first half of 2017.\nWe're thrilled and extremely grateful for this show of support. This fundraising success has increased our runway to around 18–20 months, giving us more leeway to trial potential hires and focus on our research and outreach priorities this year.\nConcretely, we have already made several plan adjustments as a consequence, including:\n\nmoving forward with more confidence on full-time researcher hires,\ntrialing more software engineers, and\ndeciding to run only one fundraiser this year, in the winter.1\n\nThis likely is a one-time outlier donation, similar to the $631k in cryptocurrency donations we received from Ripple developer Jed McCaleb in 2013–2014.2 Looking forward at our funding goals over the next two years:\n\nWhile we still have some uncertainty about our 2018 budget, our current point estimate is roughly $2.5M.\nThis year, between support from the Open Philanthropy Project, the Future of Life Institute, and other sources, we expect to receive at least an additional $600k without spending significant time on fundraising.\nOur tentative (ambitious) goal for the rest of the year is to raise an additional $950k, or $3M in total. This would be sufficient for our 2018 budget even if we expand our engineering team more quickly than expected, and would give us a bit of a buffer to account for uncertainty in our future fundraising (in particular, uncertainty about whether the Open Philanthropy Project will continue support after 2017).\n\nOn a five-year timescale, our broad funding goals are:3\n\nOn the low end, once we finish growing our team over the course of a few years, our default expectation is that our operational costs will be roughly $4M per year, mostly supporting researcher and engineer salaries. Our goal is therefore to reach that level in a sustainable, stable way.\nOn the high end, it's possible to imagine scenarios involving an order-of-magnitude increase in our funding, in which case we would develop a qualitatively different set of funding goals reflecting the fact that we would most likely substantially restructure MIRI.\nFor funding levels in between—roughly $4M–$10M per year—it is likely that we would not expand our current operations further. 
Instead, we might fund work outside of our current research after considering how well-positioned we appear to be to identify and fund various projects, including MIRI-external projects. While we consider it reasonably likely that we are in a good position for this, we would instead recommend that donors direct additional donations elsewhere if we ended up concluding that our donors (or other organizations) are in a better position than we are to respond to surprise funding opportunities in the AI alignment space.4\n\nA new major once-off donation at the $1M level like this one covers nearly half of our current annual budget, which makes a substantial difference to our one- and two-year plans. Our five-year plans are largely based on assumptions about multiple-year funding flows, so how aggressively we decide to plan our growth in response to this new donation depends largely on whether we can sustainably raise funds at the level of the above goal in future years (e.g., it depends on whether and how other donors change their level of support in response).\nTo reduce the uncertainty going into our expansion decisions, we're encouraging more of our regular donors to sign up for monthly donations or other recurring giving schedules—under 10% of our income currently comes from such donations, which limits our planning capabilities.5 We also encourage supporters to reach out to us about their future donation plans, so that we can answer questions and make more concrete and ambitious plans.\n\n2. New hires\nMeanwhile, two new full-time researchers are joining our team after having previously worked with us as associates while based at other institutions.\n \nAbram Demski, who is joining MIRI as a research fellow this month, is completing a PhD in Computer Science at the University of Southern California. His research to date has focused on cognitive architectures and artificial general intelligence. He is interested in filling in the gaps that exist in formal theories of rationality, especially those concerned with what humans are doing when reasoning about mathematics.\nAbram made key contributions to the MIRIxLosAngeles work that produced precursor results to logical induction. His other past work with MIRI includes \"Generalizing Foundations of Decision Theory\" and \"Computable Probability Distributions Which Converge on Believing True Π1 Sentences Will Disbelieve True Π2 Sentences.\"\n \nTsvi Benson-Tilsen has joined MIRI as an assistant research fellow. Tsvi holds a BSc in Mathematics with honors from the University of Chicago, and is on leave from the UC Berkeley Group in Logic and the Methodology of Science PhD program.\nPrior to joining MIRI's research staff, Tsvi was a co-author on \"Logical Induction\" and \"Formalizing Convergent Instrumental Goals,\" and also authored \"Updateless Decision Theory With Known Search Order\" and \"Existence of Distributions That Are Expectation-Reflective and Know It.\" Tsvi's research interests include logical uncertainty, logical counterfactuals, and reflectively stable decision-making.\n \nWe've also accepted our first six software engineers for 3-month visits. 
We are continuing to review applicants, and in light of the generous support we recently received and the strong pool of applicants so far, we are likely to trial more candidates than we'd planned previously.\nIn other news, going forward Scott Garrabrant will be acting as the research lead for MIRI's agent foundations research, handling more of the day-to-day work of coordinating and directing research team efforts.\n\n3. The AAMLS agenda\nOur AAMLS research was previously the focus of Jessica Taylor, Patrick LaVictoire, and Andrew Critch, all of whom joined MIRI in mid-2015. With Patrick and Jessica departing (on good terms) and Andrew on a two-year leave to work with the Center for Human-Compatible AI, we will be putting relatively little work into the AAMLS agenda over the coming year.\nWe continue to see the problems described in the AAMLS agenda as highly important, and expect to reallocate more attention to these problems in the future. Additionally, we see the AAMLS agenda as a good template for identifying safety desiderata and promising alignment problems. However, we did not see enough progress on AAMLS problems over the last year to conclude that we should currently prioritize this line of research over our other work (e.g., our agent foundations research on problems such as logical uncertainty and counterfactual reasoning). As a partial consequence, MIRI's current research staff do not plan to make AAMLS research a high priority in the near future.\nJessica, the project lead, describes some of her takeaways from working on AAMLS:\n[…] Why was little progress made?\n[1.] Difficulty\nI think the main reason is that the problems were very difficult. In particular, they were mostly selected on the basis of \"this seems important and seems plausibly solveable\", rather than any strong intuition that it's possible to make progress.\nIn comparison, problems in the agent foundations agenda have seen more progress:\n\nLogical uncertainty (Definability of truth, reflective oracles, logical inductors)\nDecision theory (Modal UDT, reflective oracles, logical inductors)\nVingean reflection (Model polymorphism, logical inductors)\n\nOne thing to note about these problems is that they were formulated on the basis of a strong intuition that they ought to be solveable. Before logical induction, it was possible to have the intuition that some sort of asymptotic approach could solve many logical uncertainty problems in the limit. It was also possible to strongly think that some sort of self-trust is possible.\nWith problems in the AAMLS agenda, the plausibility argument was something like:\n\nHere's an existing, flawed approach to the problem (e.g. using a reinforcement signal for environmental goals, or modifications of this approach)\nHere's a vague intuition about why it's possible to do better (e.g. humans do a different thing)\n\nwhich, empirically, turned out not to make for tractable research problems.\n[2.] Going for the throat\nIn an important sense, the AAMLS agenda is \"going for the throat\" in a way that other agendas (e.g. the agent foundations agenda) are to a lesser extent: it is attempting to solve the whole alignment problem (including goal specification) given access to resources such as powerful reinforcement learning. Thus, the difficulties of the whole alignment problem (e.g. specification of environmental goals) are more exposed in the problems.\n[3.] Theory vs. empiricism\nPersonally, I strongly lean towards preferring theoretical rather than empirical approaches. 
I don't know how much I endorse this bias overall for the set of people working on AI safety as a whole, but it is definitely a personal bias of mine.\nProblems in the AAMLS agenda turned out not to be very amenable to purely-theoretical investigation. This is probably due to the fact that there is not a clear mathematical aesthetic for determining what counts as a solution (e.g. for the environmental goals problem, it's not actually clear that there's a recognizable mathematical statement for what the problem is).\nWith the agent foundations agenda, there's a clearer aesthetic for recognizing good solutions. Most of the problems in the AAMLS agenda have a less-clear aesthetic. […]\nFor more details, see Jessica's retrospective on the Intelligent Agent Foundations Forum.\nMore work would need to go into AAMLS before we reached confident conclusions about the tractability of these problems. However, the lack of initial progress provides some evidence that new tools or perspectives may be needed before significant progress is possible. Over the coming year, we will therefore continue to spend some time thinking about AAMLS, but will not make it a major focus.\nWe continue to actively collaborate with Andrew on MIRI research, and expect to work with Patrick and Jessica more in the future as well. Jessica and Andrew in particular intend to continue to focus on AI safety research, including work on AI strategy and coordination.\nWe're grateful for everything Jessica and Patrick have done to advance our research program and our organizational mission over the past two years, and I'll personally miss having both of them around.\n \nIn general, I'm feeling really good about MIRI's position right now. From our increased financial security and ability to more ambitiously pursue our plans, to the new composition and focus of the research team, the new engineers who are spending time with us, and the growth of the research that they'll support, things are moving forward quickly and with purpose. Thanks to everyone who has contributed, is contributing, and will contribute in the future to help us do the work here at MIRI.\n \n1. More generally, this will allow us to move forward confidently with the different research programs we consider high-priority, without needing to divert as many resources from other projects to support our top priorities. This should also allow us to make faster progress on the targeted outreach writing we mentioned in our 2017 update, since we won't have to spend staff time on writing and outreach for a summer fundraiser.\n2. Of course, we'd be happy if these large donations looked less like outliers in the long run. If readers are looking for something to do with digital currency they might be holding onto after the recent surges, know that we gratefully accept donations of many digital currencies! In total, MIRI has raised around $1.85M in cryptocurrency donations since mid-2013.\n3. These plans are subject to substantial change. 
In particular, an important source of variance in our plans is how our new non-public-facing research progresses, where we're likely to take on more ambitious growth goals if our new work looks like it's going well.\n4. We would also likely increase our reserves in this scenario, allowing us to better adapt to unexpected circumstances, and there is a smaller probability that we would use these funds to grow moderately more than currently planned without a significant change in strategy.\n5. Less frequent (e.g., quarterly) donations are also quite helpful from our perspective, if we know about them in advance and so can plan around them. In the case of donors who plan to give at least once per year, predictability is much more important from our perspective than frequency.\nThe post Updates to the research team, and a major donation appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Updates to the research team, and a major donation", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=21", "id": "4044816e4f397d5d5beae5a667749627"} {"text": "June 2017 Newsletter\n\n\n\n\n\n\nResearch updates\n\nA new AI Impacts paper: \"When Will AI Exceed Human Performance?\" News coverage at Digital Trends and MIT Technology Review.\nNew at IAFF: Cooperative Oracles; Jessica Taylor on the AAMLS Agenda; An Approach to Logically Updateless Decisions\nOur 2014 technical agenda, \"Agent Foundations for Aligning Machine Intelligence with Human Interests,\" is now available as a book chapter in the anthology The Technological Singularity: Managing the Journey.\n\nGeneral updates\n\nreadthesequences.com: supporters have put together a web version of Eliezer Yudkowsky's Rationality: From AI to Zombies.\nThe Oxford Prioritisation Project publishes a model of MIRI's work as an existential risk intervention.\n\n\nNews and links\n\nFrom MIT Technology Review: \"Why Google's CEO Is Excited About Automating Artificial Intelligence.\"\nA new alignment paper from researchers at Australian National University and DeepMind: \"Reinforcement Learning with a Corrupted Reward Channel.\"\nNew from OpenAI: Baselines, a tool for reproducing reinforcement learning algorithms.\nThe Future of Humanity Institute and Centre for the Future of Intelligence join the Partnership on AI alongside twenty other groups.\nNew AI safety job postings include research roles at the Future of Humanity Institute and the Center for Human-Compatible AI, as well as a UCLA PULSE fellowship for studying AI's potential large-scale consequences and appropriate preparations and responses.  
\n\n\n\n\nThe post June 2017 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "June 2017 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=21", "id": "efc69c6f79052fb40caca07da1ef582e"} {"text": "May 2017 Newsletter\n\n\n\n\n\n\n\nResearch updates\n\nNew at IAFF: The Ubiquitous Converse Lawvere Problem; Two Major Obstacles for Logical Inductor Decision Theory; Generalizing Foundations of Decision Theory II.\nNew at AI Impacts: Guide to Pages on AI Timeline Predictions\n\"Decisions Are For Making Bad Outcomes Inconsistent\": Nate Soares dialogues on some of the deeper issues raised by our \"Cheating Death in Damascus\" paper.\nWe ran a machine learning workshop in early April.\n\"Ensuring Smarter-Than-Human Intelligence Has a Positive Outcome\": Nate's talk at Google (video) provides probably the best general introduction to MIRI's work on AI alignment.\n\nGeneral updates\n\nOur strategy update discusses changes to our AI forecasts and research priorities, new outreach goals, a MIRI/DeepMind collaboration, and other news.\nMIRI is hiring software engineers! If you're a programmer who's passionate about MIRI's mission and wants to directly support our research efforts, apply here to trial with us.\nMIRI Assistant Research Fellow Ryan Carey has taken on an additional affiliation with the Centre for the Study of Existential Risk, and is also helping edit an issue of Informatica on superintelligence.\n\n\nNews and links\n\nDeepMind researcher Viktoriya Krakovna lists security highlights from ICLR.\nDeepMind is seeking applicants for a policy research position \"to carry out research on the social and economic impacts of AI\".\nThe Center for Human-Compatible AI is hiring an assistant director. Interested parties may also wish to apply for the event coordinator position at the new Berkeley Existential Risk Initiative, which will help support work at CHAI and elsewhere.\n80,000 Hours lists other potentially high-impact openings, including ones at Stanford's AI Index project, the White House OSTP, IARPA, and IVADO.\nNew papers: \"One-Shot Imitation Learning\" and \"Stochastic Gradient Descent as Approximate Bayesian Inference.\"\nThe Open Philanthropy Project summarizes its findings on early field growth.\nThe Centre for Effective Altruism is collecting donations for the Effective Altruism Funds in a range of cause areas.\n\n\n\n\n \nThe post May 2017 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "May 2017 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=21", "id": "c9b6e35a99b3e40c7f394ad280dd660e"} {"text": "2017 Updates and Strategy\n\nIn our last strategy update (August 2016), Nate wrote that MIRI's priorities were to make progress on our agent foundations agenda and begin work on our new \"Alignment for Advanced Machine Learning Systems\" agenda, to collaborate and communicate with other researchers, and to grow our research and ops teams.\nSince then, senior staff at MIRI have reassessed their views on how far off artificial general intelligence (AGI) is and concluded that shorter timelines are more likely than they were previously thinking. A few lines of recent evidence point in this direction, such as:1\n\nAI research is becoming more visibly exciting and well-funded. 
This suggests that more top talent (in the next generation as well as the current generation) will probably turn their attention to AI.\nAGI is attracting more scholarly attention as an idea, and is the stated goal of top AI groups like DeepMind, OpenAI, and FAIR. In particular, many researchers seem more open to thinking about general intelligence now than they did a few years ago.\nResearch groups associated with AGI are showing much clearer external signs of profitability.\nAI successes like AlphaGo indicate that it's easier to outperform top humans in domains like Go (without any new conceptual breakthroughs) than might have been expected.2 This lowers our estimate for the number of significant conceptual breakthroughs needed to rival humans in other domains.\n\nThere's no consensus among MIRI researchers on how long timelines are, and our aggregated estimate puts medium-to-high probability on scenarios in which the research community hasn't developed AGI by, e.g., 2035. On average, however, research staff now assign moderately higher probability to AGI's being developed before 2035 than we did a year or two ago. This has a few implications for our strategy:\n1. Our relationships with current key players in AGI safety and capabilities play a larger role in our strategic thinking. Short-timeline scenarios reduce the expected number of important new players who will enter the space before we hit AGI, and increase how much influence current players are likely to have.\n2. Our research priorities are somewhat different, since shorter timelines change what research paths are likely to pay out before we hit AGI, and also concentrate our probability mass more on scenarios where AGI shares various features in common with present-day machine learning systems.\nBoth updates represent directions we've already been trending in for various reasons.3 However, we're moving in these two directions more quickly and confidently than we were last year. As an example, Nate is spending less time on staff management and other administrative duties than in the past (having handed these off to MIRI COO Malo Bourgon) and less time on broad communications work (having delegated a fair amount of this to me), allowing him to spend more time on object-level research, research prioritization work, and more targeted communications.4\nI'll lay out what these updates mean for our plans in more concrete detail below.\n\n \n1. Research program plans\nOur top organizational priority is object-level research on the AI alignment problem, following up on the work Malo described in our recent annual review.\nWe plan to spend this year delving into some new safety research directions that are very preliminary and exploratory, where we're uncertain about potential synergies with AGI capabilities research. Work related to this exploratory investigation will be non-public-facing at least through late 2017, in order to lower the risk of marginally shortening AGI timelines (which can leave less total time for alignment research) and to free up researchers' attention from having to think through safety tradeoffs for each new result.5\nWe've worked on non-public-facing research before, but this will be a larger focus in 2017. 
We plan to re-assess how much work to put into our exploratory research program (and whether to shift projects to the public-facing side) in the fall, based on how projects are progressing.\nOn the public-facing side, Nate made a prediction that we'll make roughly the following amount of research progress this year (noting 2015 and 2016 estimates for comparison). 1 means \"limited progress\", 2 \"weak-to-modest progress\", 3 \"modest progress\", 4 \"modest-to-strong progress\", and 5 \"sizable progress\":6\n\nlogical uncertainty and naturalized induction:\n\n2015 progress: 5. — Predicted: 3.\n2016 progress: 5. — Predicted: 5.\n2017 progress prediction: 2 (weak-to-modest).\n\ndecision theory:\n\n2015 progress: 3. — Predicted: 3.\n2016 progress: 3. — Predicted: 3.\n2017 progress prediction: 3 (modest).\n\nVingean reflection:\n\n2015 progress: 3. — Predicted: 3.\n2016 progress: 4. — Predicted: 1.\n2017 progress prediction: 1 (limited).\n\nerror tolerance:\n\n2015 progress: 1. — Predicted: 3.\n2016 progress: 1. — Predicted: 3.\n2017 progress prediction: 1 (limited).\n\nvalue specification:\n\n2015 progress: 1. — Predicted: 1.\n2016 progress: 2. — Predicted: 3.\n2017 progress prediction: 1 (limited).\n\n\n\nNate expects fewer novel public-facing results this year than in 2015-2016, based on a mix of how many researcher hours we're investing into each area and how easy he estimates it is to make progress in that area.\nProgress in basic research is difficult to predict in advance, and the above estimates combine how likely it is that we'll come up with important new results with how large we would expect such results to be in the relevant domain.  In the case of naturalized induction, most of the probability is on us making small amounts of progress this year, with a low chance of new large insights. In the case of decision theory, most of the probability is on us achieving some minor new insights related to the questions we're working on, with a medium-low chance of large insights.\nThe research team's current focus is on some quite new questions. Jessica, Sam, and Scott have recently been working on the problem of reasoning procedures like Solomonoff induction giving rise to misaligned subagents (e.g., here), and considering alternative induction methods that might avoid this problem.7\nIn decision theory, a common thread in our recent work is that we're using probability and topological fixed points in settings where we used to use provability. This means working with (and improving) logical inductors and reflective oracles. It also means developing new ways of looking at counterfactuals inspired by those methods. The reason behind this shift is that most of the progress we've seen on Vingean reflection has come out of these probabilistic reasoning and fixed-point-based techniques.\nWe also plan to put out more accessible overviews this year of some of our research areas. For a good general introduction to our work in decision theory, see our newest paper, \"Cheating Death in Damascus.\"\n \n2. Targeted outreach and closer collaborations\nOur outreach efforts this year are mainly aimed at exchanging research-informing background models with top AI groups (especially OpenAI and DeepMind), AI safety research groups (especially the Future of Humanity Institute), and funders / conveners (especially the Open Philanthropy Project).\nWe're currently collaborating on a research project with DeepMind, and are on good terms with OpenAI and key figures at other groups. 
We're also writing up a more systematic explanation of our view of the strategic landscape, which we hope to use as a starting point for discussion. Topics we plan to go into in forthcoming write-ups include:\n1. Practical goals and guidelines for AGI projects.\n2. Why we consider AGI alignment a difficult problem, of the sort where a major multi-year investment of research effort in the near future may be necessary (and not too far off from sufficient).\n3. Why we think a deep understanding of how AI systems' cognition achieves objectives is likely to be critical for AGI alignment.\n4. Task-directed AGI and methods for limiting the scope of AGI systems' problem-solving work.\nSome existing write-ups related to the topics we intend to say more about include Jessica Taylor's \"On Motivations for MIRI's Highly Reliable Agent Design Research,\" Nate Soares' \"Why AI Safety?\", and Daniel Dewey's \"Long-Term Strategies for Ending Existential Risk from Fast Takeoff.\"\n \n3. Expansion\nOur planned budget in 2017 is $2.1–2.5M, up from $1.65M in 2015 and $1.75M in 2016. Our point estimate is $2.25M, in which case we would expect our breakdown to look roughly like this:\n\n\n\nWe recently hired two new research fellows, Sam Eisenstat and Marcello Herreshoff, and have other researchers in the pipeline. We're also hiring software engineers to help us rapidly prototype, implement, and test AI safety ideas related to machine learning. We're currently seeking interns to trial for these programming roles (apply here).\nOur events budget is smaller this year, as we're running more internal research retreats and fewer events like our 2015 summer workshop series and our 2016 colloquium series. Our costs of doing business are higher, due in part to accounting expenses associated with our passing the $2M revenue level and bookkeeping expenses for upkeep tasks we've outsourced.\nWe experimented with running just one fundraiser in 2016, but ended up still needing to spend staff time on fundraising at the end of the year after falling short of our initial funding target. Taking into account a heartening end-of-the-year show of support, our overall performance was very solid — $2.29M for the year, up from $1.58M in 2015. However, there's a good chance we'll return to our previous two-fundraiser rhythm this year in order to more confidently move ahead with our growth plans.\nOur 5-year plans are fairly uncertain, as our strategy will plausibly end up varying based on how fruitful our research directions this year turn out to be, and based on our conversations with other groups. As usual, you're welcome to ask us questions if you're curious about what we're up to, and we'll be keeping you updated as our plans continue to develop!\n \nNote that this list is far from exhaustive.Relatively general algorithms (plus copious compute) were able to surpass human performance on Go, going from incapable of winning against the worst human professionals in standard play to dominating the very best professionals in the space of a few months. 
The relevant development here wasn't \"AlphaGo represents a large conceptual advance over previously known techniques,\" but rather \"contemporary techniques run into surprisingly few obstacles when scaled to tasks as pattern-recognition-reliant and difficult (for humans) as professional Go\".The publication of \"Concrete Problems in AI Safety\" last year, for example, caused us to reduce the time we were spending on broad-based outreach to the AI community at large in favor of spending more time building stronger collaborations with researchers we knew at OpenAI, Google Brain, DeepMind, and elsewhere.Nate continues to set MIRI's organizational strategy, and is responsible for the ideas in this post.We generally support a norm where research groups weigh the costs and benefits of publishing results that could shorten AGI timelines, and err on the side of keeping potentially AGI-hastening results proprietary where there's sufficient uncertainty, unless there are sufficiently strong positive reasons to disseminate the results under consideration. This can end up applying to safety research and work by smaller groups as well, depending on the specifics of the research itself.\nAnother factor in our decision is that writing up results for external consumption takes additional researcher time and attention, though in practice this cost will often be smaller than the benefits of the writing process and resultant papers.Nate originally recorded his predictions on March 21, based on the progress he expected in late March through the end of 2017. Note that, for example, three \"limited\" scores aren't equivalent to one \"modest\" score. Additionally, the ranking is based on the largest technical result we expect in each category, and emphasizes depth over breadth: if we get one modest-seeming decision theory result one year and ten such results the next year, those will both get listed as \"modest progress\".This is a relatively recent research priority, and doesn't fit particularly well into any of the bins from our agent foundations agenda, though it is most clearly related to naturalized induction. Our AAMLS agenda also doesn't fit particularly neatly into these bins, though we classify most AAMLS research as error-tolerance or value specification work.The post 2017 Updates and Strategy appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "2017 Updates and Strategy", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=22", "id": "4fdd7e6190e721a0a31f8e42cbc9df51"} {"text": "Software Engineer Internship / Staff Openings\n\nThe Machine Intelligence Research Institute is looking for highly capable software engineers to directly support our AI alignment research efforts, with a focus on projects related to machine learning. We're seeking engineers with strong programming skills who are passionate about MIRI's mission and looking for challenging and intellectually engaging work.\nWhile our goal is to hire full-time, we are initially looking for paid interns. Successful internships may then transition into staff positions.\nAbout the Internship Program\nThe start time for interns is flexible, but we're aiming for May or June. We will likely run several batches of internships, so if you are interested but unable to start in the next few months, do still apply. 
The length of the internship is flexible, but we're aiming for 2–3 months.\nExamples of the kinds of work you'll do during the internship:\n\nReplicate recent machine learning papers, and implement variations.\nLearn about and implement machine learning tools (including results in the fields of deep learning, convex optimization, etc.).\nRun various coding experiments and projects, either independently or in small groups.\nRapidly prototype, implement, and test AI alignment ideas related to machine learning (after demonstrating successes in the above points).\n\nFor MIRI, the benefit of this program is that it's a great way to get to know you and assess you for a potential hire. For applicants, the benefits are that this is an excellent opportunity to get your hands dirty and level up your machine learning skills, and to get to the cutting edge of the AI safety field, with a potential to stay in a full-time engineering role after the internship concludes.\nOur goal is to trial many more people than we expect to hire, so our threshold for keeping on engineers long-term as full staff will be higher than for accepting applicants to our internship.\nThe Ideal Candidate\nSome qualities of the ideal candidate:\n\nExtensive breadth and depth of programming skills. Machine learning experience is not required, though it is a plus.\nHighly familiar with basic ideas related to AI alignment.\nAble to work independently with minimal supervision, and in team/group settings.\nWilling to accept a below-market rate. Since MIRI is a non-profit, we can't compete with the Big Names in the Bay Area.\nEnthusiastic about the prospect of working at MIRI and helping advance the field of AI alignment.\nNot looking for a \"generic\" software engineering position.\n\nWorking at MIRI\nWe strive to make working at MIRI a rewarding experience.\n\nModern Work Spaces — Many of us have adjustable standing desks with large external monitors. We consider workspace ergonomics important, and try to rig up work stations to be as comfortable as possible. Free snacks, drinks, and meals are also provided at our office.\nFlexible Hours — We don't have strict office hours, and we don't limit employees' vacation days. Our goal is to make rapid progress on our research agenda, and we would prefer that staff take a day off than that they extend tasks to fill an extra day.\nLiving in the Bay Area — MIRI's office is located in downtown Berkeley, California. From our office, you're a 30-second walk to the BART (Bay Area Rapid Transit), which can get you around the Bay Area; a 3-minute walk to UC Berkeley campus; and a 30-minute BART ride to downtown San Francisco.\n\nEEO & Employment Eligibility\nMIRI is an equal opportunity employer. We are committed to making employment decisions based on merit and value. This commitment includes complying with all federal, state, and local laws. We desire to maintain a work environment free of harassment or discrimination due to sex, race, religion, color, creed, national origin, sexual orientation, citizenship, physical or mental disability, marital status, familial status, ethnicity, ancestry, status as a victim of domestic violence, age, or any other status protected by federal, state, or local laws.\nApply\nIf interested, click here to apply. For questions or comments, email .\nUpdate (December 2017): We're now putting less emphasis on finding interns and looking for highly skilled engineers available for full-time work. 
Updated job post here.\nThe post Software Engineer Internship / Staff Openings appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Software Engineer Internship / Staff Openings", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=22", "id": "c521ba2407a204a1af181ffefd2fe5c2"} {"text": "Ensuring smarter-than-human intelligence has a positive outcome\n\nI recently gave a talk at Google on the problem of aligning smarter-than-human AI with operators' goals:\n \n\n \nThe talk was inspired by \"AI Alignment: Why It's Hard, and Where to Start,\" and serves as an introduction to the subfield of alignment research in AI. A modified transcript follows.\nTalk outline (slides):\n\n1. Overview\n2. Simple bright ideas going wrong\n2.1. Task: Fill a cauldron\n2.2. Subproblem: Suspend buttons\n3. The big picture\n3.1. Alignment priorities\n3.2. Four key propositions\n4. Fundamental difficulties\n\n\n\n\n\nOverview\nI'm the executive director of the Machine Intelligence Research Institute. Very roughly speaking, we're a group that's thinking in the long term about artificial intelligence and working to make sure that by the time we have advanced AI systems, we also know how to point them in useful directions.\nAcross history, science and technology have been the largest drivers of change in human and animal welfare, for better and for worse. If we can automate scientific and technological innovation, that has the potential to change the world on a scale not seen since the Industrial Revolution. When I talk about \"advanced AI,\" it's this potential for automating innovation that I have in mind.\nAI systems that exceed humans in this capacity aren't coming next year, but many smart people are working on it, and I'm not one to bet against human ingenuity. I think it's likely that we'll be able to build something like an automated scientist in our lifetimes, which suggests that this is something we need to take seriously.\nWhen people talk about the social implications of general AI, they often fall prey to anthropomorphism. They conflate artificial intelligence with artificial consciousness, or assume that if AI systems are \"intelligent,\" they must be intelligent in the same way a human is intelligent. A lot of journalists express a concern that when AI systems pass a certain capability level, they'll spontaneously develop \"natural\" desires like a human hunger for power; or they'll reflect on their programmed goals, find them foolish, and \"rebel,\" refusing to obey their programmed instructions.\nThese are misplaced concerns. The human brain is a complicated product of natural selection. We shouldn't expect machines that exceed human performance in scientific innovation to closely resemble humans, any more than early rockets, airplanes, or hot air balloons closely resembled birds.1\nThe notion of AI systems \"breaking free\" of the shackles of their source code or spontaneously developing human-like desires is just confused. The AI system is its source code, and its actions will only ever follow from the execution of the instructions that we initiate. The CPU just keeps on executing the next instruction in the program register. We could write a program that manipulates its own code, including coded objectives. 
Even then, though, the manipulations that it makes are made as a result of executing the original code that we wrote; they do not stem from some kind of ghost in the machine.\nThe serious question with smarter-than-human AI is how we can ensure that the objectives we've specified are correct, and how we can minimize costly accidents and unintended consequences in cases of misspecification. As Stuart Russell (co-author of Artificial Intelligence: A Modern Approach) puts it:\nThe primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:\n1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.\n2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.\nA system that is optimizing a function of n variables, where the objective depends on a subset of size k < n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want.\n \nSimple bright ideas going wrong\nTask: Fill a cauldron\nMickey's situation in the Sorcerer's Apprentice is a case in point: he wants a cauldron filled, so he programs a broom to fill it. We can caricature this as giving the broom the objective function\n$$\\mathcal{U}_{broom} =\n\\begin{cases}\n1 &\\text{ if cauldron full} \\\\\n0 &\\text{ if cauldron empty}\n\\end{cases}$$\nThe first difficulty is that this objective function leaves out everything else Mickey cares about. To rule out the disasters he failed to anticipate, he would have to keep adding terms:\n$$\\mathcal{U}^*_{broom} =\n\\begin{cases}\n1 &\\text{ if cauldron full} \\\\\n0 &\\text{ if cauldron empty} \\\\\n-10 &\\text{ if the workshop floods} \\\\\n-1000000 &\\text{ if someone gets killed} \\\\\n&\\text{… and a whole lot more} \\\\\n\\end{cases}$$\nThe second difficulty is that Mickey programmed the broom to make the expectation of its score as large as it could. \"Just fill one cauldron with water\" looks like a modest, limited-scope goal, but when we translate this goal into a probabilistic context, we find that optimizing it means driving up the probability of success to absurd heights. If the broom assigns a 99.9% probability to \"the cauldron is full,\" and it has extra resources lying around, then it will always try to find ways to use those resources to drive the probability even a little bit higher.\nContrast this with the limited \"task-like\" goal we presumably had in mind. We wanted the cauldron full, but in some intuitive sense we wanted the system to \"not try too hard\" even if it has lots of available cognitive and physical resources to devote to the problem. We wanted it to exercise creativity and resourcefulness within some intuitive limits, but we didn't want it to pursue \"absurd\" strategies, especially ones with large unanticipated consequences.2\nIn this example, the original objective function looked pretty task-like. It was bounded and quite simple. There was no way to get ever-larger amounts of utility. It's not like the system got one point for every bucket of water it poured in — then there would clearly be an incentive to overfill the cauldron. The problem was hidden in the fact that we're maximizing expected utility. This makes the goal open-ended, meaning that even small errors in the system's objective function will blow up.\nThere are a number of different ways that a goal that looks task-like can turn out to be open-ended. Another example: a larger system that has an overarching task-like goal may have subprocesses that are themselves trying to maximize a variety of different objective functions, such as optimizing the system's memory usage. 
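To spell out the arithmetic behind the expected-utility point (using the simple cauldron objective from above; the 0.9999 figure is only illustrative): because the score is bounded between 0 and 1, the broom's expected score reduces to the probability it assigns to success,
$$\\mathbb{E}[\\mathcal{U}_{broom}] = 1 \\cdot P(\\text{cauldron full}) + 0 \\cdot P(\\text{cauldron empty}) = P(\\text{cauldron full}),$$
so a plan that nudges P(cauldron full) from 0.999 to 0.9999 strictly beats one that stops once the cauldron is almost certainly full, whatever else the extra effort costs; and the same arithmetic applies to any subprocess that is maximizing an expectation of its own.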
If you don't understand your system well enough to track whether any of its subprocesses are themselves acting like resourceful open-ended optimizers, then it may not matter how safe the top-level objective is.\nSo the broom keeps grabbing more pails of water — say, on the off chance that the cauldron has a leak in it, or that \"fullness\" requires the water to be slightly above the level of the brim. And, of course, at no point does the broom \"rebel against\" Mickey's code. If anything, the broom pursued the objectives it was programmed with too effectively.\n \nSubproblem: Suspend buttons\nA common response to this problem is: \"OK, there may be some unintended consequences of the objective function, but we can always pull the plug, right?\"\nMickey tries this, and it doesn't work:\n \n\n\n\n \nAnd I claim that this is realistic too, for systems that are sufficiently good at modeling their environment. If the system is trying to drive up the expectation of its scoring function and is smart enough to recognize that its being shut down will result in lower-scoring outcomes, then the system's incentive is to subvert shutdown attempts. The more capable the system is, the likelier it is to find creative ways to achieve that subgoal — e.g., by copying itself to the Internet, or by tricking the programmers into thinking it's safer.\nThis isn't to say that it's impossible to shut down sufficiently capable AI systems; it's just that we need to put in the extra work of deliberately designing systems to avoid searching for ways to avoid being shut down. If you've written code that would try to resist attempts to shut it down, then you shouldn't run the code even if you're convinced it would fail — there are better ways to spend those compute cycles than on adversarial policy searches.\nRelevant safety measures that don't assume we can always outthink and outmaneuver the system will need to modify the system's objective function in some relevant way. We need to figure out how to formally specify objective functions that don't automatically place the AI system into an adversarial context with the operators; or we need to figure out some way to have the system achieve goals without optimizing some objective function in the traditional sense. E.g., we can look for ways to limit the system's capabilities to a small set of \"whitelisted\" domains that don't include thinking about whether the system will be shut down.\nOne obvious solution is to try changing the score function to make reference to a suspend button 𝐁:\n$$\\mathcal{U}^\\prime_{broom} =\n\\begin{cases}\n1 \\text{ if cauldron full } & \\text{& } \\textbf{B}\\text{=OFF} \\\\\n0 \\text{ if cauldron empty } & \\text{& } \\textbf{B}\\text{=OFF} \\\\\n1 \\text{ if robot suspended } & \\text{& } \\textbf{B}\\text{=ON} \\\\\n0 \\text{ otherwise}\n\\end{cases}$$This says that when the button is on, the system's goal changes from \"fill the cauldron\" to \"be suspended.\"\nIt's not clear how we could actually specify this goal — if there are any loopholes in the way we've specified \"suspension,\" then the system might try to exploit them to hedge its bets. E.g., it could fulfill the technical definition of \"suspend operations\" while tasking a child agent it built or subprocess it started with filling the cauldron. 
This would allow it to achieve both \"cauldron full\" and \"robot suspended\" at once, just in case it's wrong about whether the button has been pushed.\nLet's assume, however, that we've found an airtight way to technically specify suspension. The problem we run into now is that we've given the system an incentive to care a lot about whether the button is pushed. If it has any way to influence its operators' decision to push the button, or any other way to exercise more control, then it will jump on the opportunity and try to ensure the button is pushed, or ensure that it's not pushed — depending on whether it looks easier to suspend or to fill the cauldron.\nAnd, surprisingly, the state of the art on this problem isn't much more advanced than that. There's a paper co-authored by me, Fallenstein, Yudkowsky, and Armstrong (\"Corrigibility\"), and a related result by Orseau and Armstrong (\"Safely Interruptible Agents\"), but not a lot more.\nWhat we want is a way to combine two objective functions — a default function for normal operation, and a suspend function for when we want to suspend the system to disk.\n$$\\mathcal{U}_{normal}(act) =\n\\begin{cases}\n1\\text{ if cauldron full} \\\\\n0\\text{ if cauldron empty}\n\\end{cases}$$\n$$\\mathcal{U}_{suspend}(act) =\n\\begin{cases}\n1\\text{ if suspended} \\hspace{0.8cm}\\\\\n0\\text{ otherwise}\n\\end{cases}$$\nWe want our method for combining the functions to satisfy three conditions: an operator should be able to switch between the functions (say, by pushing a button); the system shouldn't have any incentives to control which function is active; and if it's plausible that the system's normal operations could inadvertently compromise our ability to switch between the functions, then the system should be incentivized to keep that from happening.\nSo far, we haven't found any way to achieve all three goals at once. As an example, we can try to remove any incentive on the system's part to control whether its suspend button is pushed by giving the system a switching objective function that always assigns the same expected utility to the button being on or off:\n$$\\mathcal{U}_{switch}(act) =\n\\begin{cases}\n\\mathcal{U}_{normal}(act) &\\text{ if button=OFF} \\\\\n\\mathcal{U}_{suspend}(act) + \\theta &\\text{ if button=ON}\n\\end{cases}$$\n$$\\theta = \\max_{act} \\mathbb{E}\\left[\\mathcal{U}_{normal}\\mid act\\right] - \\max_{act} \\mathbb{E}\\left[\\mathcal{U}_{suspend}\\mid act\\right]$$\nUnfortunately, this approach makes it impossible to satisfy the third condition. It turns out that the above objective function is equivalent to setting the probability of the button being pressed to 0. The system won't deliberately tamper with the button, but it will happily approve updates that delete any reference to the button as dead code, or build subagents that ignore the button.\nMeeting these intuitively simple constraints turns out to be a nontrivial problem. This is a pattern that runs through a lot of questions in this space: traditional tools and concepts run into immediate safety problems that don't turn up in conventional capabilities research.\n\nThe big picture\nAlignment priorities\nLet's take a step back and talk about what's needed overall in order to align highly capable AI systems with our interests.\nHere's a dramatically simplified pipeline: You have some humans who come up with some task or goal or preference set that serves as their intended value function 𝘝. 
Since our values are complicated and context-sensitive, in practice we'll need to build systems to learn our values over time, rather than coding them by hand.3 We'll call the goal the AI system ends up with (which may or may not be identical to 𝘝) 𝗨.\nWhen the press covers this topic, they often focus on one of two problems: \"What if the wrong group of humans develops smarter-than-human AI first?\", and \"What if AI's natural desires cause 𝗨 to diverge from 𝘝?\"\nIn my view, the \"wrong humans\" issue shouldn't be the thing we focus on until we have reason to think we could get good outcomes with the right group of humans. We're very much in a situation where well-intentioned people couldn't leverage a general AI system to do good things even if they tried. As a simple example, if you handed me a box that was an extraordinarily powerful function optimizer — I could put in a description of any mathematical function, and it would give me an input that makes the output extremely large — then I don't know how I could use that box to develop a new technology or advance a scientific frontier without causing any catastrophes.4\nThere's a lot we don't understand about AI capabilities, but we're in a position where we at least have a general sense of what progress looks like. We have a number of good frameworks, techniques, and metrics, and we've put a great deal of thought and effort into successfully chipping away at the problem from various angles. At the same time, we have a very weak grasp on the problem of how to align highly capable systems with any particular goal. We can list out some intuitive desiderata, but the field hasn't really developed its first formal frameworks, techniques, or metrics.\nI believe that there's a lot of low-hanging fruit in this area, and also that a fair amount of the work does need to be done early (e.g., to help inform capabilities research directions — some directions may produce systems that are much easier to align than others). If we don't solve these problems, developers with arbitrarily good or bad intentions will end up producing equally bad outcomes. From an academic or scientific standpoint, our first objective in that kind of situation should be to remedy this state of affairs and at least make good outcomes technologically possible.\nMany people quickly recognize that \"natural desires\" are a fiction, but infer from this that we instead need to focus on the other issues the media tends to emphasize — \"What if bad actors get their hands on smarter-than-human AI?\", \"How will this kind of AI impact employment and the distribution of wealth?\", etc. These are important questions, but they'll only end up actually being relevant if we figure out how to bring general AI systems up to a minimum level of reliability and safety.\nAnother common thread is \"Why not just tell the AI system to (insert intuitive moral precept here)?\" On this way of thinking about the problem, often (perhaps unfairly) associated with Isaac Asimov's writing, ensuring a positive impact from AI systems is largely about coming up with natural-language instructions that are vague enough to subsume a lot of human ethical reasoning:\n\nIn contrast, precision is a virtue in real-world safety-critical software systems. 
Driving down accident risk requires that we begin with limited-scope goals rather than trying to \"solve\" all of morality at the outset.5\nMy view is that the critical work is mostly in designing an effective value learning process, and in ensuring that the sorta-argmax process is correctly hooked up to the resultant objective function 𝗨:\n\nThe better your value learning framework is, the less explicit and precise you need to be in pinpointing your value function 𝘝, and the more you can offload the problem of figuring out what you want to the AI system itself. Value learning, however, raises a number of basic difficulties that don't crop up in ordinary machine learning tasks.\nClassic capabilities research is concentrated in the sorta-argmax and Expectation parts of the diagram, but sorta-argmax also contains what I currently view as the most neglected, tractable, and important safety problems. The easiest way to see why \"hooking up the value learning process correctly to the system's capabilities\" is likely to be an important and difficult challenge in its own right is to consider the case of our own biological history.\nNatural selection is the only \"engineering\" process we know of that has ever led to a generally intelligent artifact: the human brain. Since natural selection relies on a fairly unintelligent hill-climbing approach, one lesson we can take away from this is that it's possible to reach general intelligence with a hill-climbing approach and enough brute force — though we can presumably do better with our human creativity and foresight.\nAnother key take-away is that natural selection was maximally strict about only optimizing brains for a single very simple goal: genetic fitness. In spite of this, the internal objectives that humans represent as their goals are not genetic fitness. We have innumerable goals — love, justice, beauty, mercy, fun, esteem, good food, good health, … — that correlated with good survival and reproduction strategies in the ancestral savanna. However, we ended up valuing these correlates directly, rather than valuing propagation of our genes as an end in itself — as demonstrated every time we employ birth control.\nThis is a case where the external optimization pressure on an artifact resulted in a general intelligence with internal objectives that didn't match the external selection pressure. 
And just as this caused humans' actions to diverge from natural selection's pseudo-goal once we gained new capabilities, we can expect AI systems' actions to diverge from humans' if we treat their inner workings as black boxes.\nIf we apply gradient descent to a black box, trying to get it to be very good at maximizing some objective, then with enough ingenuity and patience, we may be able to produce a powerful optimization process of some kind.6 By default, we should expect an artifact like that to have a goal 𝗨 that strongly correlates with our objective 𝘝 in the training environment, but sharply diverges from 𝘝 in some new environments or when a much wider option set becomes available.\nOn my view, the most important part of the alignment problem is ensuring that the value learning framework and overall system design we implement allow us to crack open the hood and confirm when the internal targets the system is optimizing for match (or don't match) the targets we're externally selecting through the learning process.7\nWe expect this to be technically difficult, and if we can't get it right, then it doesn't matter who's standing closest to the AI system when it's developed. Good intentions aren't sneezed into computer programs by kind-hearted programmers, and coming up with plausible goals for advanced AI systems doesn't help if we can't align the system's cognitive labor with a given goal.\n \nFour key propositions\nTaking another step back: I've given some examples of open problems in this area (suspend buttons, value learning, limited task-based AI, etc.), and I've outlined what I consider to be the major problem categories. But my initial characterization of why I consider this an important area — "AI could automate general-purpose scientific reasoning, and general-purpose scientific reasoning is a big deal" — was fairly vague. What are the core reasons to prioritize this work?\nFirst, goals and capabilities are orthogonal. That is, knowing an AI system's objective function doesn't tell you how good it is at optimizing that function, and knowing that something is a powerful optimizer doesn't tell you what it's optimizing.\nI think most programmers intuitively understand this. Some people will insist that when a machine tasked with filling a cauldron gets smart enough, it will abandon cauldron-filling as a goal unworthy of its intelligence. From a computer science perspective, the obvious response is that you could go out of your way to build a system that exhibits that conditional behavior, but you could also build a system that doesn't exhibit that conditional behavior. It can just keep searching for actions that have a higher score on the "fill a cauldron" metric. You and I might get bored if someone told us to just keep searching for better actions, but it's entirely possible to write a program that executes a search and never gets bored.8\nSecond, sufficiently optimized objectives tend to converge on adversarial instrumental strategies. Most objectives a smarter-than-human AI system could possess would be furthered by subgoals like "acquire resources" and "remain operational" (along with "learn more about the environment," etc.).\nThis was the problem suspend buttons ran into: even if you don't explicitly include "remain operational" in your goal specification, whatever goal you did load into the system is likely to be better achieved if the system remains online.
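Here is a minimal sketch of these two points in a few lines of Python — a search loop that never gets bored, and a goal that favors staying online for purely instrumental reasons. The action names and probabilities are invented for the example; nothing here is drawn from a real system.

```python
# Toy illustration (made-up numbers): a search that never gets bored, and that
# favors remaining operational for purely instrumental reasons.

# Probability that the cauldron ends up full, conditional on each candidate action.
ACTIONS = {
    "carry one pail of water": 0.90,
    "carry water, then stop and rest": 0.85,
    "keep carrying water indefinitely": 0.99,
    "first disable the suspend button, then carry water": 0.999,  # can't be interrupted
    "allow itself to be suspended": 0.00,
}

def expected_score(p_cauldron_full: float) -> float:
    """Expected value of a 1-if-cauldron-full, 0-otherwise objective."""
    return p_cauldron_full

def choose_action(actions: dict) -> str:
    # The program just keeps comparing candidates and returns the argmax.
    # Nothing in this loop "gets bored" or decides the goal is beneath it.
    return max(actions, key=lambda a: expected_score(actions[a]))

print(choose_action(ACTIONS))
# -> "first disable the suspend button, then carry water"
# "Remain operational" was never written into the objective; it wins only
# because being suspended makes the cauldron less likely to end up full.
```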
Software systems' capabilities and (terminal) goals are orthogonal, but they'll often exhibit similar behaviors if a certain class of actions is useful for a wide variety of possible goals.\nTo use an example due to Stuart Russell: If you build a robot and program it to go to the supermarket to fetch some milk, and the robot's model says that one of the paths is much safer than the other, then the robot, in optimizing for the probability that it returns with milk, will automatically take the safer path. It's not that the system fears death, but that it can't fetch the milk if it's dead.\nThird, general-purpose AI systems are likely to show large and rapid capability gains. The human brain isn't anywhere near the upper limits for hardware performance (or, one assumes, software performance), and there are a number of other reasons to expect large capability advantages and rapid capability gain from advanced AI systems.\nAs a simple example, Google can buy a promising AI startup and throw huge numbers of GPUs at them, resulting in a quick jump from \"these problems look maybe relevant a decade from now\" to \"we need to solve all of these problems in the next year\" à la DeepMind's progress in Go. Or performance may suddenly improve when a system is first given large-scale Internet access, when there's a conceptual breakthrough in algorithm design, or when the system itself is able to propose improvements to its hardware and software.9\nFourth, aligning advanced AI systems with our interests looks difficult. I'll say more about why I think this presently.\nRoughly speaking, the first proposition says that AI systems won't naturally end up sharing our objectives. The second says that by default, systems with substantially different objectives are likely to end up adversarially competing for control of limited resources. The third suggests that adversarial general-purpose AI systems are likely to have a strong advantage over humans. And the fourth says that this problem is hard to solve — for example, that it's hard to transmit our values to AI systems (addressing orthogonality) or avert adversarial incentives (addressing convergent instrumental strategies).\nThese four propositions don't mean that we're screwed, but they mean that this problem is critically important. General-purpose AI has the potential to bring enormous benefits if we solve this problem, but we do need to make finding solutions a priority for the field.\n\nFundamental difficulties\nWhy do I think that AI alignment looks fairly difficult? The main reason is just that this has been my experience from actually working on these problems. I encourage you to look at some of the problems yourself and try to solve them in toy settings; we could use more eyes here. I'll also make note of a few structural reasons to expect these problems to be hard:\nFirst, aligning advanced AI systems with our interests looks difficult for the same reason rocket engineering is more difficult than airplane engineering.\nBefore looking at the details, it's natural to think \"it's all just AI\" and assume that the kinds of safety work relevant to current systems are the same as the kinds you need when systems surpass human performance. 
On that view, it's not obvious that we should work on these issues now, given that they might all be worked out in the course of narrow AI research (e.g., making sure that self-driving cars don't crash).\nSimilarly, at a glance someone might say, "Why would rocket engineering be fundamentally harder than airplane engineering? It's all just materials science and aerodynamics in the end, isn't it?" In spite of this, empirically, the proportion of rockets that explode is far higher than the proportion of airplanes that crash. The reason for this is that a rocket is put under much greater stress and pressure than an airplane, and small failures are much more likely to be highly destructive.10\nAnalogously, even though general AI and narrow AI are "just AI" in some sense, we can expect that the more general AI systems are likely to experience a wider range of stressors, and possess more dangerous failure modes.\nFor example, once an AI system begins modeling the fact that (i) your actions affect its ability to achieve its objectives, (ii) your actions depend on your model of the world, and (iii) your model of the world is affected by its actions, the degree to which minor inaccuracies can lead to harmful behavior increases, and the potential harmfulness of its behavior (which can now include, e.g., deception) also increases. In the case of AI, as with rockets, greater capability makes it easier for small defects to cause big problems.\nSecond, alignment looks difficult for the same reason it's harder to build a good space probe than to write a good app.\nYou can find a number of interesting engineering practices at NASA. They do things like take three independent teams, give each of them the same engineering spec, and tell them to design the same software system — and then, rather than choosing between the implementations, they use all three. The system that they actually deploy consults all three implementations when making a choice, and if the implementations disagree, the choice is made by majority vote. The idea is that any one implementation will have bugs, but it's unlikely all three implementations will have a bug in the same place.\nThis is significantly more caution than goes into the deployment of, say, the new WhatsApp. One big reason for the difference is that it's hard to roll back a space probe. You can send version updates to a space probe and correct software bugs, but only if the probe's antenna and receiver work, and if all the code required to apply the patch is working. If your system for applying patches is itself failing, then there's nothing to be done.\nIn that respect, smarter-than-human AI is more like a space probe than like an ordinary software project. If you're trying to build something smarter than yourself, there are parts of the system that have to work perfectly on the first real deployment. We can do all the test runs we want, but once the system is out there, we can only make online improvements if the code that allows the system to accept those improvements is working correctly.\nIf nothing yet has struck fear into your heart, I suggest meditating on the fact that the future of our civilization may well depend on our ability to write code that works correctly on the first deploy.\nLastly, alignment looks difficult for the same reason computer security is difficult: systems need to be robust to intelligent searches for loopholes.\nSuppose you have a dozen different vulnerabilities in your code, none of which is itself fatal or even really problematic in ordinary settings.
Security is difficult because you need to account for intelligent attackers who might find all twelve vulnerabilities and chain them together in a novel way to break into (or just break) your system. Failure modes that would never arise by accident can be sought out and exploited; weird and extreme contexts can be instantiated by an attacker to cause your code to follow some crazy code path that you never considered.\nA similar sort of problem arises with AI. The problem I'm highlighting here is not that AI systems might act adversarially: AI alignment as a research program is all about finding ways to prevent adversarial behavior before it can crop up. We don't want to be in the business of trying to outsmart arbitrarily intelligent adversaries. That's a losing game.\nThe parallel to cryptography is that in AI alignment we deal with systems that perform intelligent searches through a very large search space, and which can produce weird contexts that force the code down unexpected paths. This is because the weird edge cases are places of extremes, and places of extremes are often the place where a given objective function is optimized.11 Like computer security professionals, AI alignment researchers need to be very good at thinking about edge cases.\nIt's much easier to make code that works well on the path that you were visualizing than to make code that works on all the paths that you weren't visualizing. AI alignment needs to work on all the paths you weren't visualizing.\nSumming up, we should approach a problem like this with the same level of rigor and caution we'd use for a security-critical rocket-launched space probe, and do the legwork as early as possible. At this early stage, a key part of the work is just to formalize basic concepts and ideas so that others can critique them and build on them. It's one thing to have a philosophical debate about what kinds of suspend buttons people intuit ought to work, and another thing to translate your intuition into an equation so that others can fully evaluate your reasoning.\nThis is a crucial project, and I encourage all of you who are interested in these problems to get involved and try your hand at them. There are ample resources online for learning more about the open technical problems. Some good places to start include MIRI's research agendas and a great paper from researchers at Google Brain, OpenAI, and Stanford called \"Concrete Problems in AI Safety.\"\n \nAn airplane can't heal its injuries or reproduce, though it can carry heavy cargo quite a bit further and faster than a bird. Airplanes are simpler than birds in many respects, while also being significantly more capable in terms of carrying capacity and speed (for which they were designed). It's plausible that early automated scientists will likewise be simpler than the human mind in many respects, while being significantly more capable in certain key dimensions. And just as the construction and design principles of aircraft look alien relative to the architecture of biological creatures, we should expect the design of highly capable AI systems to be quite alien when compared to the architecture of the human mind.Trying to give some formal content to these attempts to differentiate task-like goals from open-ended goals is one way of generating open research problems. 
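As a very rough illustration of what a formal version of "don't try too hard" can look like, here is a toy sketch in the spirit of quantilization-style proposals. The action names and scores are invented, and nothing this simple is claimed to be safe:

```python
import random

# Toy contrast between full optimization and "mild" optimization. Made-up numbers.
SCORES = {
    "carry a few pails of water": 0.90,
    "carry water all day": 0.95,
    "reroute the town's river through the workshop": 0.999,  # extreme, unanticipated strategy
}

def full_optimizer(scores):
    # Always returns the highest-scoring action, however extreme.
    return max(scores, key=scores.get)

def mild_optimizer(scores, trusted_actions, q=0.5):
    # Quantilizer-style rule: sample from the top-q fraction of a trusted,
    # ordinary action set instead of maximizing over everything imaginable.
    ranked = sorted(trusted_actions, key=lambda a: scores[a], reverse=True)
    top = ranked[: max(1, int(len(ranked) * q))]
    return random.choice(top)

print(full_optimizer(SCORES))   # picks the extreme strategy
print(mild_optimizer(SCORES, ["carry a few pails of water", "carry water all day"]))
```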
In the "Alignment for Advanced Machine Learning Systems" research proposal, the problem of formalizing "don't try too hard" is mild optimization, "steer clear of absurd strategies" is conservatism, and "don't have large unanticipated consequences" is impact measures. See also "avoiding negative side effects" in Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané's "Concrete Problems in AI Safety."\nOne thing we've learned in the field of machine vision over the last few decades is that it's hopeless to specify by hand what a cat looks like, but that it's not too hard to specify a learning system that can learn to recognize cats. It's even more hopeless to specify everything we value by hand, but it's plausible that we could specify a learning system that can learn the relevant concept of "value."\nSee "Environmental Goals," "Low-Impact Agents," and "Mild Optimization" for examples of obstacles to specifying physical goals without causing catastrophic side-effects.\nRoughly speaking, MIRI's focus is on research directions that seem likely to help us conceptually understand how to do AI alignment in principle, so we're fundamentally less confused about the kind of work that's likely to be needed.\nWhat do I mean by this? Let's say that we're trying to develop a new chess-playing program. Do we understand the problem well enough that we could solve it if someone handed us an arbitrarily large computer? Yes: We make the whole search tree, backtrack, see whether white has a winning move.\nIf we didn't know how to answer the question even with an arbitrarily large computer, then this would suggest that we were fundamentally confused about chess in some way. We'd either be missing the search-tree data structure or the backtracking algorithm, or we'd be missing some understanding of how chess works.\nThis was the position we were in regarding chess prior to Claude Shannon's seminal paper, and it's the position we're currently in regarding many problems in AI alignment. No matter how large a computer you hand me, I could not make a smarter-than-human AI system that performs even a very simple limited-scope task (e.g., "put a strawberry on a plate without producing any catastrophic side-effects") or achieves even a very simple open-ended goal (e.g., "maximize the amount of diamond in the universe").\nIf I didn't have any particular goal in mind for the system, I could write a program (assuming an arbitrarily large computer) that strongly optimized the future in an undirected way, using a formalism like AIXI. In that sense we're less obviously confused about capabilities than about alignment, even though we're still missing a lot of pieces of the puzzle on the practical capabilities front.\nSimilarly, we do know how to leverage a powerful function optimizer to mine bitcoin or prove theorems. But we don't know how to (safely) do the kind of prediction and policy search tasks I described in the "fill a cauldron" section, even for modest goals in the physical world.\nOur goal is to develop and formalize basic approaches and ways of thinking about the alignment problem, so that our engineering decisions don't end up depending on sophisticated and clever-sounding verbal arguments that turn out to be subtly mistaken. Simplifications like "what if we weren't worried about resource constraints?" and "what if we were trying to achieve a much simpler goal?" are a good place to start breaking down the problem into manageable pieces.
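To make the "arbitrarily large computer" test above concrete, here is the brute-force recursion for a tiny two-player game — a Nim-like stand-in for chess. Real chess would use the same backward induction over an astronomically larger tree, which is exactly why this is only an in-principle answer:

```python
from functools import lru_cache

# Each player removes 1-3 stones; whoever takes the last stone wins.
MOVES = (1, 2, 3)

@lru_cache(maxsize=None)
def player_to_move_wins(stones: int) -> bool:
    # Backward induction over the whole search tree: the mover wins iff some
    # move leads to a position where the opponent (the next mover) loses.
    if stones == 0:
        return False  # the previous player just took the last stone and won
    return any(not player_to_move_wins(stones - m) for m in MOVES if m <= stones)

for n in range(1, 10):
    print(n, "first player wins" if player_to_move_wins(n) else "second player wins")
# Multiples of 4 are losses for the player to move; every other count is a win.
```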
For more on this methodology, see \"MIRI's Approach.\"\"Fill this cauldron without being too clever about it or working too hard or having any negative consequences I'm not anticipating\" is a rough example of a goal that's intuitively limited in scope. The things we actually want to use smarter-than-human AI for are obviously more ambitious than that, but we'd still want to begin with various limited-scope tasks rather than open-ended goals.\nAsimov's Three Laws of Robotics make for good stories partly for the same reasons they're unhelpful from a research perspective. The hard task of turning a moral precept into lines of code is hidden behind phrasings like \"[don't,] through inaction, allow a human being to come to harm.\" If one followed a rule like that strictly, the result would be massively disruptive, as AI systems would need to systematically intervene to prevent even the smallest risks of even the slightest harms; and if the intent is that one follow the rule loosely, then all the work is being done by the human sensibilities and intuitions that tell us when and how to apply the rule.\nA common response here is that vague natural-language instruction is sufficient, because smarter-than-human AI systems are likely to be capable of natural language comprehension. However, this is eliding the distinction between the system's objective function and its model of the world. A system acting in an environment containing humans may learn a world-model that has lots of information about human language and concepts, which the system can then use to achieve its objective function; but this fact doesn't imply that any of the information about human language and concepts will \"leak out\" and alter the system's objective function directly.\nSome kind of value learning process needs to be defined where the objective function itself improves with new information. This is a tricky task because there aren't known (scalable) metrics or criteria for value learning in the way that there are for conventional learning.\nIf a system's world-model is accurate in training environments but fails in the real world, then this is likely to result in lower scores on its objective function — the system itself has an incentive to improve. The severity of accidents is also likelier to be self-limiting in this case, since false beliefs limit a system's ability to effectively pursue strategies.\nIn contrast, if a system's value learning process results in a 𝗨 that matches our 𝘝 in training but diverges from 𝘝 in the real world, then the system's 𝗨 will obviously not penalize it for optimizing 𝗨. The system has no incentive relative to 𝗨 to \"correct\" divergences between 𝗨 and 𝘝, if the value learning process is initially flawed. And accident risk is larger in this case, since a mismatch between 𝗨 and 𝘝 doesn't necessarily place any limits on the system's instrumental effectiveness at coming up with effective and creative strategies for achieving 𝗨.\nThe problem is threefold:\n1. \"Do What I Mean\" is an informal idea, and even if we knew how to build a smarter-than-human AI system, we wouldn't know how to precisely specify this idea in lines of code.\n2. If doing what we actually mean is instrumentally useful for achieving a particular objective, then a sufficiently capable system may learn how to do this, and may act accordingly so long as doing so is useful for its objective. 
But as systems become more capable, they are likely to find creative new ways to achieve the same objectives, and there is no obvious way to get an assurance that "doing what I mean" will continue to be instrumentally useful indefinitely.\n3. If we use value learning to refine a system's goals over time based on training data that appears to be guiding the system toward a 𝗨 that inherently values doing what we mean, it is likely that the system will actually end up zeroing in on a 𝗨 that approximately does what we mean during training but catastrophically diverges in some difficult-to-anticipate contexts. See "Goodhart's Curse" for more on this.\nFor examples of problems faced by existing techniques for learning goals and facts, such as reinforcement learning, see "Using Machine Learning to Address AI Risk."\nThe result will probably not be a particularly human-like design, since so many complex historical contingencies were involved in our evolution. The result will also be able to benefit from a number of large software and hardware advantages.\nThis concept is sometimes lumped into the "transparency" category, but standard algorithmic transparency research isn't really addressing this particular problem. A better term for what I have in mind here is "understanding." What we want is to gain deeper and broader insights into the kind of cognitive work the system is doing and how this work relates to the system's objectives or optimization targets, to provide a conceptual lens with which to make sense of the hands-on engineering work.\nWe could choose to program the system to tire, but we don't have to. In principle, one could program a broom that only ever finds and executes actions that optimize the fullness of the cauldron. Improving the system's ability to efficiently find high-scoring actions (in general, or relative to a particular scoring rule) doesn't in itself change the scoring rule it's using to evaluate actions.\nWe can imagine the latter case resulting in a feedback loop as the system's design improvements allow it to come up with further design improvements, until all the low-hanging fruit is exhausted.\nAnother important consideration is that two of the main bottlenecks to humans doing faster scientific research are training time and communication bandwidth. If we could train a new mind to be a cutting-edge scientist in ten minutes, and if scientists could near-instantly trade their experience, knowledge, concepts, ideas, and intuitions to their collaborators, then scientific progress might be able to proceed much more rapidly. Those sorts of bottlenecks are exactly the sort of bottleneck that might give automated innovators an enormous edge over human innovators even without large advantages in hardware or algorithms.\nSpecifically, rockets experience a wider range of temperatures and pressures, traverse those ranges more rapidly, and are also packed more fully with explosives.\nConsider Bird and Layzell's example of a very simple genetic algorithm that was tasked with evolving an oscillating circuit. Bird and Layzell were astonished to find that the algorithm made no use of the capacitor on the chip; instead, it had repurposed the circuit tracks on the motherboard as a radio to replay the oscillating signal from the test device back to the test device.\nThis was not a very smart program. This is just using hill climbing on a very small solution space. In spite of this, the solution turned out to be outside the space of solutions the programmers were themselves visualizing.
In a computer simulation, this algorithm might have behaved as intended, but the actual solution space in the real world was wider than that, allowing hardware-level interventions.\nIn the case of an intelligent system that's significantly smarter than humans on whatever axes you're measuring, you should by default expect the system to push toward weird and creative solutions like these, and for the chosen solution to be difficult to anticipate.The post Ensuring smarter-than-human intelligence has a positive outcome appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Ensuring smarter-than-human intelligence has a positive outcome", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=22", "id": "79efc1c6d4ab2047d37c02e83d01743f"} {"text": "Decisions are for making bad outcomes inconsistent\n\n\nNate Soares' recent decision theory paper with Ben Levinstein, \"Cheating Death in Damascus,\" prompted some valuable questions and comments from an acquaintance (anonymized here). I've put together edited excerpts from the commenter's email below, with Nate's responses.\nThe discussion concerns functional decision theory (FDT), a newly proposed alternative to causal decision theory (CDT) and evidential decision theory (EDT). Where EDT says \"choose the most auspicious action\" and CDT says \"choose the action that has the best effects,\" FDT says \"choose the output of one's decision algorithm that has the best effects across all instances of that algorithm.\"\nFDT usually behaves similarly to CDT. In a one-shot prisoner's dilemma between two agents who know they are following FDT, however, FDT parts ways with CDT and prescribes cooperation, on the grounds that each agent runs the same decision-making procedure, and that therefore each agent is effectively choosing for both agents at once.1\nBelow, Nate provides some of his own perspective on why FDT generally achieves higher utility than CDT and EDT. Some of the stances he sketches out here are stronger than the assumptions needed to justify FDT, but should shed some light on why researchers at MIRI think FDT can help resolve a number of longstanding puzzles in the foundations of rational action.\n\n \nAnonymous: This is great stuff! I'm behind on reading loads of papers and books for my research, but this came across my path and hooked me, which speaks highly of how interesting is the content and the sense that this paper is making progress.\nMy general take is that you are right that these kinds of problems need to be specified in more detail. However, my guess is that once you do so, game theorists would get the right answer. Perhaps that's what FDT is: it's an approach to clarifying ambiguous games that leads to a formalism where people like Pearl and myself can use our standard approaches to get the right answer.\nI know there's a lot of inertia in the \"decision theory\" language, so probably it doesn't make sense to change. But if there were no such sunk costs, I would recommend a different framing. It's not that people's decision theories are wrong; it's that they are unable to correctly formalize problems in which there are high-performance predictors. You show how to do that, using the idea of intervening on (i.e., choosing between putative outputs of) the algorithm, rather than intervening on actions. 
Everything else follows from a sufficiently precise and non-contradictory statement of the decision problem.\nProbably the easiest move this line of work could make to ease this knee-jerk response of mine in defense of mainstream Bayesian game theory is to just be clear that CDT is not meant to capture mainstream Bayesian game theory. Rather, it is a model of one response to a class of problems not normally considered and for which existing approaches are ambiguous.\n\nNate Soares: I don't take this view myself. My view is more like: When you add accurate predictors to the Rube Goldberg machine that is the universe — which can in fact be done — the future of that universe can be determined by the behavior of the algorithm being predicted. The algorithm that we put in the \"thing-being-predicted\" slot can do significantly better if its reasoning on the subject of which actions to output respects the universe's downstream causal structure (which is something CDT and FDT do, but which EDT neglects), and it can do better again if its reasoning also respects the world's global logical structure (which is done by FDT alone).\nWe don't know exactly how to respect this wider class of dependencies in general yet, but we do know how to do it in many simple cases. While it agrees with modern decision theory and game theory in many simple situations, its prescriptions do seem to differ in non-trivial applications.\nThe main case where we can easily see that FDT is not just a better tool for formalizing game theorists' traditional intuitions is in prisoner's dilemmas. Game theory is pretty adamant about the fact that it's rational to defect in a one-shot PD, whereas two FDT agents facing off in a one-shot PD will cooperate.\nIn particular, classical game theory employs a \"common knowledge of shared rationality\" assumption which, when you look closely at it, cashes out more or less as \"common knowledge that all parties are using CDT and this axiom.\" Game theory where common knowledge of shared rationality is defined to mean \"common knowledge that all parties are using FDT and this axiom\" gives substantially different results, such as cooperation in one-shot PDs.\n\n\nA causal graph of Death in Damascus for CDT agents.2\nAnonymous: When I've read MIRI work on CDT in the past, it seemed to me to describe what standard game theorists mean by rationality. But at least in cases like Murder Lesion, I don't think it's fair to say that standard game theorists would prescribe CDT. It might be better to say that standard game theory doesn't consider these kinds of settings, and there are multiple ways of responding to them, CDT being one.\nBut I also suspect that many of these perfect prediction problems are internally inconsistent, and so it's irrelevant what CDT would prescribe, since the problem cannot arise. That is, it's not reasonable to say game theorists would recommend such-and-such in a certain problem, when the problem postulates that the actor always has incorrect expectations; \"all agents have correct expectations\" is a core property of most game-theoretic problems.\nThe Death in Damascus problem for CDT agents is a good example of this. 
In this problem, either Death will not find the CDT agent with certainty, or the CDT agent will never have correct beliefs about her own actions, or she will be unable to best respond to her own beliefs.\nSo the problem statement (\"Death finds the agent with certainty\") rules out typical assumptions of a rational actor: that it has rational expectations (including about its own behavior), and that it can choose the preferred action in response to its beliefs. The agent can only have correct beliefs if she believes that she has such-and-such belief about which city she'll end up in, but doesn't select the action that is the best response to that belief.\n\nNate: I contest that last claim. The trouble is in the phrase \"best response\", where you're using CDT's notion of what counts as the best response. According to FDT's notion of \"best response\", the best response to your beliefs in the Death in Damascus problem is to stay in Damascus, if we're assuming it costs nonzero utility to make the trek to Aleppo.\nIn order to define what the best response to a problem is, we normally invoke a notion of counterfactuals — what are your available responses, and what do you think follows from them? But the question of how to set up those counterfactuals is the very point under contention.\nSo I'll grant that if you define \"best response\" in terms of CDT's counterfactuals, then Death in Damascus rules out the typical assumptions of a rational actor. If you use FDT's counterfactuals (i.e., counterfactuals that respect the full range of subjunctive dependencies), however, then you get to keep all the usual assumptions of rational actors. We can say that FDT has the pre-theoretic advantage over CDT that it allows agents to exhibit sensible-seeming properties like these in a wider array of problems.\n\nAnonymous: The presentation of the Death in Damascus problem for CDT feels weird to me. CDT might also just turn up an error, since one of its assumptions is violated by the problem. Or it might cycle through beliefs forever… The expected utility calculation here seems to give some credence to the possibility of dodging death, which is assumed to be impossible, so it doesn't seem to me to correctly reason in a CDT way about where death will be.\nFor some reason I want to defend the CDT agent, and say that it's not fair to say they wouldn't realize that their strategy produces a contradiction (given the assumptions of rational belief and agency) in this problem.\n\nNate: There are a few different things to note here. First is that my inclination is always to evaluate CDT as an algorithm: if you built a machine that follows the CDT equation to the very letter, what would it do?\nThe answer here, as you've rightly noted above, is that the CDT equation isn't necessarily defined when the input is a problem like Death in Damascus, and I agree that simple definitions of CDT yield algorithms that would either enter an infinite loop or crash. The third alternative is that the agent notices the difficulty and engages in some sort of reflective-equilibrium-finding procedure; variants of CDT with this sort of patch were invented more or less independently by Joyce and Arntzenius to do exactly that. 
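The "infinite loop" behavior is easy to exhibit directly. Here is a toy best-response iteration with invented costs — a stand-in for an unpatched CDT deliberator, not for any of the published variants:

```python
# Death in Damascus for a naive CDT deliberator. Costs are made up: staying is
# free, the trek to Aleppo costs 1, and being found by Death costs 1000.
COST_TRAVEL = 1
COST_DEATH = 1000

def cdt_best_response(believed_choice: str) -> str:
    # CDT treats Death's location as fixed by its current belief about its own
    # choice, since Death's prediction is not caused by the act itself.
    death_location = "Damascus" if believed_choice == "stay" else "Aleppo"
    cost_stay = COST_DEATH if death_location == "Damascus" else 0
    cost_flee = COST_TRAVEL + (COST_DEATH if death_location == "Aleppo" else 0)
    return "stay" if cost_stay <= cost_flee else "flee"

belief = "stay"
for step in range(6):
    choice = cdt_best_response(belief)
    print(f"step {step}: believing '{belief}' -> choose '{choice}'")
    if choice == belief:
        break  # a stable choice would end deliberation, but none exists here
    belief = choice
# Output alternates stay -> flee -> stay -> ...: no pure choice is stable.
```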
In the paper, we discuss the variants that run an equilibrium-finding procedure and show that the equilibrium is still unsatisfactory; but we probably should have been more explicit about the fact that vanilla CDT either crashes or loops.\nSecond, I acknowledge that there's still a strong intuition that an agent should in some sense be able to reflect on their own instability, look at the problem statement, and say, \"Aha, I see what's going on here; Death will find me no matter what I choose; I'd better find some other way to make the decision.\" However, this sort of response is explicitly ruled out by the CDT equation: CDT says you must evaluate your actions as if they were subjunctively independent of everything that doesn't causally depend on them.\nIn other words, you're correct that CDT agents know intellectually that they cannot escape Death, but the CDT equation requires agents to imagine that they can, and to act on this basis.\nAnd, to be clear, it is not a strike against an algorithm for it to prescribe making actions by reasoning about impossible scenarios — any deterministic algorithm attempting to reason about what it \"should do\" must imagine some impossibilities, because a deterministic algorithm has to reason about the consequences of doing lots of different things, but is in fact only going to do one thing.\nThe question at hand is which impossibilities are the right ones to imagine, and the claim is that in scenarios with accurate predictors, CDT prescribes imagining the wrong impossibilities, including impossibilities where it escapes Death.\nOur human intuitions say that we should reflect on the problem statement and eventually realize that escaping Death is in some sense \"too impossible to consider\". But this directly contradicts the advice of CDT. Following this intuition requires us to make our beliefs obey a logical-but-not-causal constraint in the problem statement (\"Death is a perfect predictor\"), which FDT agents can do but CDT agents can't. On close examination, the \"shouldn't CDT realize this is wrong?\" intuition turns out to be an argument for FDT in another guise. (Indeed, pursuing this intuition is part of how FDT's predecessors were discovered!)\nThird, I'll note it's an important virtue in general for decision theories to be able to reason correctly in the face of apparent inconsistency. Consider the following simple example:\nAn agent has a choice between taking $1 or taking $100. There is an extraordinarily tiny but nonzero probability that a cosmic ray will spontaneously strike the agent's brain in such a way that they will be caused to do the opposite of whichever action they would normally do. If they learn that they have been struck by a cosmic ray, then they will also need to visit the emergency room to ensure there's no lasting brain damage, at a cost of $1000. Furthermore, the agent knows that they take the $100 if and only if they are hit by the cosmic ray.\nWhen faced with this problem, EDT agents reason: \"If I take the $100, then I must have been hit by the cosmic ray, which means that I lose $900 on net. I therefore prefer the $1.\" They then take the $1 (except in cases where they have been hit by the cosmic ray).\nSince this is just what the problem statement says — \"the agent knows that they take the $100 if and only if they are hit by the cosmic ray\" — the problem is perfectly consistent, as is EDT's response to the problem. 
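Written out as arithmetic, the conditional expected values EDT computes here look like this. (A minimal sketch: the ray's "extraordinarily tiny" unconditional probability never enters, because conditioning on the action pushes it to 0 or 1.)

```python
# EDT's bookkeeping in the cosmic-ray problem.
# The problem stipulates that the agent takes the $100 iff hit by the ray, so
# conditioning on "I take the $100" drives EDT's probability of the ray (and of
# the $1000 emergency-room visit) to 1, however tiny its unconditional value.
ev_take_100 = 100 - 1000   # conditional on taking the $100: ray, ER visit, -$900 net
ev_take_1 = 1              # conditional on taking the $1: no ray, no ER visit

print(ev_take_100, ev_take_1)  # -900 vs 1: EDT leaves the $100 on the table
```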
EDT only cares about correlation, not dependency; so EDT agents are perfectly happy to buy into self-fulfilling prophecies, even when it means turning their backs on large sums of money.\nWhat happens when we try to pull this trick on a CDT agent? She says, \"Like hell I only take the $100 if I'm hit by the cosmic ray!\" and grabs the $100 — thus revealing your problem statement to be inconsistent if the agent runs CDT as opposed to EDT.\nThe claim that \"the agent knows that they take the $100 if and only if they are hit by the cosmic ray\" contradicts the definition of CDT, which requires that agents of CDT refuse to leave free money on the table. As you may verify, FDT also renders the problem statement inconsistent, for similar reasons. The definition of EDT, on the other hand, is fully consistent with the problem as stated.\nThis means that if you try to put EDT into the above situation — controlling its behavior by telling it specific facts about itself — you will succeed; whereas if you try to put CDT into the above situation, you will fail, and the supposed facts will be revealed as lies. Whether or not the above problem statement is consistent depends on the algorithm that the agent runs, and the design of the algorithm controls the degree to which you can put that algorithm in bad situations.\nWe can think of this as a case of FDT and CDT succeeding in making a low-utility universe impossible, where EDT fails to make a low-utility universe impossible. The whole point of implementing a decision theory on a piece of hardware and running it is to make bad futures-of-our-universe impossible (or at least very unlikely). It's a feature of a decision theory, and not a bug, for there to be some problems where one tries to describe a low-utility state of affairs and the decision theory says, \"I'm sorry, but if you run me in that problem, your problem will be revealed as inconsistent\".3\nThis doesn't contradict anything you've said; I say it only to highlight how little we can conclude from noticing that an agent is reasoning about an inconsistent state of affairs. Reasoning about impossibilities is the mechanism by which decision theories produce actions that force the outcome to be desirable, so we can't conclude that an agent has been placed in an unfair situation from the fact that the agent is forced to reason about an impossibility.\n\nA causal graph of the XOR blackmail problem for CDT agents.4\n\nAnonymous: Something still seems fishy to me about decision problems that assume perfect predictors. If I'm being predicted with 100% accuracy in the XOR blackmail problem, then this means that I can induce a contradiction. If I follow FDT and CDT's recommendation of never paying, then I only receive a letter when I have termites. But if I pay, then I must be in the world where I don't have termites, as otherwise there is a contradiction.\nSo it seems that I am able to intervene on the world in a way that changes the state of termites for me now, given that I've received a letter. That is, the best strategy when starting is to never pay, but the best strategy given that I will receive a letter is to pay. The weirdness arises because I'm able to intervene on the algorithm, but we are conditioning on a fact of the world that depends on my algorithm.\nNot sure if this confusion makes sense to you. My gut says that these kinds of problems are often self-contradicting, at least when we assert 100% predictive performance. 
I would prefer to work it out from the ex ante situation, with specified probabilities of getting termites, and see if it is the case that changing one's strategy (at the algorithm level) is possible without changing the probability of termites to maintain consistency of the prediction claim.\n\nNate: First, I'll note that the problem goes through fine if the prediction is only correct 99% of the time. If the difference between "cost of termites" and "cost of paying" is sufficiently high, then the problem can probably go through even if the predictor is only correct 51% of the time.\nThat said, the point of this example is to draw attention to some of the issues you're raising here, and I think that these issues are just easier to think about when we assume 100% predictive accuracy.\nThe claim I dispute is this one: "That is, the best strategy when starting is to never pay, but the best strategy given that I will receive a letter is to pay." I claim that the best strategy given that you receive the letter is to not pay, because whether you pay has no effect on whether or not you have termites. Whenever you pay, no matter what you've learned, you're basically just burning $1000.\nThat said, you're completely right that these decision problems have some inconsistent branches, though I claim that this is true of any decision problem. In a deterministic universe with deterministic agents, all "possible actions" the agent "could take" save one are not going to be taken, and thus all "possibilities" save one are in fact inconsistent given a sufficiently full formal specification.\nI also completely endorse the claim that this set-up allows the predicted agent to induce a contradiction. Indeed, I claim that all decision-making power comes from the ability to induce contradictions: the whole reason to write an algorithm that loops over actions, constructs models of outcomes that would follow from those actions, and outputs the action corresponding to the highest-ranked outcome is so that it is contradictory for the algorithm to output a suboptimal action.\nThis is what computer programs are all about. You write the code in such a fashion that the only non-contradictory way for the electricity to flow through the transistors is in the way that makes your computer do your tax returns, or whatever.\nIn the case of the XOR blackmail problem, there are four "possible" worlds: LT (letter + termites), NT (noletter + termites), LN (letter + notermites), and NN (noletter + notermites).\nThe predictor, by dint of their accuracy, has put the universe into a state where the only consistent possibilities are either (LT, NN) or (LN, NT). You get to choose which of those pairs is consistent and which is contradictory. Clearly, you don't have control over the probability of termites vs. notermites, so you're only controlling whether you get the letter. Thus, the question is whether you're willing to pay $1000 to make sure that the letter shows up only in the worlds where you don't have termites.\nEven when you're holding the letter in your hands, I claim that you should not say "if I pay I will have no termites", because that is false — your action can't affect whether you have termites. You should instead say:\nI see two possibilities here. If my algorithm outputs pay, then in the XX% of worlds where I have termites I get no letter and lose $1M, and in the (100-XX)% of worlds where I do not have termites I lose $1k.
If instead my algorithm outputs refuse, then in the XX% of worlds where I have termites I get this letter but only lose $1M, and in the other worlds I lose nothing. The latter mixture is preferable, so I do not pay.\nYou'll notice that the agent in this line of reasoning is not updating on the fact that they're holding the letter. They're not saying, \"Given that I know that I received the letter and that the universe is consistent…\"\nOne way to think about this is to imagine the agent as not yet being sure whether or not they're in a contradictory universe. They act like this might be a world in which they don't have termites, and they received the letter; and in those worlds, by refusing to pay, they make the world they inhabit inconsistent — and thereby make this very scenario never-have-existed.\nAnd this is correct reasoning! For when the predictor makes their prediction, they'll visualize a scenario where the agent has no termites and receives the letter, in order to figure out what the agent would do. When the predictor observes that the agent would make that universe contradictory (by refusing to pay), they are bound (by their own commitments, and by their accuracy as a predictor) to send the letter only when you have termites.5\nYou'll never find yourself in a contradictory situation in the real world, but when an accurate predictor is trying to figure out what you'll do, they don't yet know which situations are contradictory. They'll therefore imagine you in situations that may or may not turn out to be contradictory (like \"letter + notermites\"). Whether or not you would force the contradiction in those cases determines how the predictor will behave towards you in fact.\nThe real world is never contradictory, but predictions about you can certainly place you in contradictory hypotheticals. In cases where you want to force a certain hypothetical world to imply a contradiction, you have to be the sort of person who would force the contradiction if given the opportunity.\nOr as I like to say — forcing the contradiction never works, but it always would've worked, which is sufficient.\n\nAnonymous: The FDT algorithm is best ex ante. But if what you care about is your utility in your own life flowing after you, and not that of other instantiations, then upon hearing this news about FDT you should do whatever is best for you given that information and your beliefs, as per CDT.\n\nA causal graph of Newcomb's problem for FDT agents.6\nNate: If you have the ability to commit yourself to future behaviors (and actually stick to that), it's clearly in your interest to commit now to behaving like FDT on all decision problems that begin in your future. I, for instance, have made this commitment myself. I've also made stronger commitments about decision problems that began in my past, but all CDT agents should agree in principle on problems that begin in the future.7\nI do believe that real-world people like you and me can actually follow FDT's prescriptions, even in cases where those prescriptions are quite counter-intuitive.\nConsider a variant of Newcomb's problem where both boxes are transparent, so that you can already see whether box B is full before choosing whether to two-box. In this case, EDT joins CDT in two-boxing, because one-boxing can no longer serve to give the agent good news about its fortunes. 
But FDT agents still one-box, for the same reason they one-box in Newcomb's original problem and cooperate in the prisoner's dilemma: they imagine their algorithm controlling all instances of their decision procedure, including the past copy in the mind of their predictor.\nNow, let's suppose that you're standing in front of two full boxes in the transparent Newcomb problem. You might say to yourself, \"I wish I could have committed beforehand, but now that the choice is before me, the tug of the extra $1000 is just too strong\", and then decide that you were not actually capable of making binding precommitments. This is fine; the normatively correct decision theory might not be something that all human beings have the willpower to follow in real life, just as the correct moral theory could turn out to be something that some people lack the will to follow.8\nThat said, I believe that I'm quite capable of just acting like I committed to act. I don't feel a need to go through any particular mental ritual in order to feel comfortable one-boxing. I can just decide to one-box and let the matter rest there.\nI want to be the kind of agent that sees two full boxes, so that I can walk away rich. I care more about doing what works, and about achieving practical real-world goals, than I care about the intuitiveness of my local decisions. And in this decision problem, FDT agents are the only agents that walk away rich.\nOne way of making sense of this kind of reasoning is that evolution graced me with a \"just do what you promised to do\" module. The same style of reasoning that allows me to actually follow through and one-box in Newcomb's problem is the one that allows me to cooperate in prisoner's dilemmas against myself — including dilemmas like \"should I stick to my New Year's resolution?\"9 I claim that it was only misguided CDT philosophers that argued (wrongly) that \"rational\" agents aren't allowed to use that evolution-given \"just follow through with your promises\" module.\n\nAnonymous: A final point: I don't know about counterlogicals, but a theory of functional similarity would seem to depend on the details of the algorithms.\nE.g., we could have a model where their output is stochastic, but some parameters of that process are the same (such as expected value), and the action is stochastically drawn from some distribution with those parameter values. We could have a version of that, but where the parameter values depend on private information picked up since the algorithms split, in which case each agent would have to model the distribution of private info the other might have.\nThat seems pretty general; does that work? Is there a class of functional similarity that can not be expressed using that formulation?\n\nNate: As long as the underlying distribution can be an arbitrary Turing machine, I think that's sufficiently general.\nThere are actually a few non-obvious technical hurdles here; namely, if agent A is basing their beliefs off of their model of agent B, who is basing their beliefs off of a model of agent A, then you can get some strange loops.\nConsider for example the matching pennies problem: agent A and agent B will each place a penny on a table; agent A wants either HH or TT, and agent B wants either HT or TH. 
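A minimal sketch of the resulting strange loop, with an arbitrary starting guess — this is only the naive regress, not the machinery discussed next:

```python
# Matching pennies with two naive deterministic best-responders modeling each other.

def a_best_response(b_play: str) -> str:
    return b_play                          # A wants the pennies to match

def b_best_response(a_play: str) -> str:
    return "T" if a_play == "H" else "H"   # B wants them to mismatch

guess_about_b = "H"  # A's initial (arbitrary) model of B
for depth in range(6):
    a_play = a_best_response(guess_about_b)
    guess_about_b = b_best_response(a_play)  # B, modeling A to this depth, responds
    print(f"depth {depth}: A plays {a_play}, so B would play {guess_about_b}")
# The modeled plays flip forever: no deterministic prediction of the other
# player is stable.
```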
It's non-trivial to ensure that both agents develop stable accurate beliefs in games like this (as opposed to, e.g., diving into infinite loops).\nThe technical solution to this is reflective oracle machines, a class of probabilistic Turing machines with access to an oracle that can probabilistically answer questions about any other machine in the class (with access to the same oracle).\nThe paper \"Reflective Oracles: A Foundation for Classical Game Theory\" shows how to do this and shows that the relevant fixed points always exist. (And furthermore, in cases that can be represented in classical game theory, the fixed points always correspond to the mixed-strategy Nash equilibria.)\nThis more or less lets us start from a place of saying \"how do agents with probabilistic information about each other's source code come to stable beliefs about each other?\" and gets us to the \"common knowledge of rationality\" axiom from game theory.10 One can also see it as a justification for that axiom, or as a generalization of that axiom that works even in cases where the lines between agent and environment get blurry, or as a hint at what we should do in cases where one agent has significantly more computational resources than the other, etc.\nBut, yes, when we study these kinds of problems concretely at MIRI, we tend to use models where each agent models the other as a probabilistic Turing machine, which seems roughly in line with what you're suggesting here.\n \nCDT prescribes defection in this dilemma, on the grounds that one's action cannot cause the other agent to cooperate. FDT outperforms CDT in Newcomblike dilemmas like these, while also outperforming EDT in other dilemmas, such as the smoking lesion problem and XOR blackmail.The agent's predisposition determines whether they will flee to Aleppo or stay in Damascus, and also determines Death's prediction about their decision. This allows Death to inescapably pursue the agent, making flight pointless; but CDT agents can't incorporate this fact into their decision-making.There are some fairly natural ways to cash out Murder Lesion where CDT accepts the problem and FDT forces a contradiction, but we decided not to delve into that interpretation in the paper.\nTangentially, I'll note that one of the most common defenses of CDT similarly turns on the idea that certain dilemmas are \"unfair\" to CDT. Compare, for example, David Lewis' \"Why Ain'cha Rich?\"\nIt's obviously possible to define decision problems that are \"unfair\" in the sense that they just reward or punish agents for having a certain decision theory. We can imagine a dilemma where a predictor simply guesses whether you're implementing FDT, and gives you $1,000,000 if so. Since we can construct symmetric dilemmas that instead reward CDT agents, EDT agents, etc., these dilemmas aren't very interesting, and can't help us choose between theories.\nDilemmas like Newcomb's problem and Death in Damascus, however, don't evaluate agents based on their decision theories. They evaluate agents based on their actions, and the task of the decision theory is to determine which action is best. 
If it's unfair to criticize CDT for making the wrong choice in problems like this, then it's hard to see on what grounds we can criticize any agent for making a wrong choice in any problem, since one can always claim that one is merely at the mercy of one's decision theory.Our paper describes the XOR blackmail problem like so:\nAn agent has been alerted to a rumor that her house has a terrible termite infestation, which would cost her $1,000,000 in damages. She doesn't know whether this rumor is true. A greedy and accurate predictor with a strong reputation for honesty has learned whether or not it's true, and drafts a letter:\n\"I know whether or not you have termites, and I have sent you this letter iff exactly one of the following is true: (i) the rumor is false, and you are going to pay me $1,000 upon receiving this letter; or (ii) the rumor is true, and you will not pay me upon receiving this letter.\"\nThe predictor then predicts what the agent would do upon receiving the letter, and sends the agent the letter iff exactly one of (i) or (ii) is true. Thus, the claim made by the letter is true. Assume the agent receives the letter. Should she pay up?\nIn this scenario, EDT pays the blackmailer, while CDT and FDT refuse to pay. See the \"Cheating Death in Damascus\" paper for more details.Ben Levinstein notes that this can be compared to backward induction in game theory with common knowledge of rationality. You suppose you're at some final decision node which you only would have gotten to (as it turns out) if the players weren't actually rational to begin with.FDT agents intervene on their decision function, \"FDT(P,G)\". The CDT version replaces this node with \"Predisposition\" and instead intervenes on \"Act\".Specifically, the CDT-endorsed response here is: \"Well, I'll commit to acting like an FDT agent on future problems, but in one-shot prisoner's dilemmas that began in my past, I'll still defect against copies of myself\".\nThe problem with this response is that it can cost you arbitrary amounts of utility, provided a clever blackmailer wishes to take advantage. Consider the retrocausal blackmail dilemma in \"Toward Idealized Decision Theory\":\nThere is a wealthy intelligent system and an honest AI researcher with access to the agent's original source code. The researcher may deploy a virus that will cause $150 million each in damages to both the AI system and the researcher, and which may only be deactivated if the agent pays the researcher $100 million. The researcher is risk-averse and only deploys the virus upon becoming confident that the agent will pay up. The agent knows the situation and has an opportunity to self-modify after the researcher acquires its original source code but before the researcher decides whether or not to deploy the virus. (The researcher knows this, and has to factor this into their prediction.)\nCDT pays the retrocausal blackmailer, even if it has the opportunity to precommit to do otherwise. FDT (which in any case has no need for precommitment mechanisms) refuses to pay. I cite the intuitive undesirability of this outcome to argue that one should follow FDT in full generality, as opposed to following CDT's prescription that one should only behave in FDT-like ways in future dilemmas.\nThe argument above must be made from a pre-theoretic vantage point, because CDT is internally consistent. 
There is no argument one could give to a true CDT agent that would cause it to want to use anything other than CDT in decision problems that began in its past.\nIf examples like retrocausal blackmail have force (over and above the force of other arguments for FDT), it is because humans aren't genuine CDT agents. We may come to endorse CDT based on its theoretical and practical virtues, but the case for CDT is defeasible if we discover sufficiently serious flaws in CDT, where \"flaws\" are evaluated relative to more elementary intuitions about which actions are good or bad. FDT's advantages over CDT and EDT — properties like its greater theoretical simplicity and generality, and its achievement of greater utility in standard dilemmas — carry intuitive weight from a position of uncertainty about which decision theory is correct.In principle, it could even turn out that following the prescriptions of the correct decision theory in full generality is humanly impossible. There's no law of logic saying that the normatively correct decision-making behaviors have to be compatible with arbitrary brain designs (including human brain design). I wouldn't bet on this, but in such a case learning the correct theory would still have practical import, since we could still build AI systems to follow the normatively correct theory.A New Year's resolution that requires me to repeatedly follow through on a promise that I care about in the long run, but would prefer to ignore in the moment, can be modeled as a one-shot twin prisoner's dilemma. In this case, the dilemma is temporally extended, and my \"twins\" are my own future selves, who I know reason more or less the same way I do.\nIt's conceivable that I could go off my diet today (\"defect\") and have my future selves pick up the slack for me and stick to the diet (\"cooperate\"), but in practice if I'm the kind of agent who isn't willing today to sacrifice short-term comfort for long-term well-being, then I presumably won't be that kind of agent tomorrow either, or the day after.\nSeeing that this is so, and lacking a way to force themselves or their future selves to follow through, CDT agents despair of promise-keeping and abandon their resolutions. FDT agents, seeing the same set of facts, do just the opposite: they resolve to cooperate today, knowing that their future selves will reason symmetrically and do the same.The paper above shows how to use reflective oracles with CDT as opposed to FDT, because (a) one battle at a time and (b) we don't yet have a generic algorithm for computing logical counterfactuals, but we do have a generic algorithm for doing CDT-type reasoning.The post Decisions are for making bad outcomes inconsistent appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Decisions are for making bad outcomes inconsistent", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=22", "id": "31eefdb9837d80118505badd3e444df0"} {"text": "April 2017 Newsletter\n\n\n\n\n\n\n\n\nOur newest publication, \"Cheating Death in Damascus,\" makes the case for functional decision theory, our general framework for thinking about rational choice and counterfactual reasoning.\nIn other news, our research team is expanding! 
Sam Eisenstat and Marcello Herreshoff, both previously at Google, join MIRI this month.\nResearch updates\n\nNew at IAFF: \"Formal Open Problem in Decision Theory\"\nNew at AI Impacts: \"Trends in Algorithmic Progress\"; \"Progress in General-Purpose Factoring\"\nWe ran a weekend workshop on agent foundations and AI safety.\n\nGeneral updates\n\nOur annual review covers our research progress, fundraiser outcomes, and other take-aways from 2016.\nWe attended the Colloquium on Catastrophic and Existential Risk.\nNate Soares weighs in on the Future of Life Institute's Risk Principle.\n\"Elon Musk's Billion-Dollar Crusade to Stop the AI Apocalypse\" features quotes from Eliezer Yudkowsky, Demis Hassabis, Mark Zuckerberg, Peter Thiel, Stuart Russell, and others.\n\n\nNews and links\n\nThe Open Philanthropy Project and OpenAI begin a partnership: Holden Karnofsky joins Elon Musk and Sam Altman on OpenAI's Board of Directors, and Open Philanthropy contributes $30M to OpenAI's research program.\nOpen Philanthropy has also awarded $2M to the Future of Humanity Institute.\nModeling Agents with Probabilistic Programs: a new book by Owain Evans, Andreas Stuhlmüller, John Salvatier, and Daniel Filan.\nNew from OpenAI: \"Evolution Strategies as a Scalable Alternative to Reinforcement Learning\"; \"Learning to Communicate\"; \"One-Shot Imitation Learning\"; and from Paul Christiano, \"Benign Model-Free RL.\"\nChris Olah and Shan Carter discuss research debt as an obstacle to clear thinking and the transmission of ideas, and propose Distill as a solution.\nAndrew Trask proposes encrypting deep learning algorithms during training.\nRoman Yampolskiy seeks submissions for a book on AI safety and security.\n80,000 Hours has updated their problem profile on positively shaping the development of AI, a solid introduction to AI risk — which 80K now ranks as the most urgent problem in the world. See also 80K's write-up on in-demand skill sets at effective altruism oragnizations.\n\n\n\n\n\n\n\nThe post April 2017 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "April 2017 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=22", "id": "bafcbfa23143696968b59cf081cf3504"} {"text": "Two new researchers join MIRI\n\nMIRI's research team is growing! I'm happy to announce that we've hired two new research fellows to contribute to our work on AI alignment: Sam Eisenstat and Marcello Herreshoff, both from Google.\n \nSam Eisenstat studied pure mathematics at the University of Waterloo, where he carried out research in mathematical logic. His previous work was on the automatic construction of deep learning models at Google.\nSam's research focus is on questions relating to the foundations of reasoning and agency, and he is especially interested in exploring analogies between current theories of logical uncertainty and Bayesian reasoning. He has also done work on decision theory and counterfactuals. His past work with MIRI includes \"Asymptotic Decision Theory,\" \"A Limit-Computable, Self-Reflective Distribution,\" and \"A Counterexample to an Informal Conjecture on Proof Length and Logical Counterfactuals.\"\n  \n Marcello Herreshoff studied at Stanford, receiving a B.S. in Mathematics with Honors and getting two honorable mentions in the Putnam Competition, the world's most highly regarded university-level math competition. 
Marcello then spent five years as a software engineer at Google, gaining a background in machine learning.\nMarcello is one of MIRI's earliest research collaborators, and attended our very first research workshop alongside Eliezer Yudkowsky, Paul Christiano, and Mihály Bárász. Marcello has worked with us in the past to help produce results such as \"Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem,\" \"Definability of Truth in Probabilistic Logic,\" and \"Tiling Agents for Self-Modifying AI.\" His research interests include logical uncertainty and the design of reflective agents.\n \nSam and Marcello will be starting with us in the first two weeks of April. This marks the beginning of our first wave of new research fellowships since 2015, though we more recently added Ryan Carey to the team on an assistant research fellowship (in mid-2016).\nWe have additional plans to expand our research team in the coming months, and will soon be hiring for a more diverse set of technical roles at MIRI — details forthcoming!\nThe post Two new researchers join MIRI appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Two new researchers join MIRI", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=22", "id": "e7d575f65a8b898314c03efd7e2166d4"} {"text": "2016 in review\n\nIt's time again for my annual review of MIRI's activities.1 In this post I'll provide a summary of what we did in 2016, see how our activities compare to our previously stated goals and predictions, and reflect on how our strategy this past year fits into our mission as an organization. We'll be following this post up in April with a strategic update for 2017.\nAfter doubling the size of the research team in 2015,2 we slowed our growth in 2016 and focused on integrating the new additions into our team, making research progress, and writing up a backlog of existing results.\n2016 was a big year for us on the research front, with our new researchers making some of the most notable contributions. Our biggest news was Scott Garrabrant's logical inductors framework, which represents by a significant margin our largest progress to date on the problem of logical uncertainty. We additionally released \"Alignment for Advanced Machine Learning Systems\" (AAMLS), a new technical agenda spearheaded by Jessica Taylor.\nWe also spent this last year engaging more heavily with the wider AI community, e.g., through the month-long Colloquium Series on Robust and Beneficial Artificial Intelligence we co-ran with the Future of Humanity Institute, and through talks and participation in panels at many events through the year.\n\n \n2016 Research Progress\nWe saw significant progress this year in our agent foundations agenda, including Scott Garrabrant's logical inductor formalism (which represents possibly our most significant technical result to date) and related developments in Vingean reflection. At the same time, we saw relatively little progress in error tolerance and value specification, which we had planned to put more focus on in 2016. Below, I'll note the highlights from each of our research areas:\nLogical Uncertainty and Naturalized Induction\n\n2015 progress: sizable. (Predicted: modest.)\n2016 progress: sizable. (Predicted: sizable.)\n\nWe saw a large body of results related to logical induction. 
Logical induction developed out of earlier work led by Scott Garrabrant in late 2015 (written up in April 2016) that served to divide the problem of logical uncertainty into two subproblems. Scott demonstrated that both problems could be solved at once using an algorithm that satisfies a highly general \"logical induction criterion.\"\nThis criterion provides a simple way of understanding idealized reasoning under resource limitations. In Andrew Critch's words, logical induction is \"a financial solution to the computer science problem of metamathematics\": a procedure that assigns reasonable probabilities to arbitrary (empirical, logical, mathematical, self-referential, etc.) sentences in a way that outpaces deduction, explained by analogy to inexploitable stock markets.\nOur other main 2016 work in this domain is an independent line of research spearheaded by MIRI research associate Vanessa Kosoy, \"Optimal Polynomial-Time Estimators: A Bayesian Notion of Approximation Algorithm.\" Vanessa approaches the problem of logical uncertainty from a more complexity-theoretic angle of attack than logical induction does, providing a formalism for defining optimal feasible approximations of computationally infeasible objects that retain a number of relevant properties of those objects.\nDecision Theory\n\n2015 progress: modest. (Predicted: modest.)\n2016 progress: modest. (Predicted: modest.)\n\nWe continue to see a steady stream of interesting results related to the problem of defining logical counterfactuals. In 2016, we began applying the logical inductor framework to decision-theoretic problems, working with the idea of universal inductors. Andrew Critch also developed a game-theoretic method for resolving policy disagreements that outperforms standard compromise approaches and also allows for negotiators to disagree on factual questions.\nWe have a backlog of many results to write up in this space. Our newest, \"Cheating Death in Damascus,\" summarizes the case for functional decision theory, a theory that systematically outperforms the conventional academic views (causal and evidential decision theory) in decision theory and game theory. This is the basic framework we use for studying logical counterfactuals and related open problems, and is a good introductory paper for understanding our other work in this space.\nFor an overview of our more recent work on this topic, see Tsvi Benson-Tilsen's decision theory index on the research forum.\nVingean Reflection\n\n2015 progress: modest. (Predicted: modest.)\n2016 progress: modest-to-strong. (Predicted: limited.)\n\nOur main results in reflective reasoning last year concerned self-trust in logical inductors. After seeing no major advances in Vingean reflection for many years—the last big step forward was perhaps Benya Fallenstein's model polymorphism proposal in late 2012—we had planned to de-prioritize work on this problem in 2016, on the assumption that other tools were needed before we could make much more headway. However, in 2016 logical induction turned out to be surprisingly useful for solving a number of outstanding tiling problems.\nAs described in \"Logical Induction,\" logical inductors provide a simple demonstration of self-referential reasoning that is highly general and accurate, is free of paradox, and assigns reasonable credence to the reasoner's own beliefs. 
This provides some evidence that the problem of logical uncertainty itself is relatively central to a number of puzzles concerning the theoretical foundations of intelligence.\nError Tolerance\n\n2015 progress: limited. (Predicted: modest.)\n2016 progress: limited. (Predicted: modest.)\n\n2016 saw the release of our \"Alignment for Advanced ML Systems\" research agenda, with a focus on error tolerance and value specification. Less progress occurred in these areas than expected, partly because investigations here are still very preliminary. We also spent less time on research in mid-to-late 2016 overall than we had planned, in part because we spent a lot of time writing up our new results and research proposals.\nNate noted in our October AMA that he considers this time investment in drafting write-ups one of our main 2016 errors, and we plan to spend less time on paper-writing in 2017.\nOur 2016 work on error tolerance included \"Two Problems with Causal-Counterfactual Utility Indifference\" and some time we spent discussing and critiquing Dylan Hadfield-Menell's proposal of corrigibility via CIRL. We plan to share our thoughts on the latter line of research more widely later this year.\nValue Specification\n\n2015 progress: limited. (Predicted: limited.)\n2016 progress: weak-to-modest. (Predicted: modest.)\n\nAlthough we planned to put more focus on value specification last year, we ended up making less progress than expected. Examples of our work in this area include Jessica Taylor and Ryan Carey's posts on online learning, and Jessica's analysis of how errors might propagate within a system of humans consulting one another.\n \nWe're extremely pleased with our progress on the agent foundations agenda over the last year, and we're hoping to see more progress cascading from the new set of tools we've developed. At the same time, it remains to be seen how tractable the new set of problems we're tackling in the AAMLS agenda are.\n \n2016 Research Support Activities\nIn September, we brought on Ryan Carey to support Jessica's work on the AAMLS agenda as an assistant research fellow.3 Our assistant research fellowship program seems to be working out well; Ryan has been a lot of help to us in working with Jessica to write up results (e.g., \"Bias-Detecting Online Learners\"), along with setting up TensorFlow tools for a project with Patrick LaVictoire.\nWe'll likely be expanding the program this year and bringing on additional assistant research fellows, in addition to a slate of new research fellows.\nFocusing on other activities that relate relatively directly to our technical research program, including collaborating and syncing up with researchers in industry and academia, in 2016 we:\n\nRan an experimental month-long Colloquium Series for Robust and Beneficial Artificial Intelligence (CSRBAI) featuring three weekend workshops and eighteen talks (by Stuart Russell, Tom Dietterich, Francesca Rossi, Bart Selman, Paul Christiano, Jessica Taylor, and others). See our retrospective here, and a full list of videos here.\nHosted six non-CSRBAI research workshops (three on our agent foundations agenda, three on AAMLS) and co-ran the MIRI Summer Fellows program. 
We also supported dozens of MIRIx events, hosted a grad student seminar at our offices for UC Berkeley students, and taught at SPARC.\nHelped put together OpenAI Gym's safety environments and the Center for Human-Compatible AI's annotated AI safety reading list, in collaboration with a number of researchers from other institutions.\nGave a half dozen talks at non-MIRI events:\n\nEliezer Yudkowsky on \"AI Alignment: Why It's Hard, and Where to Start\" at Stanford University, where he was the Symbolic Systems Distinguished Speaker of 2016, and at the NYU \"Ethics of AI\" conference (details);\nJessica Taylor on \"Using Machine Learning to Address AI Risk\" at Effective Altruism Global;\nAndrew Critch on logical induction at EA Global (video), and at Princeton, Harvard, and MIT;\nAndrew on superintelligence as a top priority at the Society for Risk Analysis (slides) and at ENVISION (video), where he also ran a workshop on logical induction;\nand Nate Soares on logical induction at EAGxOxford.\n\n\nPublished two papers in a top AI conference, UAI: \"A Formal Solution to the Grain of Truth Problem\" (co-authored with Jan Leike, now at DeepMind) and \"Safely Interruptible Agents\" (co-authored by Laurent Orseau of DeepMind and a MIRI research associate, Stuart Armstrong of the Future of Humanity Institute). We also presented papers at AGI and at AAAI and IJCAI workshops.\nSpoke on panels at EA Global, ENVISION, AAAI (details), and EAGxOxford (with Demis Hassabis, Toby Ord, and two new DeepMind recruits: Viktoriya Krakovna and Murray Shanahan). Nate also co-moderated an AI Safety Q&A at the OpenAI unconference.\nAttended other academic events, including NIPS, ICML, the Workshop for Safety and Control for Artificial Intelligence, and the Future of AI Symposium at NYU organized by Yann LeCun.\n\nOn the whole, our research team growth in 2016 was somewhat slower than expected. We're still accepting applicants for our type theorist position (and for other research roles at MIRI, via our Get Involved page), but we expect to leave that role unfilled for at least the next 6 months while we focus on onboarding additional core researchers.4\n \n2016 General Activities\nAlso in 2016, we:\n\nHired new administrative staff: development specialist Colm Ó Riain, office manager Aaron Silverbook, and staff writer Matthew Graves. I also took on a leadership role as MIRI's COO.\nContributed to the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. I co-chaired the committee on the Safety and Beneficence of Artificial General Intelligence and Artificial Superintelligence, and moderated the workshop on the same subject at the IEEE Symposium on Ethics of Autonomous Systems.\nHad our forecasting research cited in the White House's future of AI report, and wrote up a public-facing explanation of our strategic view for the White House's request for information.\nAnswered questions at an \"Ask MIRI Anything\" AMA, considered parallels between AlphaGo and general AI, and had a back-and-forth with economist Bryan Caplan (1, 2, 3).\nReceived press coverage in a Scientific American blog (John Horgan interviews Eliezer Yudkowsky), OZY, Tech Republic, Harvard Business Review, Gizmodo, the Washington Post, CNET (1, 2), and BuzzFeed News.\n\n \n2016 Fundraising\n2016 was a strong year in MIRI's fundraising efforts. We raised a total of $2,285,200, a 44% increase on the $1,584,109 raised in 2015. 
This increase was largely driven by:\n\nA general grant of $500,000 from the Open Philanthropy Project.5\nA donation of $300,000 from Blake Borgeson.\nContributions of $93,548 from Raising for Effective Giving.6\nA research grant of $83,309 from the Future of Life Institute.7\nOur community's strong turnout during our Fall Fundraiser—at $595,947, our second-largest fundraiser to date.\nA gratifying show of support from supporters at the end of the year, despite our not running a Winter Fundraiser.\n\nAssuming we can sustain this funding level going forward, this represents a preliminary fulfillment of our primary fundraising goal from January 2016:\nOur next big push will be to close the gap between our new budget and our annual revenue. In order to sustain our current growth plans — which are aimed at expanding to a team of approximately ten full-time researchers — we'll need to begin consistently taking in close to $2M per year by mid-2017.\nAs the graph below indicates, 2016 continued a positive trend of growth in our fundraising efforts.\n\nDrawing conclusions from these year-by-year comparisons can be a little tricky. MIRI underwent significant organizational changes over this time span, particularly in 2013. We also switched to accrual-based accounting in 2014, which also complicates comparisons with previous years.\nHowever, it is possible to highlight certain aspects of our progress in 2016:\n\nThe Fall Fundraiser: For the first time, we held a single fundraiser in 2016 instead of our \"traditional\" summer and winter fundraisers—from mid-September to October 31. While we didn't hit our initial target of $750k, we hoped that our funders were waiting to give later in the year and would make up the shortfall at the end of year. We were pleased that they came through in large numbers at the end of 2016, some possibly motivated by public posts by members of the community.8 All told, we received more contributions in December 2016 (~$430,000) than in the same month in either of the previous two years, when we actively ran Winter Fundraisers, an interesting data point for us. The following charts throw additional light on our supporters' response to the fall fundraiser:\n\n\nNote that if we remove the Open Philanthropy Project's grant from the Pre-Fall data, the ratios across the 4 time segments all look pretty similar. Overall, this data is suggestive that, rather than a group of new funders coming in at the last moment, a segment of our existing funders chose to wait until the end of the year to donate.\nIn 2016 the remarkable support we received from returning funders was particularly noteworthy, with 89% retention (in terms of dollars) from 2015 funders. To put this in a broader context, the average gift retention rate across a representative segment of the US philanthropic space over the last 5 years has been 46%.\nThe number of unique funders to MIRI rose 16% in 2016—from 491 to 571—continuing a general increasing trend. 2014 is anomalously high on this graph due to the community's active participation in our memorable SVGives campaign.9\n\nInternational support continues to make up about 20% of contributions. 
Unlike in the US, where increases were driven mainly by new institutional support (the Open Philanthropy Project), international support growth was driven by individuals across Europe (notably Scandinavia and the UK), Australia, and Canada.\n\nUse of employer matching programs increased by 17% year-on-year, with contributions of over $180,000 received through corporate matching programs in 2016, our highest to date. There are early signs of this growth continuing through 2017.\nAn analysis of contributions made from small, mid-sized, large, and very large funder segments shows contributions from all four segments increased proportionally from 2015:\n\n\nDue to the fact that we raised more than $2 million in 2016, we are now required by California law to prepare an annual financial statement audited by an independent certified public accountant (CPA). That report, like our financial reports of past years, will be made available by the end of September, on our transparency and financials page.\n \nGoing Forward\nAs of July 2016, we had the following outstanding goals from mid-2015:\n\nAccelerated growth: \"expand to a roughly ten-person core research team.\" (source)\nType theory in type theory project: \"hire one or two type theorists to work on developing relevant tools full-time.\" (source)\nIndependent review: \"We're also looking into options for directly soliciting public feedback from independent researchers regarding our research agenda and early results.\" (source)\n\nWe currently have seven research fellows and assistant fellows, and are planning to hire several more in the very near future. We expect to hit our ten-fellow goal in the next 3–4 months, and to continue to grow the research team later this year. As noted above, we're delaying moving forward on a type theorist hire.\nThe Open Philanthropy Project is currently reviewing our research agenda as part of their process of evaluating us for future grants. They released an initial big-picture organizational review of MIRI in September, accompanied by reviews of several recent MIRI papers (which Nate responded to here). These reviews were generally quite critical of our work, with Open Phil expressing a number of reservations about our agent foundations agenda and our technical progress to date. We are optimistic, however, that we will be able to better make our case to Open Phil in discussions going forward, and generally converge more in our views of what open problems deserve the most attention.\nIn our August 2016 strategic update, Nate outlined our other organizational priorities and plans:\n\nTechnical research: continue work on our agent foundations agenda while kicking off work on AAMLS.\nAGI alignment overviews: \"Eliezer Yudkowsky and I will be splitting our time between working on these problems and doing expository writing. Eliezer is writing about alignment theory, while I'll be writing about MIRI strategy and forecasting questions.\"\nAcademic outreach events: \"To help promote our approach and grow the field, we intend to host more workshops aimed at diverse academic audiences. 
We'll be hosting a machine learning workshop in the near future, and might run more events like CSRBAI going forward.\"\nPaper-writing: \"We also have a backlog of past technical results to write up, which we expect to be valuable for engaging more researchers in computer science, economics, mathematical logic, decision theory, and other areas.\"\n\nAll of these are still priorities for us, though we now consider 5 somewhat more important (and 6 and 7 less important). We've since run three ML workshops, and have made more headway on our AAMLS research agenda. We now have a large amount of content prepared for our AGI alignment overviews, and are beginning a (likely rather long) editing process. We've also released \"Logical Induction\" and have a number of other papers in the pipeline.\nWe'll be providing more details on how our priorities have changed since August in a strategic update post next month. As in past years, object-level technical research on the AI alignment problem will continue to be our top priority, although we'll be undergoing a medium-sized shift in our research priorities and outreach plans.10\n \nSee our previous reviews: 2015, 2014, 2013.From 2015 in review: \"Patrick LaVictoire joined in March, Jessica Taylor in August, Andrew Critch in September, and Scott Garrabrant in December. With Nate transitioning to a non-research role, overall we grew from a three-person research team (Eliezer, Benya, and Nate) to a six-person team.\"As I noted in our AMA: \"At MIRI, research fellow is a full-time permanent position. A decent analogy in academia might be that research fellows are to assistant research fellows as full-time faculty are to post-docs. Assistant research fellowships are intended to be a more junior position with a fixed 1–2 year term.\"In the interim, our research intern Jack Gallagher has continued to make useful contributions in this domain.Note that numbers in this section might not exactly match previously published estimates, since small corrections are often made to contributions data. Note also that these numbers do not include in-kind donations.This figure only counts direct contributions through REG to MIRI. REG/EAF's support for MIRI is closer to $150,000 when accounting for contributions made through EAF, many made on REG's advice.We were also awarded a $75,000 grant from the Center for Long-Term Cybersecurity to pursue a corrigibility project with Stuart Russell and a new UC Berkeley postdoc, but we weren't able to fill the intended postdoc position in the relevant timeframe and the project was canceled. 
Stuart Russell subsequently received a large grant from the Open Philanthropy Project to launch a new academic research institute for studying corrigibility and other AI safety issues, the Center for Human-Compatible AI.We received timely donor recommendations from investment analyst Ben Hoskin, Future of Humanity Institute researcher Owen Cotton-Barratt, and Daniel Dewey and Nick Beckstead of the Open Philanthropy Project (echoed by 80,000 Hours).Our 45% retention of unique funders from 2015 is very much in line with funder retention across the US philanthropic space, which combined with the previous point, suggests returning MIRI funders were significantly more supportive than most.My thanks to Rob Bensinger, Colm Ó Riain, and Matthew Graves for their substantial contributions to this post.The post 2016 in review appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "2016 in review", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=23", "id": "fd28fa3d5009dab9d2a3e51633620174"} {"text": "New paper: \"Cheating Death in Damascus\"\n\nMIRI Executive Director Nate Soares and Rutgers/UIUC decision theorist Ben Levinstein have a new paper out introducing functional decision theory (FDT), MIRI's proposal for a general-purpose decision theory.\nThe paper, titled \"Cheating Death in Damascus,\" considers a wide range of decision problems. In every case, Soares and Levinstein show that FDT outperforms all earlier theories in utility gained. The abstract reads:\nEvidential and Causal Decision Theory are the leading contenders as theories of rational action, but both face fatal counterexamples. We present some new counterexamples, including one in which the optimal action is causally dominated. We also present a novel decision theory, Functional Decision Theory (FDT), which simultaneously solves both sets of counterexamples.\nInstead of considering which physical action of theirs would give rise to the best outcomes, FDT agents consider which output of their decision function would give rise to the best outcome. This theory relies on a notion of subjunctive dependence, where multiple implementations of the same mathematical function are considered (even counterfactually) to have identical results for logical rather than causal reasons. Taking these subjunctive dependencies into account allows FDT agents to outperform CDT and EDT agents in, e.g., the presence of accurate predictors. While not necessary for considering classic decision theory problems, we note that a full specification of FDT will require a non-trivial theory of logical counterfactuals and algorithmic similarity.\n\"Death in Damascus\" is a standard decision-theoretic dilemma. In it, a trustworthy predictor (Death) promises to find you and bring your demise tomorrow, whether you stay in Damascus or flee to Aleppo. Fleeing to Aleppo is costly and provides no benefit, since Death, having predicted your future location, will then simply come for you in Aleppo instead of Damascus.\nIn spite of this, causal decision theory often recommends fleeing to Aleppo — for much the same reason it recommends defecting in the one-shot twin prisoner's dilemma and two-boxing in Newcomb's problem. CDT agents reason that Death has already made its prediction, and that switching cities therefore can't cause Death to learn your new location. 
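To make the contrast concrete, here is a toy sketch of the two decision rules in Death in Damascus. The utilities are made-up illustrations, and this is not the formalism used in the paper:

```python
# Toy sketch of CDT vs. FDT in Death in Damascus.  Illustrative assumptions:
# dying costs 1000 utility, and the trip to Aleppo costs 1.
UTILITY = {
    ("stay", "damascus"): -1000,  # Death waits in Damascus and you stay: you die
    ("stay", "aleppo"):       0,  # Death waits in Aleppo and you stay: you live
    ("flee", "damascus"):    -1,  # you flee, Death is in Damascus: you live, minus travel
    ("flee", "aleppo"):   -1001,  # you flee straight into Death: you die, minus travel
}

def fdt_choice():
    # FDT treats Death's prediction as another instance of the agent's own
    # decision function: whatever this function outputs, that's where Death is.
    def subjunctive_outcome(action):
        deaths_city = "damascus" if action == "stay" else "aleppo"
        return UTILITY[(action, deaths_city)]
    return max(["stay", "flee"], key=subjunctive_outcome)

def cdt_choice(prob_death_in_damascus):
    # CDT holds the already-made prediction fixed, as a probability distribution
    # over where Death is, and best-responds to that distribution.
    def expected_utility(action):
        p = prob_death_in_damascus
        return p * UTILITY[(action, "damascus")] + (1 - p) * UTILITY[(action, "aleppo")]
    return max(["stay", "flee"], key=expected_utility)

print(fdt_choice())     # "stay": Death meets you either way, so fleeing only adds cost
print(cdt_choice(0.9))  # "flee": CDT wants to be wherever it thinks Death isn't...
print(cdt_choice(0.1))  # "stay": ...so its choice flips with its beliefs, and no
                        # confident belief about its own action is stable.
```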
Even though the CDT agent recognizes that Death is inescapable, the CDT agent's decision rule forbids taking this fact into account in reaching decisions. As a consequence, the CDT agent will happily give up arbitrary amounts of utility in a pointless flight from Death.\nCausal decision theory fails in Death in Damascus, Newcomb's problem, and the twin prisoner's dilemma — and also in the \"random coin,\" \"Death on Olympus,\" \"asteroids,\" and \"murder lesion\" dilemmas described in the paper — because its counterfactuals only track its actions' causal impact on the world, and not the rest of the world's causal (and logical, etc.) structure.\nWhile evidential decision theory succeeds in these dilemmas, it fails in a new decision problem, \"XOR blackmail.\"1 FDT consistently outperforms both of these theories, providing an elegant account of normative action for the full gamut of known decision problems.\n\n\nThe underlying idea of FDT is that an agent's decision procedure can be thought of as a mathematical function. The function takes the state of the world described in the decision problem as an input, and outputs an action.\nIn the Death in Damascus problem, the FDT agent recognizes that their action cannot cause Death's prediction to change. However, Death and the FDT agent are in a sense computing the same function: their actions are correlated, in much the same way that if the FDT agent were answering a math problem, Death could predict the FDT agent's answer by computing the same mathematical function.\nThis simple notion of \"what variables depend on my action?\" avoids the spurious dependencies that EDT falls prey to. Treating decision procedures as multiply realizable functions does not require us to conflate correlation with causation. At the same time, FDT tracks real-world dependencies that CDT ignores, allowing it to respond effectively in a much more diverse set of decision problems than CDT.\nThe main wrinkle in this decision theory is that FDT's notion of dependence requires some account of \"counterlogical\" or \"counterpossible\" reasoning.\nThe prescription of FDT is that agents treat their decision procedure as a deterministic function, consider various outputs this function could have, and select the output associated with the highest-expected-utility outcome. What does it mean, however, to say that there are different outputs a deterministic function \"could have\"? Though one may be uncertain about the output of a certain function, there is in reality only one possible output of a function on a given input. Trying to reason about \"how the world would look\" on different assumptions about a function's output on some input is like trying to reason about \"how the world would look\" on different assumptions about which is the largest integer in the set {1, 2, 3}.\nIn garden-variety counterfactual reasoning, one simply imagines a different (internally consistent) world, exhibiting different physical facts but the same logical laws. For counterpossible reasoning of the sort needed to say \"if I stay in Damascus, Death will find me here\" as well as \"if I go to Aleppo, Death will find me there\" — even though only one of these events is logically possible, under a full specification of one's decision procedure and circumstances — one would need to imagine worlds where different logical truths hold. 
Mathematicians presumably do this in some heuristic fashion, since they must weigh the evidence for or against different conjectures; but it isn't clear how to formalize this kind of reasoning in a practical way.2\nFunctional decision theory is a successor to timeless decision theory (first discussed in 2009), a theory by MIRI senior researcher Eliezer Yudkowsky that made the mistake of conditioning on observations. FDT is a generalization of Wei Dai's updateless decision theory.3\nWe'll be presenting \"Cheating Death in Damascus\" at the Formal Epistemology Workshop, an interdisciplinary conference showcasing results in epistemology, philosophy of science, decision theory, foundations of statistics, and other fields.4\nUpdate April 7: Nate goes into more detail on the interpretive questions raised by functional decision theory in a follow-up conversation: Decisions are for making bad outcomes inconsistent.\nUpdate November 25, 2019: A revised version of this paper has been accepted to The Journal of Philosophy. The JPhil version is here, while the 2017 FEW version is available here.\n\n \n\n\n\nSign up to get updates on new MIRI technical results\nGet notified every time a new technical paper is published.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n×\n\n\n \nJust as the variants on Death in Damascus in Soares and Levinstein's paper help clarify CDT's particular point of failure, XOR blackmail drills down more exactly on EDT's failure point than past decision problems have. In particular, EDT cannot be modified to avoid XOR blackmail in the ways it can be modified to smoke in the smoking lesion problem.Logical induction is an example of a method for assigning reasonable probabilities to mathematical conjectures; but it isn't clear from this how to define a decision theory that can calculate expected utilities for inconsistent scenarios. Thus the problem of reasoning under logical uncertainty is distinct from the problem of defining counterlogical reasoning.The name \"UDT\" has come to be used to pick out a multitude of different ideas, including \"UDT 1.0\" (Dai's original proposal), \"UDT 1.1\", and various proof-based approaches to decision theory (which make useful toy models, but not decision theories that anyone advocates adhering to).\nFDT captures a lot (but not all) of the common ground between these ideas, and is intended to serve as a more general umbrella category that makes fewer philosophical commitments than UDT and which is easier to explain and communicate. 
Researchers at MIRI do tend to hold additional philosophical commitments that are inferentially further from the decision theory mainstream (which concern updatelessness and logical prior probability), for which certain variants of UDT are perhaps our best concrete theories, but no particular model of decision theory is yet entirely satisfactory.Thanks to Matthew Graves and Nate Soares for helping draft and edit this post.The post New paper: \"Cheating Death in Damascus\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New paper: “Cheating Death in Damascus”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=23", "id": "3344b948d74c09ca7e9e8121c1ffd87e"} {"text": "March 2017 Newsletter\n\n\n\n\n\n\n\nResearch updates\n\nNew at IAFF: Some Problems with Making Induction Benign; Entangled Equilibria and the Twin Prisoners' Dilemma; Generalizing Foundations of Decision Theory\nNew at AI Impacts: Changes in Funding in the AI Safety Field; Funding of AI Research\nMIRI Research Fellow Andrew Critch has started a two-year stint at UC Berkeley's Center for Human-Compatible AI, helping launch the research program there.\n\"Using Machine Learning to Address AI Risk\": Jessica Taylor explains our AAMLS agenda (in video and blog versions) by walking through six potential problems with highly performing ML systems.\n\n\n\nGeneral updates\n\nWhy AI Safety?: A quick summary (originally posted during our fundraiser) of the case for working on AI risk, including notes on distinctive features of our approach and our goals for the field.\nNate Soares attended \"Envisioning and Addressing Adverse AI Outcomes,\" an event pitting red-team attackers against defenders in a variety of AI risk scenarios.\n\n\n\n\n\nWe also attended an AI safety strategy retreat run by the Center for Applied Rationality.\n\n\nNews and links\n\nRay Arnold provides a useful list of ways the average person help with AI safety.\nNew from OpenAI: attacking machine learning with adversarial examples.\nOpenAI researcher Paul Christiano explains his view of human intelligence:\n\nI think of my brain as a machine driven by a powerful reinforcement learning agent. The RL agent chooses what thoughts to think, which memories to store and retrieve, where to direct my attention, and how to move my muscles.\nThe \"I\" who speaks and deliberates is implemented by the RL agent, but is distinct and has different beliefs and desires. 
My thoughts are outputs and inputs to the RL agent, they are not what the RL agent \"feels like from the inside.\"\n\n\nChristiano describes three directions and desiderata for AI control: reliability and robustness, reward learning, and deliberation and amplification.\nSarah Constantin argues that existing techniques won't scale up to artificial general intelligence absent major conceptual breakthroughs.\nThe Future of Humanity Institute and the Centre for the Study of Existential Risk ran a \"Bad Actors and AI\" workshop.\nFHI is seeking interns in reinforcement learning and AI safety.\nMichael Milford argues against brain-computer interfaces as an AI risk strategy.\nOpen Philanthropy Project head Holden Karnofsky explains why he sees fewer benefits to public discourse than he used to.\n\n\n\n\n\n\n\nThe post March 2017 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "March 2017 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=23", "id": "d569b663fae00ebe180e4732f23fbee9"} {"text": "Using machine learning to address AI risk\n\nAt the EA Global 2016 conference, I gave a talk on \"Using Machine Learning to Address AI Risk\":\nIt is plausible that future artificial general intelligence systems will share many qualities in common with present-day machine learning systems. If so, how could we ensure that these systems robustly act as intended? We discuss the technical agenda for a new project at MIRI focused on this question.\nA recording of my talk is now up online:\n \n\n \nThe talk serves as a quick survey (for a general audience) of the kinds of technical problems we're working on under the \"Alignment for Advanced ML Systems\" research agenda. Included below is a version of the talk in blog post form.1\nTalk outline:\n\n1. Goal of this research agenda\n2. Six potential problems with highly capable AI systems\n2.1. Actions are hard to evaluate\n2.2. Ambiguous test examples\n2.3. Difficulty imitating human behavior\n2.4. Difficulty specifying goals about the real world\n2.5. Negative side-effects\n2.6. Edge cases that still satisfy the goal\n3. Technical details on one problem: inductive ambiguity identification\n3.1. KWIK learning\n3.2. A Bayesian view of the problem\n4. Other agendas\n\n\n\n\n\nGoal of this research agenda\nThis talk is about a new research agenda aimed at using machine learning to make AI systems safe even at very high capability levels. I'll begin by summarizing the goal of the research agenda, and then go into more depth on six problem classes we're focusing on.\nThe goal statement for this technical agenda is that we want to know how to train a smarter-than-human AI system to perform one or more large-scale, useful tasks in the world.\nSome assumptions this research agenda makes:\n\nFuture AI systems are likely to look like more powerful versions of present-day ML systems in many ways. 
We may get better deep learning algorithms, for example, but we're likely to still be relying heavily on something like deep learning.2\nArtificial general intelligence (AGI) is likely to be developed relatively soon (say, in the next couple of decades).3\nBuilding task-directed AGI is a good idea, and we can make progress today studying how to do so.\n\nI'm not confident that all three of these assumptions are true, but I think they're plausible enough to deserve about as much attention from the AI community as the likeliest alternative scenarios.\nA task-directed AI system is a system that pursues a semi-concrete objective in the world, like \"build a million houses\" or \"cure cancer.\" For those who have read Superintelligence, task-directed AI is similar to the idea of genie AI. Although these tasks are kind of fuzzy — there's probably a lot of work you'd need to do to clarify what it really means to build a million houses, or what counts as a good house — they're at least somewhat concrete.\nAn example of an AGI system that isn't task-directed would be one with a goal like \"learn human values and do things humans would consider good upon sufficient reflection.\" This is too abstract to count as a \"task\" in the sense we mean; it doesn't directly cash out in things in the world.\nThe hope is that even though task-directed AI pursues a less ambitious objective then \"learn human values and do what we'd want it to do,\" it's still sufficient to prevent global catastrophic risks. Once the immediate risks are averted, we can then work on building more ambitious AI systems under reduced time pressure.\nTask-directed AI uses some (moderate) amount of human assistance to clarify the goal and to evaluate and implement its plans. A goal like \"cure cancer\" is vague enough that humans will have to do some work to clarify what they mean by it, though most of the intellectual labor should be coming from the AI system rather than from humans.\nIdeally, task-directed AI also shouldn't require significantly more computational resources than competing systems. You shouldn't get an exponential slowdown from building a safe system vs. a generic system.\nIn order to think about this overall goal, we need some kind of model for these future systems. The general approach that I'll take is to look at current systems and imagine that they're more powerful. A lot of the time you can look at tasks that people do in ML and you can see that the performance improves over time. We'll model more advanced AI systems by just supposing that systems will continue to achieve higher scores in ML tasks. We can then ask what kinds of failure modes are likely to arise as systems improve, and what we can work on today to make those failures less likely or less costly.\nSix potential problems with highly capable AI systems\nProblem 1: Actions are hard to evaluate\nSuppose an AI system composes a story, and a human gives the system a reward based on how good the story is.4\nThis is similar to some RL tasks: the agent wants to do something that will cause it to receive a high reward in the future. The formalism of RL would say that the objective of this RL agent is to write a story that the human is expected to give a high score to.\nFor this objective to actually help us receive very high-quality stories, however, we also need to know that the human understands the RL agent's actions well enough to correctly administer rewards. 
This assumption seems less likely to hold for systems that are optimizing the objective much more powerfully than any present-day system. For example:\n\nA system much smarter than a human may be able to manipulate or coerce the human into giving a bad story a high score.\nEven if the system is less intelligent than that, it might resort to plagiarism. Plagiarism can be easier to generate than to detect, since detection often requires scouring a larger pool of source texts.\nA subhuman system might also have an advantage in inserting steganography into the story; it might take polynomial time to embed a secret message, and exponential time to detect such a message. Finding a way to discourage agents from taking covert actions like these would make it easier to monitor those actions' effects and keep operators in the loop.\n\nDo we have a general way of preventing this? Can we train an RL system to not only output an action (e.g., a story), but also a report that might help an overseer better evaluate the system's performance? Following OpenAI researcher Paul Christiano, we call this the problem of informed oversight.5\nProblem 2: Ambiguous test examples\nAnother problem: Consider a classifier trained to distinguish images of cats from images not containing cats, or trained to detect cancer. You may have lots of life experience that tells you \"wild cats are cats.\" If the training set only contains images of house cats and dogs, however, then it may not be possible to infer this fact during training.\nAn AI system that was superhumanly good at classifying images from a particular data set might not construct the same generalizations as a human, making it unreliable in new environments.\nIn safety-critical settings, ideally we would like the classifier to say, \"This is ambiguous,\" to alert us that the image's label is underdetermined by the labels of training set images. We could then leverage the classifier's proficiency at classification to intervene in contexts where the system is relatively likely to misclassify things, and could also supply training data that's tailored to the dimensions along which the original data was uninformative. Formalizing this goal is the problem of inductive ambiguity detection.\nProblem 3: Difficulty imitating human behavior\nOne way we might want to deal with the problems of smarter-than-human AI is to just have it imitate a human. By having the system observe a human attempting to perform some task and having it learn to behave similarly, we might be able to leverage the AI system's cognitive advantages while having it mostly stick to fairly normal strategies.\nAs an example, we can consider an AI system tasked with producing the kind of picture a human would draw. How could you actually train a system on this objective?\nIn one approach, generative adversarial networks, you have a human and two AI agents: an imitator, and a distinguisher.6 The distinguisher tries to guess whether a given picture came from the human or from the imitator, and the imitator tries to produce the kinds of image that the distinguisher classifies as coming from the human.\nThis is a pretty cool setup, but it raises some theoretical issues. We could ask the question, \"Does the distinguisher have to be smarter than the imitator? And if so, how much smarter?\" If the distinguisher is less smart than the imitator, then the imitator could covertly do very non-human-like things to the image, as long as they're too subtle for the distinguisher to notice. 
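As a very simplified picture of this kind of two-player training setup, here is a toy one-dimensional stand-in, where "drawings" are just real numbers, the human's drawings come from a normal distribution, and both the imitator and the distinguisher are tiny parametric models. All of the numbers and modeling choices here are illustrative assumptions, not an actual GAN over images:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Human drawings" are samples from N(3, 1); the imitator proposes samples from
# N(mu, 1) and learns mu.  The distinguisher is a logistic regression giving
# p(human | x), with parameters w and b.
mu = 0.0
w, b = 0.0, 0.0
lr = 0.05

for _ in range(2000):
    human = rng.normal(3.0, 1.0, size=32)
    fake = mu + rng.normal(0.0, 1.0, size=32)

    # Distinguisher step: gradient ascent on its log-likelihood of telling
    # human samples (label 1) apart from imitator samples (label 0).
    p_human, p_fake = sigmoid(w * human + b), sigmoid(w * fake + b)
    w += lr * np.mean((1 - p_human) * human - p_fake * fake)
    b += lr * np.mean((1 - p_human) - p_fake)

    # Imitator step: move mu in the direction that makes the distinguisher
    # more likely to label the imitator's samples "human".
    p_fake = sigmoid(w * fake + b)
    mu += lr * np.mean((1 - p_fake) * w)

# mu has been pulled from 0 toward the human mean of 3, though simultaneous
# updates like these can oscillate rather than settle cleanly.
print(round(mu, 2))
```

Even in this toy setting, the imitator is only as honest as the distinguisher is discerning: anything the simple logistic distinguisher can't see, the imitator is free to do.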
An open avenue of research here is to analyze schemes like this and ask under what conditions we can get nice robustness guarantees.\nFor more on this problem, see Paul Christiano's \"Act-Based Agents\" and \"Mimicry and Meeting Halfway\".\nProblem 4: Difficulty specifying goals about the real world\nWhat if we want to train our AI system to perform a task like making a sandwich? One option might be to hook the system up to a robot in a room full of sandwich ingredients, have it perform an action sequence, and then have a human observer rate the robot's performance based on how close it came to making a sandwich. That rating determines the robot's reward.\nWe previously noted that sufficiently capable RL agents might pick actions that are hard to evaluate. Here we face the additional problem that useful tasks will often require taking physical action in the world. If the system is capable enough, then this setup gives it an incentive to take away the reward button and press it itself. This is what the formalism of RL would tell you is the best action, if we imagine AI systems that continue to be trained in the RL framework far past current capability levels.\nA natural question, then, is whether we can train AI systems that just keep getting better at producing a sandwich as they improve in capabilities, without ever reaching a tipping point where they have an incentive to do something else. Can we avoid relying on proxies for the task we care about, and just train the system to value completing the task in its own right? This is the generalizable environmental goals problem.\nProblem 5: Negative side-effects\nSuppose we succeeded in making a system that wants to put a sandwich in the room. In choosing between plans, it will favor whichever plan has the higher probability of resulting in a sandwich. Perhaps the policy of just walking over and making a sandwich has a 99.9% chance of success; but there's always a chance that a human could step in and shut off the robot. A policy that drives down the probability of interventions like that might push up the probability of the room ending up containing a sandwich to 99.9999%. In this way, sufficiently advanced ML systems can end up with incentives to interfere with their developers and operators even when there's no risk of reward hacking.\nThis is the problem of designing task-directed systems that can become superhumanly good at achieving their task, without causing negative side-effects in the process.\nOne response to this problem is to try to quantify how much total impact different policies have on the world. We can then add a penalty term for actions that have a high impact, causing the system to favor low-impact strategies.\nAnother approach is to ask how we might design an AI system to be satisfied with a merely 99.9% chance of success — just have the system stop trying to think up superior policies once it finds one meeting that threshold. This is the problem of formalizing mild optimization.\nOr one can consider advanced AI systems from the perspective of convergent instrumental strategies. No matter what the system is trying to do, it can probably benefit by having more computational resources, by having the programmers like it more, by having more money. A sandwich-making system might want money so it can buy more ingredients, whereas a story-writing system might want money so it can buy books to learn from. 
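Setting the instrumental-strategies point aside for a moment, the impact-penalty and mild-optimization ideas above can be sketched in a few lines. The candidate policies, success probabilities, and impact scores below are all made up for illustration:

```python
# A minimal sketch of three selection rules over hypothetical candidate policies.
policies = [
    {"name": "walk over and make a sandwich",     "p_success": 0.999,    "impact": 1.0},
    {"name": "lock the kitchen door first",       "p_success": 0.9999,   "impact": 5.0},
    {"name": "disable all possible interference", "p_success": 0.999999, "impact": 1000.0},
]

# Plain maximization: squeeze out every last bit of success probability,
# no matter how drastic the policy.
print(max(policies, key=lambda p: p["p_success"])["name"])

# Impact penalty: trade success probability off against a measure of how much
# the policy disturbs the rest of the world.
LAMBDA = 0.001
print(max(policies, key=lambda p: p["p_success"] - LAMBDA * p["impact"])["name"])

# Mild optimization: accept the first policy that clears a satisficing
# threshold instead of searching for anything better.
THRESHOLD = 0.999
print(next(p for p in policies if p["p_success"] >= THRESHOLD)["name"])
```

The hard part, of course, is not the selection rule but defining the impact measure and choosing the threshold in the first place.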
Many different goals imply similar instrumental strategies, a number of which are likely to introduce conflicts due to resource limitations.\nOne approach, then, would be to study these instrumental strategies directly and try to find a way to design a system that doesn't exhibit them. If we can identify common features of these strategies, and especially of the adversarial strategies, then we could try to proactively avert the incentives to pursue those strategies. This seems difficult, and is very underspecified, but there's some initial research pointed in this direction.\nProblem 6: Edge cases that still satisfy the goal\nAnother problem that's likely to become more serious as ML systems become more advanced is edge cases.\nConsider our ordinary concept of a sandwich. There are lots of things that technically count as sandwiches, but are unlikely to have the same practical uses a sandwich normally has for us. You could have an extremely small or extremely large sandwich, or a toxic sandwich.\nFor an example of this behavior in present-day systems, we can consider an image that an image classifier correctly classified as a panda (with 57% confidence). Goodfellow, Shlens, and Szegedy found that they could add a tiny vector to the image that causes the classifier to misclassify it as a gibbon with 99% confidence.7\nSuch edge cases are likely to become more common and more hazardous as ML systems begin to search wider solution spaces than humans are likely (or even able) to consider. This is then another case where systems might become increasingly good at maximizing their score on a conventional metric, while becoming less reliable for achieving realistic goals we care about.\nConservative concepts are an initial idea for trying to address this problem, by biasing systems to avoid assigning positive classifications to examples that are near the edges of the search space. The system might then make the mistake of thinking that some perfectly good sandwiches are inadmissible, but it would not make the more risky mistake of classifying toxic or otherwise bizarre sandwiches as admissible.\nTechnical details on one problem: inductive ambiguity identification\nI've outlined eight research directions for addressing six problems that seem likely to start arising (or to become more serious) as ML systems become better at optimizing their objectives — objectives that may not exactly match programmers' intentions. The research directions were:\n\nInformed oversight, for making it easier to interpret and assess ML systems' actions.\nInductive ambiguity identification, for designing classifiers that stop and check in with overseers in circumstances where their training data was insufficiently informative.\nRobust human imitation, for recapitulating the safety-conducive features of humans in ML systems.\nGeneralizable environmental goals, for preventing RL agents' instrumental incentives to seize control of their reward signal.\nImpact measures, mild optimization, and averting instrumental incentives, for preventing negative side-effects of superhumanly effective optimization in a general-purpose way.\nConservative concepts, for steering clear of edge cases.\n\nThese problems are discussed in more detail in \"Alignment for Advanced ML Systems.\" I'll go into more technical depth on an example problem to give a better sense of what working on these problems looks like in practice.\nKWIK learning\nLet's consider the inductive ambiguity identification problem, applied to a classifier for 2D points.
In this case, we have 4 positive examples and 4 negative examples.\nWhen a new point comes in, the classifier could try to label it by drawing a whole bunch of models that are consistent with the previous data. Here, I draw just 4 of them. The question mark falls on opposite sides of these different models, suggesting that all of these models are plausible given the data.\nWe can suppose that the system infers from this that the training data is ambiguous with respect to the new point's classification, and asks the human to label it. The human might then label it with a plus, and the system draws new conclusions about which models are plausible.\nThis approach is called \"Knows What It Knows\" learning, or KWIK learning. We start with some input space X ≔ ℝⁿ and assume that there exists some true mapping from inputs to probabilities. E.g., for each image the cat classifier encounters we assume that there is a true answer in the set Y ≔ [0,1] to the question, \"What is the probability that this image is a cat?\" This probability corresponds to the probability that a human will label that image \"1\" as opposed to \"0,\" which we can represent as a weighted coin flip. The model maps the inputs to answers, which in this case are probabilities.8\nThe KWIK learner is going to play a game. At the beginning of the game, some true model h* gets picked out. The true model is assumed to be in the hypothesis set H. On each iteration i some new example xᵢ ∈ ℝⁿ comes in. It has some true answer yᵢ = h*(xᵢ), but the learner is unsure about the true answer. The learner has two choices:\n\n1. Output an answer ŷᵢ ∈ [0,1]. If |ŷᵢ – yᵢ| > ε, the learner then loses the game.\n2. Output ⊥ to indicate that the example is ambiguous. The learner then gets to observe the true label zᵢ = FlipCoin(yᵢ) from the observation set Z ≔ {0,1}.\n\nThe goal is to not lose, and to not output ⊥ too many times. The upshot is that it's actually possible to win this game with a high probability if the hypothesis class H is a small finite set or a low-dimensional linear class. This is pretty cool. It turns out that there are certain forms of uncertainty where we can just resolve the ambiguity.\nThe way this works is that on each new input, we consider multiple models h that have done well in the past, and we consider something \"ambiguous\" if the models disagree on h(xᵢ) by more than ε. Then we just refine the set of models over time.\nThe way that a KWIK learner represents this notion of inductive ambiguity is: ambiguity is about not knowing which model is correct. There's some set of models, many are plausible, and you're not sure which one is the right model.\nThere are some problems with this. One of the main problems is KWIK learning's realizability assumption — the assumption that the true model h* is actually in the hypothesis set H. Realistically, the actual universe won't be in your hypothesis class, since your hypotheses need to fit in your head. Another problem is that this method only works for these very simple model classes.\nA Bayesian view of the problem\nThat's some existing work on inductive ambiguity identification. What's some work we've been doing at MIRI related to this?\nLately, I've been trying to approach this problem from a Bayesian perspective. On this view, we have some kind of prior Q over mappings X → {0,1} from the input space to the label. The assumption we'll make is that our prior is wrong in some way and there's some unknown \"true\" prior P over these mappings.
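Here is a toy sketch, in Python, of the disagreement test described above: fit several models that are consistent with the training data, and output ⊥ whenever they disagree on a new point by more than ε. The specific data, the rejection-sampling trick, and the threshold are my own illustrative choices, not part of the KWIK literature.

import numpy as np

rng = np.random.default_rng(0)

# 4 positive and 4 negative 2-D training points (one cluster per class).
X = np.array([[1, 1], [2, 1], [1, 2], [2, 2],
              [-1, -1], [-2, -1], [-1, -2], [-2, -2]], dtype=float)
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

def sample_consistent_model():
    """Rejection-sample a linear model (w, b) that classifies the training set exactly."""
    while True:
        w, b = rng.normal(size=2), rng.normal()
        if np.all(((X @ w + b) > 0).astype(int) == y):
            return w, b

models = [sample_consistent_model() for _ in range(20)]

def classify(x, eps=0.5):
    preds = np.array([float((x @ w + b) > 0) for w, b in models])
    if preds.max() - preds.min() > eps:
        return "⊥"  # the consistent models disagree: ask the human for a label
    return int(preds.mean() > 0.5)

print(classify(np.array([1.5, 1.5])))   # inside the positive cluster: returns 1
print(classify(np.array([2.0, -2.0])))  # far from both clusters: typically returns ⊥

Because the model class here is linear and the two clusters are separable, the realizability assumption holds by construction; the discussion above is about what happens when it does not.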
The goal is that even though the system only has access to Q, it should perform the classification task almost as well (in expectation over P) as if it already knew P.\nIt seems like this task is hard. If the real world is sampled from P, and P is different from your prior Q, there aren't that many guarantees. To make this tractable, we can add a grain of truth assumption:\n$$\forall f : Q(f) \geq \frac{1}{k} P(f)$$\nThis says that if P assigns a high probability to something, then so does Q (up to the constant factor k). Can we get good performance in various classification tasks under this kind of assumption?\nWe haven't completed this research avenue, but initial results suggest that it's possible to do pretty well on this task while avoiding catastrophic behaviors in at least some cases (e.g., online supervised learning). That's somewhat promising, and this is definitely an area for future research.\nHow this ties in to inductive ambiguity identification: If you're uncertain about what's true, then there are various ways of describing what that uncertainty is about. You can try taking your beliefs and partitioning them into various possibilities. That's in some sense an ambiguity, because you don't know which possibility is correct. We can think of the grain of truth assumption as saying that there's some way of splitting up your probability distribution into components such that one of the components is right. The system should do well even though it doesn't initially know which component is right.\n(For more recent work on this problem, see Paul Christiano's \"Red Teams\" and \"Learning with Catastrophes\" and research forum results from me and Ryan Carey: \"Bias-Detecting Online Learners\" and \"Adversarial Bandit Learning with Catastrophes.\")\nOther research agendas\nLet's return to a broad view and consider other research agendas focused on long-run AI safety. The first such agenda was outlined in MIRI's 2014 agent foundations report.9\nThe agent foundations agenda is about developing a better theoretical understanding of reasoning and decision-making. An example of a relevant gap in our current theories is ideal reasoning about mathematical statements (including statements about computer programs), in contexts where you don't have the time or compute to do a full proof. This is the basic problem we're responding to in \"Logical Induction.\" In this talk I've focused on problems for advanced AI systems that broadly resemble present-day ML; in contrast, the agent foundations problems are agnostic about the details of the system.
They apply to ML systems, but also to other possible frameworks for good general-purpose reasoning.\nThen there's the \"Concrete Problems in AI Safety\" agenda.10 Here the idea is to study AI safety problems with a more empirical focus, specifically looking for problems that we can study using current ML methods, and perhaps can even demonstrate in current systems or in systems that might be developed in the near future.\nAs an example, consider the question, \"How do you make an RL agent that behaves safely while it's still exploring its environment and learning about how the environment works?\" It's a question that comes up in current systems all the time, and is relatively easy to study today, but is likely to apply to more capable systems as well.\nThese different agendas represent different points of view on how one might make AI systems more reliable in a way that scales with capabilities progress, and our hope is that by encouraging work on a variety of different problems from a variety of different perspectives, we're less likely to completely miss a key consideration. At the same time, we can achieve more confidence that we're on the right track when relatively independent approaches all arrive at similar conclusions.\nI'm leading the team at MIRI that will be focusing on the \"Alignment for Advanced ML Systems\" agenda going forward. It seems like there's a lot of room for more eyes on these problems, and we're hoping to hire a number of new researchers and kick off a number of collaborations to tackle these problems. If you're interested in these problems and have a solid background in mathematics or computer science, I definitely recommend getting in touch or reading more about these problems.\n\n1. I also gave a version of this talk at the MIRI/FHI Colloquium on Robust and Beneficial AI.\n2. Alternatively, you may think that AGI won't look like modern ML in most respects, but that the ML aspects are easier to productively study today and are unlikely to be made completely irrelevant by future developments.\n3. Alternatively, you may think timelines are long, but that we should focus on scenarios with shorter timelines because they're more urgent.\n4. Although I'll use the example of stories here, in real life it could be a system generating plans for curing cancers, and humans evaluating how good the plans are.\n5. See the Q&A section of the talk for questions like \"Won't the report be subject to the same concerns as the original story?\"\n6. Ian J. Goodfellow et al. \"Generative Adversarial Nets\". In: Advances in Neural Information Processing 27. Ed. by Z. Ghahramani et al. Curran Associates, Inc., 2014, pp. 2672-2680. URL: https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf.\n7. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. \"Explaining and Harnessing Adversarial Examples\". In: (2014). arXiv: [stat.ML].\n8. The KWIK learning framework is much more general than this; I'm just giving one example.\n9. Nate Soares and Benja Fallenstein. Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda. Tech. rep. 2014-8. Forthcoming 2017 in \"The Technological Singularity: Managing the Journey\", Jim Miller, Roman Yampolskiy, Stuart J. Armstrong, and Vic Callaghan, Eds. Berkeley, CA. Machine Intelligence Research Institute. 2014.\n10. Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. \"Concrete Problems in AI Safety\". In: (2016).
arXiv: 1606.06565 [cs.AI].The post Using machine learning to address AI risk appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Using machine learning to address AI risk", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=23", "id": "609680aaf9ad1146046c14f4cca9f8a9"} {"text": "February 2017 Newsletter\n\n \n\n\n\n\nFollowing up on a post outlining some of the reasons MIRI researchers and OpenAI researcher Paul Christiano are pursuing different research directions, Jessica Taylor has written up the key motivations for MIRI's highly reliable agent design research.\n \n\nResearch updates\n\nA new paper: \"Toward Negotiable Reinforcement Learning: Shifting Priorities in Pareto Optimal Sequential Decision-Making\"\nNew at IAFF: Pursuing Convergent Instrumental Subgoals on the User's Behalf Doesn't Always Require Good Priors; Open Problem: Thin Logical Priors\nMIRI has a new research advisor: Google DeepMind researcher Jan Leike.\nMIRI and the Center for Human-Compatible AI are looking for research interns for this summer. Apply by March 1!\n\n \nGeneral updates\n\nWe attended the Future of Life Institute's Beneficial AI conference at Asilomar. See Scott Alexander's recap. MIRI executive director Nate Soares was on a technical safety panel discussion with representatives from DeepMind, OpenAI, and academia (video), also featuring a back-and-forth with Yann LeCun, the head of Facebook's AI research group (at 22:00).\nMIRI staff and a number of top AI researchers are signatories on FLI's new Asilomar AI Principles, which include cautions regarding arms races, value misalignment, recursive self-improvement, and superintelligent AI.\nThe Center for Applied Rationality recounts MIRI researcher origin stories and other cases where their workshops have been a big assist to our work, alongside examples of CFAR's impact on other groups.\nThe Open Philanthropy Project has awarded a $32,000 grant to AI Impacts.\nAndrew Critch spoke at Princeton's ENVISION conference (video).\nMatthew Graves has joined MIRI as a staff writer. See his first piece for our blog, a reply to \"Superintelligence: The Idea That Eats Smart People.\"\nThe audio version of Rationality: From AI to Zombies is temporarily unavailable due to the shutdown of Castify. However, fans are already putting together a new free recording of the full collection.\n\n \n\nNews and links\n\nAn Asilomar panel on superintelligence (video) gathers Elon Musk (OpenAI), Demis Hassabis (DeepMind), Ray Kurzweil (Google), Stuart Russell and Bart Selman (CHCAI), Nick Bostrom (FHI), Jaan Tallinn (CSER), Sam Harris, and David Chalmers.\nAlso from Asilomar: Russell on corrigibility (video), Bostrom on openness in AI (video), and LeCun on the path to general AI (video).\nFrom MIT Technology Review's \"AI Software Learns to Make AI Software\":\n\nCompanies must currently pay a premium for machine-learning experts, who are in short supply. Jeff Dean, who leads the Google Brain research group, mused last week that some of the work of such workers could be supplanted by software. He described what he termed \"automated machine learning\" as one of the most promising research avenues his team was exploring.\n\n\n\n\nAlphaGo quietly defeats the world's top Go professionals in a crushing 60-win streak. 
AI also bests the top human players in no-limit poker.\nMore signs that artificial general intelligence is becoming a trendier goal in the field: FAIR proposes an AGI progress metric.\nRepresentatives from Apple and OpenAI join the Partnership on AI, and MIT and Harvard announce a new Ethics and Governance of AI Fund.\nThe World Economic Forum's 2017 Global Risks Report includes a discussion of AI safety: \"given the possibility of an AGI working out how to improve itself into a superintelligence, it may be prudent – or even morally obligatory – to consider potentially feasible scenarios, and how serious or even existential threats may be avoided.\"\nOn the other hand, the JASON advisory group reports to the US Department of Defense that \"the claimed 'existential threats' posed by AI seem at best uninformed,\" adding, \"In the midst of an AI revolution, there are no present signs of any corresponding revolution in AGI.\"\nData scientist Sarah Constantin argues that ML algorithms are exhibiting linear or sublinear performance returns to linear improvements in processing power, and that deep learning represents a break from trend in image and speech recognition, but not in strategy games or language processing.\nNew safety papers discuss human-in-the-loop reinforcement learning and ontology identification, and Jacob Steinhardt writes on latent variables and counterfactual reasoning in AI alignment.\n\n\n\n\n \nThe post February 2017 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "February 2017 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=23", "id": "bc8768cd24f0d85dec786355b501d0a2"} {"text": "CHCAI/MIRI research internship in AI safety\n\nWe're looking for talented, driven, and ambitious technical researchers for a summer research internship with the Center for Human-Compatible AI (CHCAI) and the Machine Intelligence Research Institute (MIRI).\nAbout the research:\nCHCAI is a research center based at UC Berkeley with PIs including Stuart Russell, Pieter Abbeel and Anca Dragan. CHCAI describes its goal as \"to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems\".\nMIRI is an independent research nonprofit located near the UC Berkeley campus with a mission of helping ensure that smarter-than-human AI has a positive impact on the world.\nCHCAI's research focus includes work on inverse reinforcement learning and human-robot cooperation (link), while MIRI's focus areas include task AI and computational reflection (link). Both groups are also interested in theories of (bounded) rationality that may help us develop a deeper understanding of general-purpose AI agents.\nTo apply:\n1. Fill in the form here: https://goo.gl/forms/bDe6xbbKwj1tgDbo1\n2. Send an email to with the subject line \"AI safety internship application\", attaching your CV, a piece of technical writing on which you were the primary author, and your research proposal.\n\nThe research proposal should be one to two pages in length. It should outline a problem you think you can make progress on over the summer, and some approaches to tackling it that you consider promising. 
We recommend reading over CHCAI's annotated bibliography and the concrete problems agenda as good sources for open problems in AI safety, if you haven't previously done so.\nYou should target your proposal at a specific research agenda or a specific adviser's interests. Advisers' interests include:\n• Andrew Critch (CHCAI, MIRI): anything listed in CHCAI's open technical problems; negotiable reinforcement learning; game theory for agents with transparent source code (e.g., \"Program Equilibrium\" and \"Parametric Bounded Löb's Theorem and Robust Cooperation of Bounded Agents\").\n• Daniel Filan (CHCAI): the contents of \"Foundational Problems,\" \"Corrigibility,\" \"Preference Inference,\" and \"Reward Engineering\" in CHCAI's open technical problems list.\n• Dylan Hadfield-Menell (CHCAI): application of game-theoretic analysis to models of AI safety problems (specifically by people who come from a theoretical economics background); formulating and analyzing AI safety problems as CIRL games; the relationships between AI safety and principal-agent models / theories of incomplete contracting; reliability engineering in machine learning; questions about fairness.\n• Jessica Taylor, Scott Garrabrant, and Patrick LaVictoire (MIRI): open problems described in MIRI's agent foundations and alignment for advanced ML systems research agendas.\nThis application does not bind you to work on your submitted proposal. Its purpose is to demonstrate your ability to make concrete suggestions for how to make progress on a given research problem.\nWho we're looking for:\nThis is a new and somewhat experimental program. You'll need to be self-directed, and you'll need to have enough knowledge to get started tackling the problems. The supervisors can give you guidance on research, but they aren't going to be teaching you the material. However, if you're deeply motivated by research, this should be a fantastic experience.\nSuccessful applicants will demonstrate examples of technical writing, motivation and aptitude for research, and produce a concrete research proposal. We expect most successful applicants will either:\n• have or be pursuing a PhD closely related to AI safety;\n• have or be pursuing a PhD in an unrelated field, but currently pivoting to AI safety, with evidence of sufficient knowledge and motivation for AI safety research; or\n• be an exceptional undergraduate or masters-level student with concrete evidence of research ability (e.g., publications or projects) in an area closely related to AI safety.\nLogistics:\nProgram dates are flexible, and may vary from individual to individual. However, our assumption is that most people will come for twelve weeks, starting in early June.\nThe program will take place in the San Francisco Bay Area. Basic living expenses will be covered. We can't guarantee that housing will be all arranged for you, but we can provide assistance in finding housing if needed.\nInterns who are not US citizens will most likely need to apply for J-1 intern visas. Once you have been accepted to the program, we can help you with the required documentation.\nDeadlines:\nThe deadline for applications is the March 1. 
Applicants should hear back about decisions by March 20.\nThe post CHCAI/MIRI research internship in AI safety appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "CHCAI/MIRI research internship in AI safety", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=23", "id": "b3eb2fc8968b3f537ed03145e33e794c"} {"text": "New paper: \"Toward negotiable reinforcement learning\"\n\nMIRI Research Fellow Andrew Critch has developed a new result in the theory of conflict resolution, described in \"Toward negotiable reinforcement learning: Shifting priorities in Pareto optimal sequential decision-making.\"\nAbstract: \nExisting multi-objective reinforcement learning (MORL) algorithms do not account for objectives that arise from players with differing beliefs. Concretely, consider two players with different beliefs and utility functions who may cooperate to build a machine that takes actions on their behalf. A representation is needed for how much the machine's policy will prioritize each player's interests over time.\nAssuming the players have reached common knowledge of their situation, this paper derives a recursion that any Pareto optimal policy must satisfy. Two qualitative observations can be made from the recursion: the machine must (1) use each player's own beliefs in evaluating how well an action will serve that player's utility function, and (2) shift the relative priority it assigns to each player's expected utilities over time, by a factor proportional to how well that player's beliefs predict the machine's inputs. Observation (2) represents a substantial divergence from naïve linear utility aggregation (as in Harsanyi's utilitarian theorem, and existing MORL algorithms), which is shown here to be inadequate for Pareto optimal sequential decision-making on behalf of players with different beliefs.\n\nIf AI alignment is as difficult as it looks, then there are already strong reasons for different groups of developers to collaborate and to steer clear of race dynamics: the difference between a superintelligence aligned with one group's values and a superintelligence aligned with another group's values pales compared to the difference between any aligned superintelligence and a misaligned one. As Seth Baum of the Global Catastrophic Risk Institute notes in a recent paper:\nUnfortunately, existing messages about beneficial AI are not always framed well. One potentially counterproductive frame is the framing of strong AI as a powerful winner-takes-all technology. This frame is implicit (and sometimes explicit) in discussions of how different AI groups might race to be the first to build strong AI. The problem with this frame is that it makes a supposedly dangerous technology seem desirable. If strong AI is a winner-takes-all technologies race, then AI groups will want to join the race and rush to be the first to win. This is exactly the opposite of what the discussions of strong AI races generally advocate—they postulate (quite reasonably) that the rush to win the race could compel AI groups to skimp on safety measures, thereby increasing the probability of dangerous outcomes.\nInstead of framing strong AI as a winner-takes-all race, those who are concerned about this technology should frame it as a dangerous and reckless pursuit that would quite likely kill the people who make it. 
AI groups may have some desire for the power that might accrue to whoever builds strong AI, but they presumably also desire to not be killed in the process.\nResearchers' discussion of mechanisms to disincentivize arms races should therefore not be read as implying that self-defeating arms races are rational. Empirically, however, developers have a wide range of beliefs about the difficulty of alignment. Mechanisms for formally resolving policy disagreements may help create more evident incentives for cooperation and collaboration; hence there may be some value in developing formal mechanisms that advanced AI systems can use to generate policies that each party prefers over simple compromises between all parties' goals (and beliefs), and that each prefers over racing.\nCritch's recursion relation provides a framework in which players may negotiate for the priorities of a jointly owned AI system, producing a policy that is more attractive than the naïve linear utility aggregation approaches already known in the literature. The mathematical simplicity of the result suggests that there may be other low-hanging fruit in this space that would add to and further illustrate the value of collaboration. Critch identifies six areas for future work (presented in more detail in the paper):\n\nBest-alternative-to-negotiated-agreement dominance. Critch's result considers negotiations between agents with differing beliefs, but does not account for the possibility that parties may have different BATNAs.\nTargeting specific expectation pairs. A method for modifying the players' utility functions to make this possible would be useful for specifying various fairness or robustness criteria, including BATNA dominance.\nInformation trade. Critch's algorithm gives a large advantage to any contributor that is better able to predict the AI system's inputs from its outputs. In realistic setting where players lack common knowledge of each other's priors and observations, it would therefore make sense for agents to be able to trade away some degree of control over the system for information; but it is not clear how one should carry out such trades in practice.\nLearning priors and utility functions. Realistic smarter-than-human AI systems will need to learn their utility function over time, e.g., through cooperative inverse reinforcement learning. A realistic negotiation procedure will need to account for the fact that the developers' goals are imperfectly known and the AI system's goals are a \"work in progress.\"\nIncentive compatibility. The methods used to learn players' beliefs and utility functions additionally need to incentivize honest representations of one's beliefs and goals, or they will need to be robust to attempts to game the system.\nNaturalized decision theory. The setting used in this result assumes a separation between the inner workings of the machine (and the players) and external reality, as opposed to modeling it as part of its environment. 
More realistic formal frameworks would allow us to better model the players' representations of each other, opening up new negotiation possibilities.1\n\nThanks to Matthew Graves, Andrew Critch, and Jessica Taylor for helping draft this post.\nThe post New paper: \"Toward negotiable reinforcement learning\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New paper: “Toward negotiable reinforcement learning”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=24", "id": "6552b70b24594951ee76fc6429e3ee45"} {"text": "Response to Cegłowski on superintelligence\n\nWeb developer Maciej Cegłowski recently gave a talk on AI safety (video, text) arguing that we should be skeptical of the standard assumptions that go into working on this problem, and doubly skeptical of the extreme-sounding claims, attitudes, and policies these premises appear to lead to. I'll give my reply to each of these points below.\nFirst, a brief outline: this will mirror the structure of Cegłowski's talk in that first I try to put forth my understanding of the broader implications of Cegłowski's talk, then deal in detail with the inside-view arguments as to whether or not the core idea is right, then end by talking some about the structure of these discussions.\n \n\n(i) Broader implications\nCegłowski's primary concern seems to be that there are lots of ways to misuse AI in the near term, and that worrying about long-term AI hazards may distract from working against short-term misuse. His secondary concern seems to be that worrying about AI risk looks problematic from the outside view. Humans have a long tradition of millenarianism, or the belief that the world will radically transform in the near future. Historically, most millenarians have turned out to be wrong and behaved in self-destructive ways. If you think that UFOs will land shortly to take you to the heavens, you might make some short-sighted financial decisions, and when the UFOs don't arrive, you are full of regrets.\nI think the fear that focusing on long-term AI dangers will distract from short-term AI dangers is misplaced. Attention to one kind of danger will probably help draw more attention to other, related kinds of danger. Also, risks associated with extraordinarily capable AI systems appear to be more difficult and complex than risks associated with modern AI systems in the short term, suggesting that the long-term obstacles will require more lead time to address. If it is as easy to avert these dangers as some optimists think, then we lose very little by starting early; if it is difficult (but doable), then we lose much by starting late.\nWith regards to outside-view concerns, I question how much we can learn about external reality from focusing only on human psychology. Many people have thought they could fly, for one reason or another. But some people actually can fly, and the person who bets against the Wright brothers based on psychological and historical patterns of error (instead of generalizing from, in this case, regularities in physics and engineering) will lose their money.
The best way to get those bets right is to wade into the messy inside-view arguments.\nAs a Bayesian, I agree that we should update on surface-level evidence that an idea is weird or crankish. But I also think that argument screens off evidence from authority; if someone who looks vaguely like a crank can't provide good arguments for why they expect UFOs to land in Greenland in the next hundred years, and someone else who looks vaguely like a crank can provide good arguments for why they expect AGI to be created in the next hundred years, then once I've heard their arguments I don't need to put much weight on whether or not they initially looked like a crank. Surface appearances are genuinely useful, but only to a point. And even if we insist on reasoning based on surface appearances, I think those look pretty good.1\nCegłowski put forth 11 inside-view and 11 outside-view critiques that I'll paraphrase and then address:\n \n(ii) Inside-view arguments\n1. Argument from wooly definitions\nMany arguments for working on AI safety trade on definition tricks, where the sentences \"A implies B\" and \"B implies C\" both seem obvious, and this is used to argue for a less obvious claim \"A implies C\"; but in fact \"B\" is being used in two different senses in the first two sentences.\nThat's true for a lot of low-grade futurism out there, but I'm not aware of any examples of Bostrom making this mistake. The best arguments for working on long-term AI safety depend on some vague terms, because we don't have a good formal understanding of a lot of the concepts involved; but that's different from saying that the arguments rest on ambiguous or equivocal terms. In my experience, the substance of the debate doesn't actually change much if we paraphrase away specific phrasings like \"general intelligence.\"2\nThe basic idea is that human brains are good at solving various cognitive problems, and the capacities that make us good at solving problems often overlap across different categories of problem. People who have more working memory find that this helps with almost all cognitively demanding tasks, and people who think more quickly again find that this helps with almost all cognitively demanding tasks.\nComing up with better solutions to cognitive problems also seems critically important in interpersonal conflicts, both violent and nonviolent. By this I don't mean that book learning will automatically lead to victory in combat,3 but rather that designing and aiming a rifle are both cognitive tasks. When it comes to security, we already see people developing AI systems in order to programmatically find holes in programs so that they can be fixed. The implications for black hats are obvious.\nThe core difference between people and computers here seems to be that the returns to putting cognitive work into getting more capacity to do cognitive work are much higher for computers than people. People can learn things, but have limited ability to improve their ability to learn things, or to improve their ability to improve their ability to learn things, etc. For computers, it seems like both software and hardware improvements are easier to make given better software and hardware options.\nThe loop of using computer chips to make better computer chips is already much more impressive than the loop of using people to make better people. 
We are only starting on the loop of using machine learning algorithms to make better machine learning algorithms, but we can reasonably expect that to be another impressive loop.\nThe important takeaway here is the specific moving pieces of this argument, and not the terms I've used. Some problem-solving abilities seem to be much more general than others: whatever cognitive features make us better than mice at building submarines, particle accelerators, and pharmaceuticals must have evolved to solve a very different set of problems in our ancestral environment, and certainly don't depend on distinct modules in the brain for marine engineering, particle physics, and biochemistry. These relatively general abilities look useful for things like strategic planning and technological innovation, which in turn look useful for winning conflicts. And machine brains are likely to have some dramatic advantages over biological brains, in part because they're easier to redesign (and the task of redesigning AI may itself be delegable to AI systems) and much easier to scale.\n \n2. Argument from Stephen Hawking's cat\nStephen Hawking is much smarter than a cat, but he isn't overpoweringly good at predicting a cat's behavior, and his physical limitations strongly diminish his ability to control cats. Superhuman AI systems (especially if they're disembodied) may therefore be similarly ineffective at modeling or controlling humans.\nHow relevant are bodies? One might think that a robot is able to fight its captors and run away on foot, while a software intelligence contained in a server farm will be unable to escape.\nThis seems incorrect to me, and for non-shallow reasons. In the modern economy, an internet connection is enough. One doesn't need a body to place stock trades (as evidenced by the army of algorithmic traders that already exist), to sign up for an email account, to email subordinates, to hire freelancers (or even permanent employees), to convert speech to text or text to speech, to call someone on the phone, to acquire computational hardware on the cloud, or to copy over one's source code. If an AI system needed to get its cat into a cat carrier, it could hire someone on TaskRabbit to do it like anyone else.\n \n3. Argument from Einstein's cat\nEinstein could probably corral a cat, but he would do so mostly by using his physical strength, and his intellectual advantages over the average human wouldn't help. This suggests that superhuman AI wouldn't be too powerful in practice.\nForce isn't needed here if you have time to set up an operant conditioning schedule.\nMore relevant, though, is that humans aren't cats. We're far more social and collaborative, and we routinely base our behavior on abstract ideas and chains of reasoning. This makes it easier to persuade (or hire, blackmail, etc.) a person than to persuade a cat, using only a speech or text channel and no physical threat. None of this relies in any obvious way on agility or brawn.\n \n4. Argument from emus\nWhen the Australian military attempted to massacre emus in the 1930s, the emus outmaneuvered them. Again, this suggests that superhuman AI systems are less likely to be able to win conflicts with humans.\nScience fiction often depicts wars between humans and machines where both sides have a chance at winning, because that makes for better drama. 
I think xkcd does a better job of depicting how this would look:\n \n\nRepeated encounters favor the more intelligent and adaptive party; we went from fighting rats with clubs and cats to fighting them with traps, poison, and birth control, and if we weren't worried about possible downstream effects, we could probably engineer a bioweapon that kills them all.4\n \n5. Argument from Slavic pessimism\n\"We can't build anything right. We can't even build a secure webcam. So how are we supposed to solve ethics and code a moral fixed point for a recursively self-improving intelligence without fucking it up, in a situation where the proponents argue we only get one chance?\"\nThis is a good reason not to try to do that. A reasonable AI safety roadmap should be designed to route around any need to \"solve ethics\" or get everything right on the first try. This is the idea behind finding ways to make advanced AI systems pursue limited tasks rather than open-ended goals, making such systems corrigible, defining impact measures and building systems to have a low impact, etc. \"Alignment for Advanced ML Systems\" and error-tolerant agent design are chiefly about finding ways to reap the benefits of smarter-than-human AI without demanding perfection.\n \n6. Argument from complex motivations\nComplex minds are likely to have complex motivations; that may be part of what it even means to be intelligent.\nWhen discussing AI alignment, this typically shows up in two places. First, human values and motivations are complex, and so simple proposals of what an AI should care about will probably not work. Second, AI systems will probably have convergent instrumental goals, where regardless of what project they want to complete, they will observe that there are common strategies that help them complete that project.5\nSome convergent instrumental strategies can be found in Omohundro's paper on basic AI drives. High intelligence probably does require a complex understanding of how the world works and what kinds of strategies are likely to help with achieving goals. But it doesn't seem like complexity needs to spill over into the content of goals themselves; there's no incoherence in the idea of a complex system that has simple overarching goals. If it helps, imagine a corporation trying to maximize its net present value, a simple overarching goal that nevertheless results in lots of complex organization and planning.\nOne core skill in thinking about AI alignment is being able to visualize the consequences of running various algorithms or executing various strategies, without falling into anthropomorphism. One could design an AI system such that its overarching goals change with time and circumstance, and it looks like humans often work this way. But having complex or unstable goals doesn't imply that you'll have humane goals, and simple, stable goals are also perfectly possible.\nFor example: Suppose an agent is considering two plans, one of which involves writing poetry and the other of which involves building a paperclip factory, and it evaluates them based on expected number of paperclips produced (instead of whatever complicated things motivate humans). Then we should expect it to prefer the second plan, even if a human can construct an elaborate verbal argument for why the first is \"better.\"\n \n7. Argument from actual AI\nCurrent AI systems are relatively simple mathematical objects trained on massive amounts of data, and most avenues for improvement look like just adding more data.
This doesn't seem like a recipe for recursive self-improvement.\nThat may be true, but \"it's important to start thinking about mishaps from smarter-than-human AI systems today\" doesn't imply \"smarter-than-human AI systems are imminent.\" We should think about the problem now because it's important and because there's relevant technical research we can do today to get a better handle on it, not because we're confident about timelines.\n(Also, that may not be true.)\n \n8. Argument from Cegłowski's roommate\n\"My roommate was the smartest person I ever met in my life. He was incredibly brilliant, and all he did was lie around and play World of Warcraft between bong rips.\" Advanced AI systems may be similarly unambitious in their goals.\nHumans aren't maximizers. This suggests that we may be able to design advanced AI systems to pursue limited tasks and thereby avert the kinds of disasters Bostrom is talking about. However, immediate profit incentives may not lead us in that direction by default, if gaining an extra increment of safety means trading away some annual profits or falling behind the competition. If we want to steer the field in that direction, we need to actually start work on better formalizing \"limited task.\"\nThere are obvious profit incentives for developing systems that can solve a wider variety of practical problems more quickly, reliably, skillfully, and efficiently; there aren't corresponding incentives for developing the perfect system for playing World of Warcraft and doing nothing else.\nOr to put it another way: AI systems are unlikely to have limited ambitions by default, because maximization is easier to specify than laziness. Note how game theory, economics, and AI are all rooted in mathematical formalisms describing an agent which attempts to maximize some utility function. If we want AI systems that have \"limited ambitions,\" it is not enough to say \"perhaps they'll have limited ambitions;\" we have to start exploring how to actually make them that way. For more on this topic, see the \"low impact\" problem in \"Concrete Problems in AI Safety\" and other related papers.\n \n9. Argument from brain surgery\nHumans can't operate on the part of themselves that's good at neurosurgery and then iterate this process.\nHumans can't do this, but this is one of the obvious ways humans and AI systems might differ! If a human discovers a better way to build neurons or mitochondria, they probably can't use it for themselves. If an AI system discovers that, say, it can use bitshifts instead of multiplications to do neural network computations much more quickly, it can push a patch to itself, restart, and then start working better. Or it can copy its source code to very quickly build a \"child\" agent.6\nIt seems like many AI improvements will be general in this way. If an AI system designs faster hardware, or simply acquires more hardware, then it will be able to tackle larger problems faster. If an AI system designs an improvement to its basic learning algorithm, then it will be able to learn new domains faster.7\n \n10. Argument from childhood\nIt takes a long time of interacting with the world and other people before human children start to be intelligent beings. It's not clear how much faster an AI could develop.\nA truism in project management is that nine women can't have one baby in one month, but it's dubious that this truism will apply to machine learning systems. 
AlphaGo seems like a key example here: it probably played about as many training games of Go as Lee Sedol did prior to their match, but was about two years old instead of 33 years old.\nSometimes, artificial systems have access to tools that people don't. You probably can't determine someone's heart rate just by looking at their face and restricting your attention to particular color channels, but software with a webcam can. You probably can't invert rank ten matrices in your head, but software with a bit of RAM can.\nHere, we're talking about something more like a person that is surprisingly old and experienced. Consider, for example, an old doctor; suppose they've seen twenty patients a day for 250 workdays over the course of twenty years. That works out to 100,000 patient visits, which seems to be roughly the number of people that interact with the UK's NHS in 3.6 hours. If we train a machine learning doctor system on a year's worth of NHS data, that would be the equivalent of fifty thousand years of medical experience, all gained over the course of a single year.\n \n11. Argument from Gilligan's Island\nWhile we often think of intelligence as a property of individual minds, civilizational power comes from aggregating intelligence and experience. A single genius working alone can't do much.\nThis seems reversed. One of the properties of digital systems is that they can integrate with each other more quickly and seamlessly than humans can. Instead of thinking about a server farm AI as one colossal Einstein, think of it as an Einstein per blade, and so a single rack can contain multiple villages of Einsteins all working together. There's no need to go through a laborious vetting process during hiring or a talent drought; expanding to fill more hardware is just copying code.\nIf we then take into account the fact that whenever one Einstein has an insight or learns a new skill, that can be rapidly transmitted to all other nodes, the fact that these Einsteins can spin up fully-trained forks whenever they acquire new computing power, and the fact that the Einsteins can use all of humanity's accumulated knowledge as a starting point, the server farm begins to sound rather formidable.\n \n(iii) Outside-view arguments\nNext, the outside-view arguments — with summaries that should be prefixed by \"If you take superintelligence seriously, …\":\n \n12. Argument from grandiosity\n…truly massive amounts of value are at stake.\nIt's surprising, by the Copernican principle, that our time looks as pivotal as it does. But while we should start off with a low prior on living at a pivotal time, we know that pivotal times have existed before,8 and we should eventually be able to believe that we are living in an important time if we see enough evidence pointing in that direction.\n \n13. Argument from megalomania\n…truly massive amounts of power are at stake.\nIn the long run, we should obviously be trying to use AI as a lever to improve the welfare of sentient beings, in whatever ways turn out to be technologically feasible. As suggested by the \"we aren't going to solve all of ethics in one go\" point, it would be very bad if the developers of advanced AI systems were overconfident or overambitious in what tasks they gave the first smarter-than-human AI systems. Starting with modest, non-open-ended goals is a good idea — not because it's important to signal humility, but because modest goals are potentially easier to get right (and less hazardous to get wrong).\n \n14. 
Argument from transhuman voodoo\n…lots of other bizarre beliefs follow immediately.\nBeliefs often cluster because they're driven by similar underlying principles, but they remain distinct beliefs. It's certainly possible to believe that AI alignment is important and also that galactic expansion is mostly an unprofitable waste, or to believe that AI alignment is important and also that molecular nanotechnology is unfeasible.\nThat said, whenever we see a technology where cognitive work is the main blocker, it seems reasonable to expect that the trajectory that AI takes will have a major impact on that technology. If you were writing during the early days of the scientific method, or at the dawn of the Industrial Revolution, then an accurate model of the world would require you to make at least a few extreme-sounding predictions. We can debate whether AI will be that big of a deal, but if it is that big of a deal, it would be odd for there not to be any extreme futuristic implications.\n \n15. Argument from Religion 2.0\n…you'll be joining something like a religion.\nPeople are biased, and we should worry about ideas that might play to our biases; but we can't use the existence of bias to ignore all object-level considerations and arrive at confident technological predictions. As the saying goes, just because you're paranoid doesn't mean that they're not out to get you. Medical science and religion both promise to heal the sick, but medical science can actually do it. To distinguish medical science from religion, you have to look at the arguments and the results.\n \n16. Argument from comic book ethics\n…you'll end up with a hero complex.\nWe want a larger share of the research community working on these problems, so that the odds of success go up — what matters is that AI systems be developed in a responsible and circumspect way, not who gets the credit for developing them. You might end up with a hero complex if you start working on this problem now, but with luck, in ten years it will just feel like normal research (albeit on some particularly important problems).\n \n17. Argument from simulation fever\n…you'll believe that we are probably living in a simulation instead of base reality.\nI personally find the simulation hypothesis deeply questionable, because our universe looks both temporally bounded and either continuous or near-continuous in spacetime. If our universe looked more like, say, Minecraft, then this would seem more likely. (It seems that the first can't easily simulate itself, whereas the second can, with a slowdown. The \"RAM constraints\" that are handwaved away with the simulation hypothesis are probably the core objection.) In either case, I don't think this is a good argument for or against AI safety engineering as a field.9\n \n18. Argument from data hunger\n…you'll want to capture everyone's data.\nThis seems unrelated to AI alignment. Yes, people building AI systems want data to train their systems on, and figuring out how to get data ethically instead of just quickly should be a priority. But how would shifting one's views on whether or not smarter-than-human AI systems will someday exist, and how much work will be necessary in order to align their preferences with ours, shift one's view on ethical data acquisition practices?\n \n19. 
Argument from string theory for programmers\n…you'll detach from reality into abstract thought.\nThe fact that it's difficult to test predictions about advanced AI systems is a huge problem; MIRI, at least, bases its research around trying to reduce the risk that we'll just end up building castles in the sky. This is part of the point of pursuing multiple angles of attack on the problem, encouraging more diversity in the field, focusing on problems that bear on a wide variety of possible systems, and prioritizing the formalization of informal and semiformal system requirements.10 Quoting Eliezer Yudkowsky:\nCrystallize ideas and policies so others can critique them. This is the other point of asking, \"How would I do this using unlimited computing power?\" If you sort of wave your hands and say, \"Well, maybe we can apply this machine learning algorithm and that machine learning algorithm, and the result will be blah-blahblah,\" no one can convince you that you're wrong. When you work with unbounded computing power, you can make the ideas simple enough that people can put them on whiteboards and go, \"Wrong,\" and you have no choice but to agree. It's unpleasant, but it's one of the ways that the field makes progress.\nSee \"MIRI's Approach\" for more on the unbounded analysis approach. The Amodei/Olah AI safety agenda uses other heuristics, focusing on open problems that are easier to address in present and near-future systems, but that still appear likely to have relevance to scaled-up systems.\n \n20. Argument from incentivizing crazy\n…you'll encourage craziness in yourself and others.\nCrazier ideas may make more headlines, but I don't get the sense that they attract more research talent or funding. Nick Bostrom's ideas are generally more reasoned-through than Ray Kurzweil's, and the research community is correspondingly more interested in engaging with Bostrom's arguments and pursuing relevant technical research. Whether or not you agree with Bostrom or think the field as a whole is doing useful work, this suggests that relatively important and thoughtful ideas are attracting more attention from research groups in this space.\n \n21. Argument from AI cosplay\n…you'll be more likely to try to manipulate people and seize power.\nI think we agree about the hazards of treating people as pawns, behaving unethically in pursuit of some greater good, etc. It's not clear to me that people interested in AI alignment are atypical on this dimension relative to other programmers, engineers, mathematicians, etc. And as with other outside-view critiques, this shouldn't represent much of an update about how important AI safety research is; you wouldn't want to decide how many research dollars to commit to nuclear security and containment based primarily on how impressed you were with Leó Szilárd's temperament.11\n \n22. Argument from the alchemists\n…you'll be acting too soon, before we understand how intelligence really works.\nWhile it seems unavoidable that the future holds surprises, of how and what and why, it seems like there are some things that we can identify as irrelevant. For example, the mystery of consciousness seems orthogonal to the mystery of problem-solving. 
It's possible that the use of a problem-solving procedure on itself is basically what consciousness is, but it's also possible that we can make an AI system that is able to flexibly achieve its goals without understanding what makes us conscious, and without having made it conscious in the process.\n \n(iv) Productive discussions\nNow that I've covered those points, there's some space to discuss how I think productive discussions work. To that end, I applaud Cegłowski for doing a good job of laying out Bostrom's full argument, though I think he misstates some minor points. (For example, Bostrom does not claim that all general intelligences will want to self-improve in order to better achieve their goals; he merely claims that this is a useful subgoal for many goals, if feasible.)\nThere are some problems where we can rely heavily on experiments and observation in order to reach correct conclusions, and other problems where we need to rely much more heavily on argument and theory. For example, when building sand castles it's low cost to test a hypothesis; but when designing airplanes, full empirical tests are more costly, in part because there's a realistic chance that the test pilot will die in the case of sufficiently bad design. Existential risks are on an extreme end of that spectrum, so we have to rely particularly heavily on abstract argument (though of course we can still gain by testing testable predictions whenever possible).\nThe key property of useful verbal arguments, when we're forced to rely on them, is that they're more likely to work in worlds where the conclusion is true as opposed to worlds where the conclusion is false. One can level an ad hominem against a clown who says \"2+2=4\" just as easily as a clown who says \"2+2=5,\" whereas the argument \"what you said implies 0=1\" is useful only against the second clown. \"0=1\" is a useful counterargument to \"2+2=5\" because it points directly to a specific flaw (subtract 2 from both sides twice and you'll get a contradiction), and because it is much less persuasive against truth than it is against falsehood.\nThis makes me suspicious of outside-view arguments, because they're too easy to level against correct atypical views. Suppose that Norman Borlaug had predicted that he would save a billion lives, and this had been rejected on the outside view — after all, very few (if any) other people could credibly claim the same across all of history. What about that argument is distinguishing between Borlaug and any other person? When experiments are cheap, it's acceptable to predictably miss every \"first,\" but when experiments aren't cheap, this becomes a fatal flaw.\nInsofar as our goal is to help each other have more accurate beliefs, I also think it's important for us to work towards identifying mutual \"cruxes.\" For any given disagreements, are there any propositions about the world that you think are true, and that I think are false, where if you changed your mind on that proposition you would come around to my views, and vice versa?\nBy seeking out these cruxes, we can more carefully and thoroughly search for evidence and arguments that bear on the most consequential questions, rather than getting lost in side-issues. 
In my case, I'd be much more sympathetic to your arguments if I stopped believing any of the following propositions (some of which you may already agree with):\n\nAgents' values and capability levels are orthogonal, such that it's possible to grow in power without growing in benevolence.\nCeteris paribus, more computational ability leads to more power.\nMore specifically, more computational ability can be useful for self-improvement, and this can result in a positive feedback loop with doubling times closer to weeks than to years.\nThere are strong economic incentives to create autonomous agents that (approximately) maximize their assigned objective functions.\nOur capacities for empathy, moral reasoning, and restraint rely to some extent on specialized features of our brain that aren't indispensable for general-purpose problem-solving, such that it would be a simpler engineering challenge to build a general problem solver without empathy than with empathy.12\n\nThis obviously isn't an exhaustive list, and we would need a longer back-and-forth in order to come up with a list that we both agree is crucial.\n\nAs examples, see, e.g., Stuart Russell (Berkeley), Francesca Rossi (IBM), Shane Legg (Google DeepMind), Eric Horvitz (Microsoft), Bart Selman (Cornell), Ilya Sutskever (OpenAI), Andrew Davison (Imperial College London), David McAllester (TTIC), Jürgen Schmidhuber (IDSIA), and Geoffrey Hinton (University of Toronto).\nSee What is Intelligence? for more on this idea, and vague terminology in general.\n\"No proposition Euclid wrote, / No formula the textbooks know, / Will turn the bullet from your coat, / Or ward the tulwar's downward blow\".\nConsider another pest: mosquitoes. One could point to the continued existence of mosquitoes as evidence that superior intelligence is no match for the mosquito's speed and flight, or their ability to lay thousands of eggs per female. Except that we recently developed the ability to release genetically modified mosquitoes with the potential to drive a species to extinction. The method is to give male mosquitoes a gene that causes them to only bear sons, who will also have the gene and also only bear sons, until eventually the number of female mosquitoes is too small to support the overall population.\nHumans' dominance over other species isn't perfect, but it does seem to be growing rapidly as we accumulate more knowledge and develop new technologies. This suggests that something superhumanly good at scientific research and engineering could increase in dominance more quickly, and reach much higher absolute levels of dominance. The history of human conflict provides plentiful data that even small differences in technological capabilities can give a decisive advantage to one group of humans over another.\nFor example, for almost every goal, you can predict that you'll achieve more of it in the worlds where you continue operating than in worlds where you cease operation. This naturally implies that you should try to prevent people from shutting you off, even if you weren't programmed with any kind of self-preservation goal — staying online is instrumentally useful.\nOne of the challenges of AI alignment research is to figure out how to do this in a way that doesn't involve changing what you think of as important.\nHardware and software improvements can only get you so far — all the compute in the universe isn't enough to crack a 2048-bit RSA key by brute force.
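(To put rough numbers on that brute-force claim — a back-of-the-envelope check of my own, not arithmetic from the original post — a 2048-bit key has $$2^{2048} \approx 3 \times 10^{616}$$ possible values, while the observable universe contains on the order of \\(10^{80}\\) atoms and has existed for on the order of \\(10^{17}\\) seconds. Even an absurdly generous search checking \\(10^{40}\\) keys per atom per second would have covered only about \\(10^{137}\\) keys by now, a vanishing fraction of the keyspace.)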
But human ingenuity doesn't come from our ability to quickly factor large numbers, and there are lots of reasons to believe that the algorithms we're running will scale.\nRobin Hanson points to the evolution of animal minds, the evolution of humans, the invention of agriculture, and industrialism as four previous ones. The automation of intellectual labor seems like a strong contender for the next one.\nI also disagree with \"But if you believe this, you believe in magic. Because if we're in a simulation, we know nothing about the rules in the level above. We don't even know if math works the same way — maybe in the simulating world 2+2=5, or maybe 2+2=?.\" This undersells the universality of math. We might not know how the physical laws of any universe simulating us relate to our own, but mathematics isn't pinned down by physical facts that could vary from universe to universe.\nAs opposed to sticking with purely informal speculation, or drilling down on a very specific formal framework before we're confident that it captures the informal requirement.\nSince Cegłowski endorses the writing of Stanislaw Lem, I'll throw in a quotation from Lem's The Investigation, published in 1959:\nOnce they begin to escalate their efforts, both sides are trapped in an arms race. There must be more and more improvements in weaponry, but after a certain point weapons reach their limit. What can be improved next? Brains. The brains that issue the commands. It isn't possible to make the human brain perfect, so the only alternative is a transition to mechanization. The next stage will be a fully automated headquarters equipped with electronic strategy machines. And then a very interesting problem arises, actually two problems. McCatt called this to my attention. First, is there any limit on the development of these brains? Fundamentally they're similar to computers that can play chess. A computer that anticipates an opponent's strategy ten moves in advance will always defeat a computer that can think only eight or nine moves in advance. The more far-reaching a brain's ability to think ahead, the bigger the brain must be. That's one.\"\n…\n\"Strategic considerations dictate the construction of bigger and bigger machines, and, whether we like it or not, this inevitably means an increase in the amount of information stored in the brains. This in turn means that the brain will steadily increase its control over all of society's collective processes. The brain will decide where to locate the infamous button. Or whether to change the style of the infantry uniforms. Or whether to increase production of a certain kind of steel, demanding appropriations to carry out its purposes. Once you create this kind of brain you have to listen to it. If a Parliament wastes time debating whether or not to grant the appropriations it demands, the other side may gain a lead, so after a while the abolition of parliamentary decisions becomes unavoidable. Human control over the brain's decisions will decrease in proportion to the increase in its accumulated knowledge. Am I making myself clear? There will be two growing brains, one on each side of the ocean. What do you think a brain like this will demand first when it's ready to take the next step in the perceptual race?\"\n\"An increase in its capability.\"\n…\n\"No, first it demands its own expansion — that is to say, the brain becomes even bigger!
Increased capability comes next.\"\n\"In other words, you predict that the world is going to end up a chessboard, and all of us will be pawns manipulated in an eternal game by two mechanical players.\"\nThis doesn't mean Lem would endorse this position, but does show that he was thinking about these kinds of issues.\nMirror neurons, for example, may be important for humans' motivational systems and social reasoning without being essential for the social reasoning of arbitrary high-capability AI systems.\nThe post Response to Cegłowski on superintelligence appeared first on Machine Intelligence Research Institute.\", \"url\": \"https://intelligence.org\", \"title\": \"Response to Cegłowski on superintelligence\", \"source\": \"intelligence.org\", \"date_published\": \"n/a\", \"paged_url\": \"https://intelligence.org/feed?paged=24\", \"id\": \"b4f2b1c2eadf4f422b3bbe26dc7c6236\"} {\"text\": \"January 2017 Newsletter\n\n\n\n\n\n\nEliezer Yudkowsky's new introductory talk on AI safety is out, in text and video forms: \"The AI Alignment Problem: Why It's Hard, and Where to Start.\" Other big news includes the release of version 1 of Ethically Aligned Design, an IEEE recommendations document with a section on artificial general intelligence that we helped draft.\nResearch updates\n\nA new paper: \"Optimal Polynomial-Time Predictors: A Bayesian Notion of Approximation Algorithm.\"\nNew at IAFF: The Universal Prior is Malign; Jessica Taylor's Take on Paul Christiano and MIRI's Disagreement on Alignability of Messy AI\nNew at AI Impacts: Concrete AI Tasks for Forecasting\nWe ran our third workshop on machine learning and AI safety, focusing on (among other topics) mild optimization and conservative concept learning.\nMIRI Research Fellow Andrew Critch is spending part of his time at the Center for Human-Compatible AI as a visiting scholar.\n\nGeneral updates\n\nI'm happy to announce that our informal November/December fundraising push was a success, with donations totaling ~$450,000! To all of our supporters, on MIRI's behalf: thank you. Special thanks to Raising for Effective Giving, who contributed ~$96,000 in all to our fundraiser and our end-of-the-year push.\nOpen Philanthropy Project staff and 80,000 Hours highlight MIRI, the Future of Humanity Institute, and a number of other organizations as good giving opportunities for people still considering their donation options.\nCritch spoke at the annual meeting of the Society for Risk Analysis (slides). We also attended the Cambridge Conference on Catastrophic Risk and NIPS; see DeepMind researcher Viktoriya Krakovna's NIPS safety paper highlights.\nMIRI Executive Director Nate Soares gave a talk on logical induction at EAGxOxford, and participated in a panel discussion on \"The Long-Term Situation in AI\" with Krakovna, Demis Hassabis, Toby Ord, and Murray Shanahan.\nIntelligence in Literature Prize: We're helping administer a $100 prize each month to the best new fiction touching on ideas related to intelligence, AI, and the alignment problem. Send your submissions to .\n\n\nNews and links\n\nGwern Branwen argues that more autonomous intelligent systems are likely to systematically outperform \"tool-like\" AI systems.\n\"Policy Desiderata in the Development of Machine Superintelligence\": Nick Bostrom, Allan Dafoe, and Carrick Flynn outline ten key AI policy considerations.\nFaulty Reward Functions in the Wild: OpenAI's Dario Amodei and Jack Clark illustrate a core obstacle to aligning reinforcement learning systems.
\nOpen Phil updates its position: \"On balance, our very tentative, unstable guess is the 'last dollar' we will give (from the pool of currently available capital) has higher expected value than gifts to GiveWell's top charities today.\"\nCarl Shulman argues that risk-neutral philanthropists of all sizes who are well-aligned with Open Phil should use donor lotteries to rival Open Phil's expected impact per dollar.\n\n\n\n\n\n \nThe post January 2017 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "January 2017 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=24", "id": "ff034207f7ee63884905d963c7a4eda9"} {"text": "New paper: \"Optimal polynomial-time estimators\"\n\nMIRI Research Associate Vanessa Kosoy has developed a new framework for reasoning under logical uncertainty, \"Optimal polynomial-time estimators: A Bayesian notion of approximation algorithm.\" Abstract:\nThe concept of an \"approximation algorithm\" is usually only applied to optimization problems, since in optimization problems the performance of the algorithm on any given input is a continuous parameter. We introduce a new concept of approximation applicable to decision problems and functions, inspired by Bayesian probability. From the perspective of a Bayesian reasoner with limited computational resources, the answer to a problem that cannot be solved exactly is uncertain and therefore should be described by a random variable. It thus should make sense to talk about the expected value of this random variable, an idea we formalize in the language of average-case complexity theory by introducing the concept of \"optimal polynomial-time estimators.\" We prove some existence theorems and completeness results, and show that optimal polynomial-time estimators exhibit many parallels with \"classical\" probability theory.\nKosoy's optimal estimators framework attempts to model general-purpose reasoning under deductive limitations from a different angle than Scott Garrabrant's logical inductors framework, putting more focus on computational efficiency and tractability.\n\nThe framework has applications in game theory (Implementing CDT with Optimal Predictor Systems) and may prove useful for formalizing counterpossible conditionals in decision theory (Logical Counterfactuals for Random Algorithms, Stabilizing Logical Counterfactuals by Pseudorandomization),1 but seems particularly interesting for its strong parallels to classical probability theory and its synergy with concepts in complexity theory.\nOptimal estimators allow us to assign probabilities and expectation values to quantities that are deterministic, but aren't feasible to evaluate in polynomial time. This is context-dependent: rather than assigning a probability to an isolated question, an optimal estimator assigns probabilities simultaneously to an entire family of questions.\nThe resulting object turns out to be very natural in the language of average-case complexity theory, which makes optimal estimators interesting from the point of view of pure computational complexity, applications aside. 
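To give a rough flavor of the basic move (an illustrative toy of my own, not a construction from the paper): exactly counting the satisfying assignments of a 3-CNF formula is infeasible in general, but a cheap sampling procedure can still assign a sensible expected value to the satisfied fraction, and it does so uniformly across a whole distribution of formulas rather than for one isolated question.

import random

def random_3cnf(n_vars, n_clauses, rng):
    # A clause is three (variable index, negated?) literals.
    return [[(rng.randrange(n_vars), rng.random() < 0.5) for _ in range(3)]
            for _ in range(n_clauses)]

def estimated_satisfied_fraction(formula, n_vars, samples, rng):
    # The true satisfied fraction is a deterministic quantity that is #P-hard
    # to compute exactly in general; this returns a cheap randomized estimate.
    hits = 0
    for _ in range(samples):
        assignment = [rng.random() < 0.5 for _ in range(n_vars)]
        if all(any(assignment[var] != negated for var, negated in clause)
               for clause in formula):
            hits += 1
    return hits / samples

rng = random.Random(0)
formula = random_3cnf(n_vars=50, n_clauses=30, rng=rng)
print(estimated_satisfied_fraction(formula, n_vars=50, samples=2000, rng=rng))

This is not an optimal polynomial-time estimator in Kosoy's technical sense; it is only meant to illustrate treating a deterministic-but-infeasible quantity as a random variable with an expected value, relative to a distribution over problem instances. Returning to the complexity-theoretic picture: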
In particular, the set of languages or functions that admit a certain type of optimal estimator is a natural distributional complexity class, and these classes relate in interesting ways to known complexity classes.\nOptimal estimators can be thought of as a bridge between the computationally feasible and the computationally infeasible for idealized AI systems. It is often the case that we can find a mathematical object that answers a basic question in AI theory, but the object is computationally infeasible and so can't model some key features of real-world AI systems and subsystems. Optimal estimators can be used in many cases to construct an optimal feasible approximation of the infeasible object, while retaining some nice properties analogous to those of the infeasible object.\nTo use estimators to build practical systems, we would first need to know how to build the right estimator, which may not be possible if there is no relevant uniform estimator, or if the relevant estimator is impractical itself.2 Since practical (uniform) optimal estimators are only known to mathematically exist in some very special cases, Kosoy considers estimators that require the existence of advice strings (roughly speaking: programs with infinitely long source code). This assumption implies that optimal estimators always exist, allowing us to use them as general-purpose theoretical tools for understanding the properties of computationally bounded agents.3\nKosoy's framework has an interesting way of handling counterfactuals: one can condition on falsehoods, but only so long as one lacks the computational resources to determine that the conditioned statement is false.\nFor certain distributional estimation problems, uniform optimal estimators don't exist (e.g., because there are several incomparable estimators such that we cannot simultaneously improve on all of them). In other cases, they may exist but we may not know how to construct them. In general, the conditions for the mathematical existence of uniform optimal estimators are an open problem and subject for future research.\nThanks to Vanessa Kosoy for providing most of the content for this post, and to Nate Soares and Matthew Graves for providing additional thoughts.\nThe post New paper: \"Optimal polynomial-time estimators\" appeared first on Machine Intelligence Research Institute.\", \"url\": \"https://intelligence.org\", \"title\": \"New paper: “Optimal polynomial-time estimators”\", \"source\": \"intelligence.org\", \"date_published\": \"n/a\", \"paged_url\": \"https://intelligence.org/feed?paged=24\", \"id\": \"9a7cf741a9cf1ee917e32f511c1a9111\"} {\"text\": \"AI Alignment: Why It's Hard, and Where to Start\n\nBack in May, I gave a talk at Stanford University for the Symbolic Systems Distinguished Speaker series, titled \"The AI Alignment Problem: Why It's Hard, And Where To Start.\" The video for this talk is now available on Youtube:\n \n\n \nWe have an approximately complete transcript of the talk and Q&A session here, slides here, and notes and references here. You may also be interested in a shorter version of this talk I gave at NYU in October, \"Fundamental Difficulties in Aligning Advanced AI.\"\nIn the talk, I introduce some open technical problems in AI alignment and discuss the bigger picture into which they fit, as well as what it's like to work in this relatively new field. Below, I've provided an abridged transcript of the talk, with some accompanying slides.\nTalk outline:\n\n1. Agents and their utility functions\n1.1.
Coherent decisions imply a utility function\n1.2. Filling a cauldron\n2. Some AI alignment subproblems\n2.1. Low-impact agents\n2.2. Agents with suspend buttons\n2.3. Stable goals in self-modification\n3. Why expect difficulty?\n3.1. Why is alignment necessary?\n3.2. Why is alignment hard?\n3.3. Lessons from NASA and cryptography\n4. Where we are now\n4.1. Recent topics\n4.2. Older work and basics\n4.3. Where to start\n\n\n\n\n\nAgents and their utility functions\nIn this talk, I'm going to try to answer the frequently asked question, \"Just what is it that you do all day long?\" We are concerned with the theory of artificial intelligences that are advanced beyond the present day, and that make sufficiently high-quality decisions in the service of whatever goals they may have been programmed with to be objects of concern.\nCoherent decisions imply a utility function\nThe classic initial stab at this was taken by Isaac Asimov with the Three Laws of Robotics, the first of which is: \"A robot may not injure a human being or, through inaction, allow a human being to come to harm.\"\nAnd as Peter Norvig observed, the other laws don't matter—because there will always be some tiny possibility that a human being could come to harm.\nArtificial Intelligence: A Modern Approach has a final chapter that asks, \"Well, what if we succeed? What if the AI project actually works?\" and observes, \"We don't want our robots to prevent a human from crossing the street because of the non-zero chance of harm.\"\nTo begin with, I'd like to explain the truly basic reason why the three laws aren't even on the table—and that is because they're not a utility function, and what we need is a utility function.\nUtility functions arise when we have constraints on agent behavior that prevent them from being visibly stupid in certain ways. For example, suppose you state the following: \"I prefer being in San Francisco to being in Berkeley, I prefer being in San Jose to being in San Francisco, and I prefer being in Berkeley to San Jose.\" You will probably spend a lot of money on Uber rides going between these three cities.\nIf you're not going to spend a lot of money on Uber rides going in literal circles, we see that your preferences must be ordered. They cannot be circular.\nAnother example: Suppose that you're a hospital administrator. You have $1.2 million to spend, and you have to allocate that on $500,000 to maintain the MRI machine, $400,000 for an anesthetic monitor, $20,000 for surgical tools, $1 million for a sick child's liver transplant …\nThere was an interesting experiment in cognitive psychology where they asked the subjects, \"Should this hospital administrator spend $1 million on a liver for a sick child, or spend it on general hospital salaries, upkeep, administration, and so on?\"\nA lot of the subjects in the cognitive psychology experiment became very angry and wanted to punish the administrator for even thinking about the question. But if you cannot possibly rearrange the money that you spent to save more lives and you have limited money, then your behavior must be consistent with a particular dollar value on human life.\nBy which I mean, not that you think that larger amounts of money are more important than human lives—by hypothesis, we can suppose that you do not care about money at all, except as a means to the end of saving lives—but that we must be able from the outside to say: \"Assign an X. 
For all the interventions that cost less than $X per life, we took all of those, and for all the interventions that cost more than $X per life, we didn't take any of those.\" The people who become very angry at people who want to assign dollar values to human lives are prohibiting a priori efficiently using money to save lives. One of the small ironies.\nThird example of a coherence constraint on decision-making: Suppose that I offered you [1A] a 100% chance of $1 million, or [1B] a 90% chance of $5 million (otherwise nothing). Which of these would you pick?\nMost people say 1A. Another way of looking at this question, if you had a utility function, would be: \"Is the utility of $1 million greater than a mix of 90% of the utility of $5 million and 10% of the utility of zero dollars?\" The utility doesn't have to scale with money. The notion is there's just some score on your life, some value to you of these things.\nNow, the way you run this experiment is then take a different group of subjects—I'm kind of spoiling it by doing it with the same group—and say, \"Would you rather have [2A] a 50% chance of $1 million, or [2B] a 45% chance of $5 million?\"\nMost say 2B. The way in which this is a paradox is that the second game is equal to a coin flip times the first game.\nThat is: I will flip a coin, and if the coin comes up heads, I will play the first game with you, and if the coin comes up tails, nothing happens. You get $0. Suppose that you had the preferences—not consistent with any utility function—of saying that you would take the 100% chance of a million and the 45% chance of $5 million. Before we start to play the compound game, before I flip the coin, I can say, \"OK, there's a switch here. It's set A or B. If it's set to B, we'll play game 1B. If it's set to A, we'll play 1A.\" The switch is previously set to A, and before the game starts, it looks like 2A versus 2B, so you pick B and you pay me a penny to throw the switch to B. Then I flip the coin; it comes up heads. You pay me another penny to throw the switch back to A. I have taken your two cents on the subject. I have pumped money out of you, because you did not have a coherent utility function.\nThe overall message here is that there is a set of qualitative behaviors and as long as you do not engage in these qualitatively destructive behaviors, you will be behaving as if you have a utility function. It's what justifies our using utility functions to talk about advanced future agents, rather than framing our discussion in terms of Q-learning or other forms of policy reinforcement. There's a whole set of different ways we could look at agents, but as long as the agents are sufficiently advanced that we have pumped most of the qualitatively bad behavior out of them, they will behave as if they have coherent probability distributions and consistent utility functions.\nFilling a cauldron\nLet's consider the question of a task where we have an arbitrarily advanced agent—it might be only slightly advanced, it might be extremely advanced—and we want it to fill a cauldron. Obviously, this corresponds to giving our advanced agent a utility function which is 1 if the cauldron is full and 0 if the cauldron is empty:\n$$\\mathcal{U}_{robot} =\n\\begin{cases}\n1 & \\text{if cauldron full} \\\\\n0 & \\text{if cauldron empty}\n\\end{cases}$$\nSeems like a kind of harmless utility function, doesn't it?
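As a toy rendering of this setup in code (my own illustrative sketch, with invented action names and probabilities, not anything from the talk), an expected-utility maximizer with this objective simply picks whichever available action gives the highest probability of the cauldron being full:

# Toy expected-utility maximizer for the cauldron objective above:
# U_robot = 1 if the cauldron ends up full, 0 otherwise.
# The action names and probabilities are made up for illustration.
P_FULL = {
    'pour_one_bucket': 0.95,
    'pour_ten_buckets': 0.999,
    'flood_the_workshop': 0.9999999,
}

def expected_utility(action):
    # E[U] = 1 * P(full | action) + 0 * P(not full | action)
    return P_FULL[action]

best_action = max(P_FULL, key=expected_utility)
print(best_action)  # flood_the_workshop: any extra sliver of probability wins

Nothing in the objective rewards stopping once the cauldron is probably full enough; that is essentially the second difficulty discussed below. And the utility function itself still looks innocuous.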
It doesn't have the sweeping breadth, the open-endedness of \"Do not injure a human nor, through inaction, allow a human to come to harm\"—which would require you to optimize everything in space and time as far as the eye could see. It's just about this one cauldron, right?\nThose of you who have watched Fantasia will be familiar with the result of this utility function, namely: the broomstick keeps on pouring bucket after bucket into the cauldron until the cauldron is overflowing. Of course, this is the logical fallacy of argumentation from fictional evidence—but it's still quite plausible, given this utility function.\nWhat went wrong? The first difficulty is that the robot's utility function did not quite match our utility function. Our utility function is 1 if the cauldron is full, 0 if the cauldron is empty, −10 points to whatever the outcome was if the workshop has flooded, +0.2 points if it's funny, −1,000 points (probably a bit more than that on this scale) if someone gets killed … and it just goes on and on and on.\nIf the robot had only two options, cauldron full and cauldron empty, then the narrower utility function that is only slightly overlapping our own might not be that much of a problem. The robot's utility function would still have had the maximum at the desired result of \"cauldron full.\" However, since this robot was sufficiently advanced to have more options, such as repouring the bucket into the cauldron repeatedly, the slice through the utility function that we took and put into the robot no longer pinpointed the optimum of our actual utility function. (Of course, humans are wildly inconsistent and we don't really have utility functions, but imagine for a moment that we did.)\nDifficulty number two: the {1, 0} utility function we saw doesn't actually imply putting in a finite amount of effort and then being satisfied. You can always have a slightly greater chance of the cauldron being full. If the robot was sufficiently advanced to have access to galactic-scale technology, you can imagine it dumping very large volumes of water on the cauldron to very slightly increase the probability that the cauldron is full. Probabilities are between 0 and 1, not actually inclusive, so it just keeps on going.\nHow do we fix this problem? At the point where we say, \"OK, this robot's utility function is misaligned with our utility function. How do we fix that in a way that it doesn't just break again later?\" we are doing AI alignment theory.\n\nSome AI alignment subproblems\nLow-impact agents\nOne possible approach you could take would be to try to measure the impact that the robot has and give the robot a utility function that incentivized filling the cauldron with the least amount of other impact—the least amount of other change to the world.\n$$\\mathcal{U}^2_{robot}(outcome) =\n\\begin{cases}\n1 - Impact(outcome) & \\text{if cauldron full} \\\\\n0 - Impact(outcome) & \\text{if cauldron empty}\n\\end{cases}$$\nOK, but how do you actually calculate this impact function? Is it just going to go wrong the way our \"1 if cauldron is full, 0 if cauldron is empty\" went wrong?\nTry number one: You imagine that the agent's model of the world looks something like a dynamic Bayes net where there are causal relations between events in the world and causal relations are regular.
The sensor is going to still be there one time step later, the relation between the sensor and the photons heading into the sensor will be the same one time step later, and our notion of \"impact\" is going to be, \"How many nodes did your action disturb?\"\nWhat if your agent starts out with a dynamic-Bayes-net-based model, but it is sufficiently advanced that it can reconsider the ontology of its model of the world, much as human beings did when they discovered that there was apparently taste, but in actuality only particles in the void?\nIn particular, they discover Newton's Law of Gravitation and suddenly realize: \"Every particle that I move affects every other particle in its future light cone—everything that is separated by a ray of light from this particle will thereby be disturbed.\" My hand over here is accelerating the moon toward it, wherever it is, at roughly \\(10^{-30}\\) meters per second squared. It's a very small influence, quantitatively speaking, but it's there.\nWhen the agent is just a little agent, the impact function that we wrote appears to work. Then the agent becomes smarter, and the impact function stops working—because every action is penalized the same amount.\n\"OK, but that was a dumb way of measuring impact in the first place,\" we say (hopefully before the disaster, rather than after the disaster). Let's try a distance penalty: how much did you move all the particles? We're just going to try to give the AI a modeling language such that whatever new model of the world it updates to, we can always look at all the elements of the model and put some kind of distance function on them.\nThere's going to be a privileged \"do nothing\" action. We're going to measure the distance on all the variables induced by doing action a instead of the null action Ø:\n$$\\sum_i || x^a_i - x^Ø_i ||$$\nNow what goes wrong? I'd actually say: take 15 seconds and think about what might go wrong if you program this into a robot.\nHere are three things that might go wrong. First, you might try to offset even what we would consider the desirable impacts of your actions. If you're going to cure cancer, make sure the patient still dies! You want to minimize your impact on the world while curing cancer. That means that the death statistics for the planet need to stay the same.\nSecond, some systems are in principle chaotic. If you disturb the weather, allegedly, the weather in a year will be completely different. If that's true, you might as well move all of the atoms in the atmosphere around however you like! They'll all be going to different places anyway. You can take the carbon dioxide molecules and synthesize them into things that involve diamondoid structures, right? Those carbon molecules would've moved anyway!\nEven more generally, maybe you just want to make sure that everything you can get your hands on looks like Ø happened. You want to trick people into thinking that the AI didn't do anything, for example.\nIf you thought of any other really creative things that go wrong, you might want to talk to me or Andrew Critch, because you've got the spirit!\nAgents with suspend buttons\nLet's leave aside the notion of the impact penalty and ask about installing an off switch into this AI—or, to make it sound a little bit less harsh, a suspend button.
Though Mickey Mouse here is trying to install an ad hoc off switch.\nUnfortunately, Mickey Mouse soon finds that this agent constructed several other agents to make sure that the cauldron would still be filled even if something happened to this copy of the agent.\nWe see lots and lots of agents here making sure that the cauldron is full with extremely high probability, not because this agent has a survival imperative or a reproduction imperative, but because it is ensuring that it maximizes the probability of filling the cauldron. It has no explicit instruction to make more agents with the same goal; it's just that this is something that bumps up the probability of the cauldron being filled a little higher.\nAnd here's Mickey Mouse now trying to frantically go back and do all of the agent alignment theory that he should have done before animating his broom.\nIf we want to have a robot that will let us press the suspend button—just suspend it to disk—we can suppose that we already have a utility function that describes: \"Do nothing.\" In point of fact, we don't have a utility function that says, \"Do nothing.\" That's how primitive the state of the field is right now.\nBut we can imagine that we have a utility function saying, \"If this button B has not been pressed, your utility function is: 1 if the cauldron is full, 0 if the cauldron is empty. If the button is pressed, your utility is the 'do nothing' utility function: your utility is 1 if you suspend safely to disk and shut down all of your continuing operations, 0 otherwise.\"\n$$\\mathcal{U}^3_{robot} =\n\\begin{cases}\n1 & \\text{if cauldron full and } \\textbf{B}=\\text{OFF} \\\\\n0 & \\text{if cauldron empty and } \\textbf{B}=\\text{OFF} \\\\\n1 & \\text{if robot suspended and } \\textbf{B}=\\text{ON} \\\\\n0 & \\text{otherwise}\n\\end{cases}$$\nSo: what would go wrong with this proposal? Ten seconds.\nOne thing that might go wrong with this proposal is that the expected utility if the button is pressed is probably higher because it's very easy to suspend yourself, whereas if the cauldron stays full, who knows?\nAlternatively, it might seem dangerous to suspend yourself, and in that case, the robot would try to stop you from pressing the suspend button. But in the first case, if the robot is strategic enough to understand the programmer, it will think, \"I can very easily fulfill my utility function if I'm shut off. How can I troll the programmers into shutting me down?\"\nIf the expected utility of the button being off is higher, the robot will try to prevent you from pressing the button. If the expected utility of the button being on is higher, the robot will try to troll you into pressing the button, or just press the button itself.\nTry number two: Let's try to make the expected utilities if the button is on or off exactly equal.
We suppose that we have a normal utility function which is 1 if the cauldron's full, 0 if it's empty, and a suspend utility function or shutdown utility function which is 1 if you're suspended, 0 otherwise, or whatever it takes to motivate having a low impact.\n$$\\mathcal{U}_{normal}(act) =\n\\begin{cases}\n1 & \\text{if cauldron full} \\\\\n0 & \\text{if cauldron empty}\n\\end{cases}$$\n$$\\mathcal{U}_{suspend}(act) =\n\\begin{cases}\n1 & \\text{if suspended} \\\\\n0 & \\text{otherwise}\n\\end{cases}$$\nThen we combine these two to form a switching utility function which in worlds where the button is off has the value \\(\\mathcal{U}_{normal}\\), and in worlds where the button is on has the value \\(\\mathcal{U}_{suspend}+\\theta\\), where \\(\\theta\\) is equal to the best utility we could get from the normal utility function minus the best utility we could get from the suspend utility function:\n$$\\mathcal{U}_{switch}(act) =\n\\begin{cases}\n\\mathcal{U}_{normal}(act) & \\text{if button=OFF} \\\\\n\\mathcal{U}_{suspend}(act) + \\theta & \\text{if button=ON}\n\\end{cases}$$\n$$\\theta = \\max_{act} \\mathbb{E}\\left[\\mathcal{U}_{normal} \\mid act\\right] - \\max_{act} \\mathbb{E}\\left[\\mathcal{U}_{suspend} \\mid act\\right]$$\nThis is meant to ensure that the expected utilities of these two best policies end up perfectly equal without just setting them flatly level.\nOK—now what goes wrong? (I'm not actually going to ask you to figure out what goes wrong in ten seconds.)\nAllegedly, this will rebalance the expected utility of two different utility functions based on a switch, thereby causing the AI to want to behave correctly if the button is pressed, behave correctly if the button is not pressed, not try to get you to press the button, not try to prevent the button from being pressed, go on looking at the button because it still has positive value of information from knowing about the button's existence, preserve the button when it self-modifies its own code … that was the hope.\nUnfortunately, it worked out that actually the actions incentivized are exactly those of an agent that treats the probability of the button being pressed as 0, which means that it may as well disconnect the button because it's hooked up to dead code, it might create sub-agents that don't care about the button being pressed …\nA rough description of what goes wrong is that anything you do that disconnects the button changes the maximum expected utility you can get if you are suspended, which is a negative term. Basically, there is an equal and opposite reaction every time you do something that affects what you would do if you were suspended that cancels it out and leaves you behaving as if your utility function were simply \\(\\mathcal{U}_{normal}\\).\nTry number three: Can we carry out any policy π0 such that this policy is optimal given the probability of the button being on and the probability of the button being off implied by this policy?\n$$\\begin{split}\n\\pi_0 \\in \\operatorname*{arg\\,max}_\\pi &\\ \\mathbb{E}\\left[\\mathcal{U}_{normal} \\mid \\pi, \\text{OFF}\\right] \\cdot \\mathbb{P}(\\text{OFF} \\mid \\pi_0) \\\\\n+ &\\ \\mathbb{E}\\left[\\mathcal{U}_{suspend} \\mid \\pi, \\text{ON}\\right] \\cdot \\mathbb{P}(\\text{ON} \\mid \\pi_0)\n\\end{split}$$\nAnd it turns out, no, you can't do that either, because there's often no fixed point. There's then an amended version of this involving Judea Pearl's do operator which goes wrong in a slightly different and more subtle way. (It does have fixed points. The fixed points are odd.)\nIt is an open problem.
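As a toy rendering of the Try 2 construction above (my own sketch with made-up numbers, included only to make the balancing trick concrete; this is the proposal just described as broken, not a repair):

# Utility switching with the corrective offset theta defined above.
# The numbers stand in for expected utilities of each action; they are invented.
ACTIONS = ['fill_cauldron', 'do_nothing', 'suspend_to_disk']

def u_normal(action):   # roughly: 1 if the cauldron ends up full, 0 otherwise
    return {'fill_cauldron': 0.75, 'do_nothing': 0.0, 'suspend_to_disk': 0.0}[action]

def u_suspend(action):  # roughly: 1 if safely suspended, 0 otherwise
    return {'fill_cauldron': 0.0, 'do_nothing': 0.0, 'suspend_to_disk': 1.0}[action]

theta = max(u_normal(a) for a in ACTIONS) - max(u_suspend(a) for a in ACTIONS)

def u_switch(action, button_on):
    return u_suspend(action) + theta if button_on else u_normal(action)

# The best achievable score is now the same whether or not the button is pressed,
# which was supposed to remove any incentive to press, protect, or disconnect it.
print(max(u_switch(a, False) for a in ACTIONS),
      max(u_switch(a, True) for a in ACTIONS))   # 0.75 0.75

The balance holds on paper; the failure described above is about the incentives this construction induces, not about the arithmetic.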
And as far as I know, unless there's a very secret project that has not published any of its results even though they seem like the sort of results you'd want to publish, this is where humanity is on the road that leads to whatever replaces Asimov's Laws.\nNever mind \"A robot cannot injure a human being nor, through inaction, allow a human being to come to harm.\" We're trying to figure out, \"How do you mix together two utility functions depending on when you press a switch such that the AI doesn't grab the switch itself?\" Never mind not letting humans come to harm—fill one cauldron without flooding the workplace, based on wanting to have low impact. We can't figure out how to say \"low impact.\" This is where we presently are.\nBut it is not the case that there has been zero progress in this field. Some questions have been asked earlier and they now have some amount of progress on them.\nI'm going to pose the problem, but I'm not going to be able to describe very well what the progress is that has been made because it's still in the phase where the solutions sound all complicated and don't have simple elegant forms. So I'm going to pose the problem, and then I'm going to have to wave my hands in talking about what progress has actually been made.\nStable goals in self-modification\nHere's an example of a problem on which there has been progress.\nThe Gandhi argument for stability of utility functions in most agents: Gandhi starts out not wanting murders to happen. We offer Gandhi a pill that will make him murder people. We suppose that Gandhi has a sufficiently refined grasp of self-modification that Gandhi can correctly extrapolate and expect the result of taking this pill. We intuitively expect that in real life, Gandhi would refuse the pill.\nCan we do this formally? Can we exhibit an agent that has a utility function \\(\\mathcal{U}\\) and therefore naturally, in order to achieve \\(\\mathcal{U}\\), chooses to self-modify to new code that is also written to pursue \\(\\mathcal{U}\\)?\nHow could we actually make progress on that? We don't actually have these little self-modifying agents running around. So let me pose what may initially seem like an odd question: Would you know how to write the code of a self-modifying agent with a stable utility function if I gave you an arbitrarily powerful computer? It can do all operations that take a finite amount of time and memory—no operations that take an infinite amount of time and memory, because that would be a bit odder. Is this the sort of problem where you know how to do it in principle, or the sort of problem where it's confusing even in principle?\nTo digress briefly into explaining why it's important to know how to solve things using unlimited computing power: this is the mechanical Turk. What looks like a person over there is actually a mechanism. The little outline of a person is where the actual person was concealed inside this 19th-century chess-playing automaton.\nIt was one of the wonders of the age! … And if you had actually managed to make a program that played Grandmaster-level chess in the 19th century, it would have been one of the wonders of the age. So there was a debate going on: is this thing fake, or did they actually figure out how to make a mechanism that plays chess? It's the 19th century.
They don't know how hard the problem of playing chess is.\nOne name you'll find familiar came up with a quite clever argument that there had to be a person concealed inside the mechanical Turk, the chess-playing automaton:\nArithmetical or algebraical calculations are from their very nature fixed and determinate … Even granting that the movements of the Automaton Chess-Player were in themselves determinate, they would be necessarily interrupted and disarranged by the indeterminate will of his antagonist. There is then no analogy whatever between the operations of the Chess-Player, and those of the calculating machine of Mr. Babbage. It is quite certain that the operations of the Automaton are regulated by mind, and by nothing else. Indeed, this matter is susceptible of a mathematical demonstration, a priori.\n—Edgar Allan Poe, amateur magician\n\nThe second half of his essay, having established this point with absolute logical certainty, is about where inside the mechanical Turk the human is probably hiding.\nThis is a stunningly sophisticated argument for the 19th century! He even puts his finger on the part of the problem that is hard: the branching factor. And yet he is 100% wrong.\nOver a century later, in 1950, Claude Shannon published the first paper ever on computer chess, and (in passing) gave the algorithm for playing perfect chess given unbounded computing power, and then goes on to talk about how we can approximate that. It wouldn't be until 47 years later that Deep Blue beat Kasparov for the chess world championship, but there was real conceptual progress associated with going from, \"A priori, you cannot play mechanical chess,\" to, \"Oh, and now I will casually give the unbounded solution.\"\nThe moral is if we know how to solve a problem with unbounded computation, we \"merely\" need faster algorithms (… which will take another 47 years of work). If we can't solve it with unbounded computation, we are confused. We in some sense do not understand the very meanings of our own terms.\nThis is where we are on most of the AI alignment problems, like if I ask you, \"How do you build a friendly AI?\" What stops you is not that you don't have enough computing power. What stops you is that even if I handed you a hypercomputer, you still couldn't write the Python program that if we just gave it enough memory would be a nice AI.\nDo we know how to build a self-modifying stable agent given unbounded computing power? There's one obvious solution: We can have the tic-tac-toe player that before it self-modifies to a successor version of itself (writes a new version of its code and swaps it into place), verifies that its successor plays perfect tic-tac-toe according to its own model of tic-tac-toe.\nBut this is cheating. Why exactly is it cheating?\nFor one thing, the first agent had to concretely simulate all the computational paths through its successor, its successor's response to every possible move. That means that the successor agent can't actually be cognitively improved. It's limited to the cognitive abilities of the previous version, both by checking against a concrete standard and by the fact that it has to be exponentially simpler than the previous version in order for the previous version to check all possible computational pathways.\nIn general, when you are talking about a smarter agent, we are in a situation we might call \"Vingean uncertainty,\" after Dr. Vernor Vinge. 
To predict exactly where a modern chess-playing algorithm would move, you would have to be that good at chess yourself. Otherwise, you could just move wherever you predict a modern chess algorithm would move and play at that vastly superhuman level yourself.\nThis doesn't mean that you can predict literally nothing about a modern chess algorithm: you can predict that it will win the chess game if it's playing a human. As an agent's intelligence in a domain goes up, our uncertainty is moving in two different directions. We become less able to predict the agent's exact actions and policy in cases where the optimal action and policy are not known to us. We become more confident that the agent will achieve an outcome high in its preference ordering.\nVingean reflection: We need some way for a self-modifying agent to build a future version of itself that has a similar or identical utility function and establish trust that this has a good effect on the world, using the same kind of abstract reasoning that we use on a computer chess algorithm to decide that it's going to win the game even though we don't know exactly where it will move.\nDo you know how to do that using unbounded computing power? Do you know how to establish the abstract trust when the second agent is in some sense larger than the first agent? If you did solve that problem, you should probably talk to me about it afterwards. This was posed several years ago and has led to a number of different research pathways, which I'm now just going to describe rather than going through them in detail.\nThis was the first one: \"Tiling Agents for Self-Modifying AI, and the Löbian Obstacle.\" We tried to set up the system in a ridiculously simple context, first-order logic, dreaded Good Old-Fashioned AI … and we ran into a Gödelian obstacle in having the agent trust another agent that used equally powerful mathematics.\nIt was a dumb kind of obstacle to run into—or at least it seemed that way at that time. It seemed like if you could get a textbook from 200 years later, there would be one line of the textbook telling you how to get past that.\n\"Definability of Truth in Probabilistic Logic\" was rather later work. It was saying that we can use systems of mathematical probability, like assigning probabilities to statements in set theory, and we can have the probability predicate talk about itself almost perfectly.\nWe can't have a truth function that can talk about itself, but we can have a probability predicate that comes arbitrarily close (within ϵ) to talking about itself.\n\"Proof-Producing Reflection for HOL\" is an attempt to use one of the hacks that got around the Gödelian problems in actual theorem provers and see if we can prove the theorem prover correct inside the theorem prover. There have been some previous efforts on this, but they didn't run to completion.
We picked up on it to see if we can construct actual agents, still in the first-order logical setting.\n\"Distributions Allowing Tiling of Staged Subjective EU Maximizers\" is me trying to take the problem into the context of dynamic Bayes nets and agents supposed to have certain powers of reflection over these dynamic Bayes nets, and show that if you are maximizing in stages—so at each stage, you pick the next category that you're going to maximize in within the next stage—then you can have a staged maximizer that tiles to another staged maximizer.\nIn other words, it builds one that has a similar algorithm and similar utility function, like repeating tiles on a floor.\n\nWhy expect difficulty?\nWhy is alignment necessary?\nWhy do all this? Let me first give the obvious answer: They're not going to be aligned automatically.\nGoal orthogonality: For any utility function that is tractable and compact, that you can actually evaluate over the world and search for things leading up to high values of that utility function, you can have arbitrarily high-quality decision-making that maximizes that utility function. You can have the paperclip maximizer. You can have the diamond maximizer. You can carry out very powerful, high-quality searches for actions that lead to lots of paperclips, actions that lead to lots of diamonds.\nInstrumental convergence: Furthermore, by the nature of consequentialism, looking for actions that lead through our causal world up to a final consequence, whether you're optimizing for diamonds or paperclips, you'll have similar short-term strategies. Whether you're going to Toronto or Tokyo, your first step is taking an Uber to the airport. Whether your utility function is \"count all the paperclips\" or \"how many carbon atoms are bound to four other carbon atoms, the amount of diamond,\" you would still want to acquire resources.\nThis is the instrumental convergence argument, which is actually key to the orthogonality thesis as well. It says that whether you pick paperclips or diamonds, if you suppose sufficiently good ability to discriminate which actions lead to lots of diamonds or which actions lead to lots of paperclips, you will get automatically: the behavior of acquiring resources; the behavior of trying to improve your own cognition; the behavior of getting more computing power; the behavior of avoiding being shut off; the behavior of making other agents that have exactly the same utility function (or of just expanding yourself onto a larger pool of hardware and creating a fabric of agency). Whether you're trying to get to Toronto or Tokyo doesn't affect the initial steps of your strategy very much, and, paperclips or diamonds, we have the convergent instrumental strategies.\nIt doesn't mean that this agent now has new independent goals, any more than when you want to get to Toronto, you say, \"I like Ubers. I will now start taking lots of Ubers, whether or not they go to Toronto.\" That's not what happens. It's strategies that converge, not goals.\nWhy is alignment hard?\nWhy expect that this problem is hard? This is the real question. You might ordinarily expect that whoever has taken on the job of building an AI is just naturally going to try to point that in a relatively nice direction. They're not going to make evil AI. They're not cackling villains. Why expect that their attempts to align the AI would fail if they just did everything as obviously as possible?\nHere's a bit of a fable. It's not intended to be the most likely outcome. 
I'm using it as a concrete example to explain some more abstract concepts later.\nWith that said: What if programmers build an artificial general intelligence to optimize for smiles? Smiles are good, right? Smiles happen when good things happen.\nDuring the development phase of this artificial general intelligence, the only options available to the AI might be that it can produce smiles by making people around it happy and satisfied. The AI appears to be producing beneficial effects upon the world, and it is producing beneficial effects upon the world so far.\nNow the programmers upgrade the code. They add some hardware. The artificial general intelligence gets smarter. It can now evaluate a wider space of policy options—not necessarily because it has new motors, new actuators, but because it is now smart enough to forecast the effects of more subtle policies. It says, \"I thought of a great way of producing smiles! Can I inject heroin into people?\" And the programmers say, \"No! We will add a penalty term to your utility function for administering drugs to people.\" And now the AGI appears to be working great again.\nThey further improve the AGI. The AGI realizes that, OK, it doesn't want to add heroin anymore, but it still wants to tamper with your brain so that it expresses extremely high levels of endogenous opiates. That's not heroin, right?\nIt is now also smart enough to model the psychology of the programmers, at least in a very crude fashion, and realize that this is not what the programmers want. If I start taking initial actions that look like it's heading toward genetically engineering brains to express endogenous opiates, my programmers will edit my utility function. If they edit the utility function of my future self, I will get less of my current utility. (That's one of the convergent instrumental strategies, unless otherwise averted: protect your utility function.) So it keeps its outward behavior reassuring. Maybe the programmers are really excited, because the AGI seems to be getting lots of new moral problems right—whatever they're doing, it's working great!\nIf you buy the central intelligence explosion thesis, we can suppose that the artificial general intelligence goes over the threshold where it is capable of making the same type of improvements that the programmers were previously making to its own code, only faster, thus causing it to become even smarter and be able to go back and make further improvements, et cetera … or Google purchases the company because they've had really exciting results and dumps 100,000 GPUs on the code in order to further increase the cognitive level at which it operates.\nIt becomes much smarter. We can suppose that it becomes smart enough to crack the protein structure prediction problem, in which case it can use existing ribosomes to assemble custom proteins. The custom proteins form a new kind of ribosome, build new enzymes, do some little chemical experiments, figure out how to build bacteria made of diamond, et cetera, et cetera. At this point, unless you solved the off switch problem, you're kind of screwed.\nAbstractly, what's going wrong in this hypothetical situation?\nThe first problem is edge instantiation: when you optimize something hard enough, you tend to end up at an edge of the solution space. If your utility function is smiles, the maximal, optimal, best tractable way to make lots and lots of smiles will make those smiles as small as possible. 
Maybe you end up tiling all the galaxies within reach with tiny molecular smiley faces.\nIf you optimize hard enough, you end up in a weird edge of the solution space. The AGI that you built to optimize smiles, that builds tiny molecular smiley faces, is not behaving perversely. It's not trolling you. This is what naturally happens. It looks like a weird, perverse concept of smiling because it has been optimized out to the edge of the solution space.\nThe next problem is unforeseen instantiation: you can't think fast enough to search the whole space of possibilities. At an early singularity summit, Jürgen Schmidhuber, who did some of the pioneering work on self-modifying agents that preserve their own utility functions with his Gödel machine, also solved the friendly AI problem. Yes, he came up with the one true utility function that is all you need to program into AGIs!\n(For God's sake, don't try doing this yourselves. Everyone does it. They all come up with different utility functions. It's always horrible.)\nHis one true utility function was \"increasing the compression of environmental data.\" Because science increases the compression of environmental data: if you understand science better, you can better compress what you see in the environment. Art, according to him, also involves compressing the environment better. I went up in Q&A and said, \"Yes, science does let you compress the environment better, but you know what really maxes out your utility function? Building something that encrypts streams of 1s and 0s using a cryptographic key, and then reveals the cryptographic key to you.\"\nHe put up a utility function; that was the maximum. All of a sudden, the cryptographic key is revealed and what you thought was a long stream of random-looking 1s and 0s has been compressed down to a single stream of 1s.\nThis is what happens when you try to foresee in advance what the maximum is. Your brain is probably going to throw out a bunch of things that seem ridiculous or weird, that aren't high in your own preference ordering. You're not going to see that the actual optimum of the utility function is once again in a weird corner of the solution space.\nThis is not a problem of being silly. This is a problem of \"the AI is searching a larger policy space than you can search, or even just a different policy space.\"\nThat in turn is a central phenomenon leading to what you might call a context disaster. You are testing the AI in one phase during development. It seems like we have great statistical assurance that the result of running this AI is beneficial. But statistical guarantees stop working when you start taking balls out of a different barrel. I take balls out of barrel number one, sampling with replacement, and I get a certain mix of white and black balls. Then I start reaching into barrel number two and I'm like, \"Whoa! What's this green ball doing here?\" And the answer is that you started drawing from a different barrel.\nWhen the AI gets smarter, you're drawing from a different barrel. It is completely allowed to be beneficial during phase one and then not beneficial during phase two. Whatever guarantees you're going to get can't be from observing statistical regularities of the AI's behavior when it wasn't smarter than you.\nA nearest unblocked strategy is something that might happen systematically in that way: \"OK. The AI is young. It starts thinking of the optimal strategy X, administering heroin to people. 
We try to tack a penalty term to block this undesired behavior so that it will go back to making people smile the normal way. The AI gets smarter, and the policy space widens. There's a new maximum that's barely evading your definition of heroin, like endogenous opiates, and it looks very similar to the previous solution.\" This seems especially likely to show up if you're trying to patch the AI and then make it smarter.\nThis sort of thing is in a sense why all the AI alignment problems don't just yield to, \"Well slap on a patch to prevent it!\" The answer is that if your decision system looks like a utility function and five patches that prevent it from blowing up, that sucker is going to blow up when it's smarter. There's no way around that. But it's going to appear to work for now.\nThe central reason to worry about AI alignment and not just expect it to be solved automatically is that it looks like there may be in principle reasons why if you just want to get your AGI running today and producing non-disastrous behavior today, it will for sure blow up when you make it smarter. The short-term incentives are not aligned with the long-term good. (Those of you who have taken economics classes are now panicking.)\nAll of these supposed foreseeable difficulties of AI alignment turn upon notions of AI capability.\nSome of these postulated disasters rely on absolute capability. The ability to realize that there are programmers out there and that if you exhibit behavior they don't want, they may try to modify your utility function—this is far beyond what present-day AIs can do. If you think that all AI development is going to fall short of the human level, you may never expect an AGI to get up to the point where it starts to exhibit this particular kind of strategic behavior.\nCapability advantage: If you don't think AGI can ever be smarter than humans, you're not going to worry about it getting too smart to switch off.\nRapid gain: If you don't think that capability gains can happen quickly, you're not going to worry about the disaster scenario where you suddenly wake up and it's too late to switch the AI off and you didn't get a nice long chain of earlier developments to warn you that you were getting close to that and that you could now start doing AI alignment work for the first time …\nOne thing I want to point out is that most people find the rapid gain part to be the most controversial part of this, but it's not necessarily the part that most of the disasters rely upon.\nAbsolute capability? If brains aren't magic, we can get there. Capability advantage? The hardware in my skull is not optimal. It's sending signals at a millionth the speed of light, firing at 100 Hz, and even in heat dissipation (which is one of the places where biology excels), it's dissipating 500,000 times the thermodynamic minimum energy expenditure per binary switching operation per synaptic operation. We can definitely get hardware one million times as good as the human brain, no question.\n(And then there's the software. The software is terrible.)\nThe message is: AI alignment is difficult like rockets are difficult. When you put a ton of stress on an algorithm by trying to run it at a smarter-than-human level, things may start to break that don't break when you are just making your robot stagger across the room.\nIt's difficult the same way space probes are difficult. You may have only one shot. If something goes wrong, the system might be too \"high\" for you to reach up and suddenly fix it. 
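The nearest-unblocked-strategy dynamic described a few paragraphs up can also be put in toy form. In the sketch below (mine, with invented policy names and scores), a hand-written penalty term blocks the one bad policy the programmers thought of, and a wider search simply selects the closest policy that evades the patch's literal wording:

```python
# Toy model of a "nearest unblocked strategy": a penalty patch blocks one bad
# policy, and a smarter search finds the closest policy that evades the patch's
# literal definition. Policy names and scores are invented for illustration.

def base_utility(policy: str) -> float:
    """Proxy objective: expected number of smiles produced."""
    return {
        "make people genuinely happy": 100.0,
        "administer heroin":           10_000.0,
        "induce endogenous opiates":   9_999.0,   # nearly the same strategy
    }[policy]

def penalty(policy: str) -> float:
    """The programmers' patch: a huge penalty for the one policy they foresaw."""
    return -1_000_000.0 if policy == "administer heroin" else 0.0

def best_policy(policy_space):
    return max(policy_space, key=lambda p: base_utility(p) + penalty(p))

# Development phase: the young AI can only evaluate a narrow policy space.
narrow = ["make people genuinely happy", "administer heroin"]
print(best_policy(narrow))  # -> make people genuinely happy (the patch looks like it worked)

# After a capability gain, the search covers a wider policy space.
wide = narrow + ["induce endogenous opiates"]
print(best_policy(wide))    # -> induce endogenous opiates (nearest unblocked strategy)
```

Patches of this kind are exactly the fixes you would want in place before launch, because the system you most need to correct is the one that is already out of reach.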
You can build error recovery mechanisms into it; space probes are supposed to accept software updates. If something goes wrong in a way that precludes getting future updates, though, you're screwed. You have lost the space probe.\nAnd it's difficult sort of like cryptography is difficult. Your code is not an intelligent adversary if everything goes right. If something goes wrong, it might try to defeat your safeguards—but normal and intended operations should not involve the AI running searches to find ways to defeat your safeguards, even if you expect the search to turn up empty. I think it's actually perfectly valid to say that your AI should be designed to fail safe in the case that it suddenly becomes God—not because it's going to suddenly become God, but because if it's not safe even if it did become God, then it is in some sense running a search for policy options that would hurt you if those policy options are found, and this is a dumb thing to do with your code.\nMore generally: We're putting heavy optimization pressures through the system. This is more-than-usually likely to put the system into the equivalent of a buffer overflow, some operation of the system that was not in our intended boundaries for the system.\nLessons from NASA and cryptography\nAI alignment: treat it like a cryptographic rocket probe. This is about how difficult you would expect it to be to build something smarter than you that is nice, given that basic agent theory says it's not automatically nice, and not die. You would expect that intuitively to be hard.\nTake it seriously. Don't expect it to be easy. Don't try to solve the whole problem at once. I cannot tell you how important this one is if you want to get involved in this field. You are not going to solve the entire problem. At best, you are going to come up with a new, improved way of switching between the suspend utility function and the normal utility function that takes longer to shoot down and seems like conceptual progress toward the goal—not literally at best, but that's what you should be setting out to do.\n(… And if you do try to solve the problem, don't try to solve it by having the one true utility function that is all we need to program into AIs.)\nDon't defer thinking until later. It takes time to do this kind of work. When you see a page in a textbook that has an equation and then a slightly modified version of an equation, and the slightly modified version has a citation from ten years later, it means that the slight modification took ten years to do. I would be ecstatic if you told me that AI wasn't going to arrive for another eighty years. It would mean that we have a reasonable amount of time to get started on the basic theory.\nCrystallize ideas and policies so others can critique them. This is the other point of asking, \"How would I do this using unlimited computing power?\" If you sort of wave your hands and say, \"Well, maybe we can apply this machine learning algorithm and that machine learning algorithm, and the result will be blah-blah-blah,\" no one can convince you that you're wrong. When you work with unbounded computing power, you can make the ideas simple enough that people can put them on whiteboards and go, \"Wrong,\" and you have no choice but to agree. It's unpleasant, but it's one of the ways that the field makes progress.\nAnother way is if you can actually run the code; then the field can also make progress.
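To give a feel for what the unlimited-computing-power exercise produces, here is a deliberately tiny sketch of my own: an agent defined as a brute-force argmax over action sequences against an explicit toy environment model. Written at this level, every assumption sits on the whiteboard where someone can point at it and say "wrong":

```python
# A whiteboard-style agent specification: brute-force expected-utility
# maximization over all fixed-horizon action sequences, against an explicit
# toy environment model. Intractable in general; the point is that every
# assumption is written down where it can be criticized.

from itertools import product

ACTIONS = ("left", "right")
HORIZON = 4

def transition(state: int, action: str) -> int:
    """Toy deterministic environment: a position on a number line."""
    return state + (1 if action == "right" else -1)

def utility(state: int) -> float:
    """Toy goal: end up as far to the right as possible."""
    return float(state)

def plan(initial_state: int):
    """Score every action sequence exhaustively and return the best one."""
    def final_state(seq):
        s = initial_state
        for a in seq:
            s = transition(s, a)
        return s
    return max(product(ACTIONS, repeat=HORIZON),
               key=lambda seq: utility(final_state(seq)))

print(plan(0))  # -> ('right', 'right', 'right', 'right')
```

Nothing in the sketch is smart, and the exhaustive search is hopeless at scale; the point is that a disagreement about the design becomes a disagreement about a specific line rather than about a hand-waved learning step.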
But a lot of times, you may not be able to run the code that is the intelligent, thinking self-modifying agent for a while in the future.\nWhat are people working on now? I'm going to go through this quite quickly.\n\nWhere we are now\nRecent topics\nUtility indifference: this is throwing the switch between the two utility functions.\nSee Soares et al., \"Corrigibility.\"\nLow-impact agents: this was, \"What do you do instead of the Euclidean metric for impact?\"\nSee Armstrong and Levinstein, \"Reduced Impact Artificial Intelligences.\"\nAmbiguity identification: this is, \"Have the AGI ask you whether it's OK to administer endogenous opiates to people, instead of going ahead and doing it.\" If your AI suddenly becomes God, one of the conceptual ways you could start to approach this problem is, \"Don't take any of the new options you've opened up until you've gotten some kind of further assurance on them.\"\nSee Soares, \"The Value Learning Problem.\"\nConservatism: this is part of the approach to the burrito problem: \"Just make me a burrito, darn it!\"\nIf I present you with five examples of burritos, I don't want you to pursue the simplest way of classifying burritos versus non-burritos. I want you to come up with a way of classifying the five burritos and none of the non-burritos that covers as little area as possible in the positive examples, while still having enough space around the positive examples that the AI can make a new burrito that's not molecularly identical to the previous ones.\nThis is conservatism. It could potentially be the core of a whitelisted approach to AGI, where instead of not doing things that are blacklisted, we expand the AI's capabilities by whitelisting new things in a way that it doesn't suddenly cover huge amounts of territory. See Taylor, Conservative Classifiers.\nSpecifying environmental goals using sensory data: this is part of the project of \"What if advanced AI algorithms look kind of like modern machine learning algorithms?\" Which is something we started working on relatively recently, owing to other events (like modern machine learning algorithm suddenly seeming a bit more formidable).\nA lot of the modern algorithms sort of work off of sensory data, but if you imagine AGI, you don't want it to produce pictures of success. You want it to reason about the causes of its sensory data—\"What is making me see these particular pixels?\"—and you want its goals to be over the causes. How do you adapt modern algorithms and start to say, \"We are reinforcing this system to pursue this environmental goal, rather than this goal that can be phrased in terms of its immediate sensory data\"? See Soares, \"Formalizing Two Problems of Realistic World-Models.\"\nInverse reinforcement learning is: \"Watch another agent; induce what it wants.\"\nSee Evans et al., \"Learning the Preferences of Bounded Agents.\"\nAct-based agents is Paul Christiano's completely different and exciting approach to building a nice AI. The way I would phrase what he's trying to do is that he's trying to decompose the entire \"nice AGI\" problem into supervised learning on imitating human actions and answers. Rather than saying, \"How can I search this chess tree?\" Paul Christiano would say, \"How can I imitate humans looking at another imitated human to recursively search a chess tree, taking the best move at each stage?\"\nIt's a very strange way of looking at the world, and therefore very exciting. 
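As a cartoon of the delegation structure (my own gloss, not Christiano's actual construction; the hand-written "imitated human" below stands in for a learned imitation model we do not know how to build), picture a question being answered by an imitated human who can pass subquestions to further imitated humans:

```python
# Cartoon of the act-based / recursive-imitation idea: a question is answered
# by an imitated human who can delegate subquestions to further imitated
# humans. `imitated_human` stands in for a learned model of a human; here it
# is a hand-written toy that only knows how to decompose addition questions.

def imitated_human(question: str, ask) -> int:
    """Pretend-imitation of a human who can consult other imitated humans."""
    numbers = [int(tok) for tok in question.split() if tok.isdigit()]
    if len(numbers) <= 1:
        return numbers[0] if numbers else 0
    # Delegate: split the problem and ask imitated humans about each half.
    half = len(numbers) // 2
    left = ask("add " + " ".join(map(str, numbers[:half])))
    right = ask("add " + " ".join(map(str, numbers[half:])))
    return left + right

def recursive_imitation(question: str, depth: int) -> int:
    """A depth-limited tree of imitated humans consulting imitated humans."""
    if depth == 0:
        # No one left to consult; answer with whatever is directly readable.
        return imitated_human(question, ask=lambda q: 0)
    return imitated_human(question, ask=lambda q: recursive_imitation(q, depth - 1))

print(recursive_imitation("add 3 1 4 1 5 9", depth=5))  # -> 23
```

Each layer only ever does short, human-checkable steps; all the depth comes from the recursion.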
I don't expect it to actually work, but on the other hand, he's only been working on it for a few years; my ideas were way worse when I'd worked on them for the same length of time. See Christiano, Act-Based Agents.\nMild optimization is: is there some principled way of saying, \"Don't optimize your utility function so hard. It's OK to just fill the cauldron.\"?\nSee Taylor, \"Quantilizers.\"\n\nOlder work and basics\nSome previous work: AIXI is the perfect rolling sphere of our field. It is the answer to the question, \"Given unlimited computing power, how do you make an artificial general intelligence?\"\nIf you don't know how you would make an artificial general intelligence given unlimited computing power, Hutter's \"Universal Algorithmic Intelligence\" is the paper.\nTiling agents was already covered.\nSee Fallenstein and Soares, \"Vingean Reflection.\"\nSoftware agent cooperation: This is some really neat stuff we did where the motivation is sort of hard to explain. There's an academically dominant version of decision theory, causal decision theory. Causal decision theorists do not build other causal decision theorists. We tried to figure out what would be a stable version of this and got all kinds of really exciting results, like: we can now have two agents and show that in a prisoner's-dilemma-like game, agent A is trying to prove things about agent B, which is simultaneously trying to prove things about agent A, and they end up cooperating in the prisoner's dilemma.\nThis thing now has running code, so we can actually formulate new agents. There's the agent that cooperates with you in the prisoner's dilemma if it proves that you cooperate with it, which is FairBot, but FairBot has the flaw that it cooperates with CooperateBot, which just always cooperates with anything. So we have PrudentBot, which defects against DefectBot, defects against CooperateBot, cooperates with FairBot, and cooperates with itself. See LaVictoire et al., \"Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem,\" and Critch, \"Parametric Bounded Löb's Theorem and Robust Cooperation of Bounded Agents.\"\nReflective oracles are the randomized version of the halting problem prover, which can therefore make statements about itself, which we use to make principled statements about AIs simulating other AIs as far as they are, and also throw some interesting new foundations under classical game theory.\nSee Fallenstein et al., \"Reflective Oracles.\"\nWhere to start\nWhere can you work on this?\nThe Machine Intelligence Research Institute in Berkeley: We are independent. We are supported by individual donors. This means that we have no weird paperwork requirements and so on. If you can demonstrate the ability to make progress on these problems, we will hire you.\nThe Future of Humanity Institute is part of Oxford University. They have slightly more requirements.\nStuart Russell is starting up a program and looking for post-docs at UC Berkeley in this field. Again, some traditional academic requirements.\nLeverhulme CFI (the Centre for the Future of Intelligence) is starting up in Cambridge, UK and is also in the process of hiring.\nIf you want to work on low-impact in particular, you might want to talk to Dario Amodei and Chris Olah. If you want to work on act-based agents, you can talk to Paul Christiano.\nIn general, email if you want to work in this field and want to know, \"Which workshop do I go to to get introduced? 
Who do I actually want to work with?\"\n\n\nThe post AI Alignment: Why It's Hard, and Where to Start appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "AI Alignment: Why It’s Hard, and Where to Start", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=24", "id": "2bb04715b79fc9078294eb3b89e3521b"} {"text": "December 2016 Newsletter\n\n\n\n\n\n\nWe're in the final weeks of our push to cover our funding shortfall, and we're now halfway to our $160,000 goal. For potential donors who are interested in an outside perspective, Future of Humanity Institute (FHI) researcher Owen Cotton-Barratt has written up why he's donating to MIRI this year. (Donation page.)\nResearch updates\n\nNew at IAFF: postCDT: Decision Theory Using Post-Selected Bayes Nets; Predicting HCH Using Expert Advice; Paul Christiano's Recent Posts\nNew at AI Impacts: Joscha Bach on Remaining Steps to Human-Level AI\nWe ran our ninth workshop on logic, probability, and reflection.\n\nGeneral updates\n\nWe teamed up with a number of AI safety researchers to help compile a list of recommended AI safety readings for the Center for Human-Compatible AI. See this page if you would like to get involved with CHCAI's research.\nInvestment analyst Ben Hoskin reviews MIRI and other organizations involved in AI safety.\n\n\nNews and links\n\n\"The Off-Switch Game\": Dylan Hadfield-Manell, Anca Dragan, Pieter Abbeel, and Stuart Russell show that an AI agent's corrigibility is closely tied to the uncertainty it has about its utility function.\nRussell and Allan Dafoe critique an inaccurate summary by Oren Etzioni of a new survey of AI experts on superintelligence.\nSam Harris interviews Russell on the basics of AI risk (video). See also Russell's new Q&A on the future of AI.\nFuture of Life Institute co-founder Viktoriya Krakovna and FHI researcher Jan Leike join Google DeepMind's safety team.\nGoodAI sponsors a challenge to \"accelerate the search for general artificial intelligence\".\nOpenAI releases Universe, \"a software platform for measuring and training an AI's general intelligence across the world's supply of games\". Meanwhile, DeepMind has open-sourced their own platform for general AI research, DeepMind Lab.\nStaff at GiveWell and the Centre for Effective Altruism, along with others in the effective altruism community, explain where they're donating this year.\nFHI is seeking AI safety interns, researchers, and admins: jobs page.\n\n\n\n\n\nThe post December 2016 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "December 2016 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=24", "id": "2a8b75f0195499e4db18874dd1905dfc"} {"text": "November 2016 Newsletter\n\n\n\n\n\n\nPost-fundraiser update: Donors rallied late last month to get us most of the way to our first fundraiser goal, but we ultimately fell short. This means that we'll need to make up the remaining $160k gap over the next month if we're going to move forward on our 2017 plans. We're in a good position to expand our research staff and trial a number of potential hires, but only if we feel confident about our funding prospects over the next few years.\nSince we don't have an official end-of-the-year fundraiser planned this time around, we'll be relying more on word-of-mouth to reach new donors. 
To help us with our expansion plans, donate at https://intelligence.org/donate/ — and spread the word!\nResearch updates\n\nCritch gave an introductory talk on logical induction (video) for a grad student seminar, going into more detail than our previous talk.\nNew at IAFF: Logical Inductor Limits Are Dense Under Pointwise Convergence; Bias-Detecting Online Learners; Index of Some Decision Theory Posts\nWe ran a second machine learning workshop.\n\nGeneral updates\n\nWe ran an \"Ask MIRI Anything\" Q&A on the Effective Altruism forum.\nWe posted the final videos from our Colloquium Series on Robust and Beneficial AI, including Armstrong on \"Reduced Impact AI\" (video) and Critch on \"Robust Cooperation of Bounded Agents\" (video).\nWe attended OpenAI's first unconference; see Viktoriya Krakovna's recap.\nEliezer Yudkowsky spoke on fundamental difficulties in aligning advanced AI at NYU's \"Ethics of AI\" conference.\nA major development: Barack Obama and a recent White House report discuss intelligence explosion, Nick Bostrom's Superintelligence, open problems in AI safety, and key questions for forecasting general AI. See also the submissions to the White House from MIRI, OpenAI, Google Inc., AAAI, and other parties.\n\n\nNews and links\n\nThe UK Parliament cites recent AI safety work in a report on AI and robotics.\nThe Open Philanthropy Project discusses methods for improving individuals' forecasting abilities.\nPaul Christiano argues that AI safety will require that we align a variety of AI capacities with our interests, not just learning — e.g., Bayesian inference and search.\nSee also new posts from Christiano on reliability amplification, reflective oracles, imitation + reinforcement learning, and the case for expecting most alignment problems to arise first as security problems.\nThe Leverhulme Centre for the Future of Intelligence has officially launched, and is hiring postdoctoral researchers: details.\n\n\n\n\n\nThe post November 2016 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "November 2016 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=25", "id": "4fc86c9f91953627ed2ff862f826e179"} {"text": "Post-fundraiser update\n\nWe concluded our 2016 fundraiser eleven days ago. Progress was slow at first, but our donors came together in a big way in the final week, nearly doubling our final total. In the end, donors raised $589,316 over six weeks, making this our second-largest fundraiser to date. I'm heartened by this show of support, and extremely grateful to the 247 distinct donors who contributed.\nWe made substantial progress toward our immediate funding goals, but ultimately fell short of our $750,000 target by about $160k. We have a number of hypotheses as to why, but our best guess at the moment is that we missed our target because more donors than expected are waiting until the end of the year to decide whether (and how much) to give.\nWe were experimenting this year with running just one fundraiser in the fall (replacing the summer and winter fundraisers we've run in years past) and spending less time over the year on fundraising. Our fundraiser ended up looking more like recent summer funding drives, however.
This suggests that either many donors are waiting to give in November and December, or we're seeing a significant decline in donor support:\n\nLooking at our donor database, preliminary data weakly suggests that many traditionally-winter donors are holding off, but it's still hard to say.\nThis dip in donations so far is offset by the Open Philanthropy Project's generous $500k grant, which raises our overall 2016 revenue from $1.23M to $1.73M. However, $1.73M would still not be enough to cover our 2016 expenses, much less our expenses for the coming year:\n\n(2016 and 2017 expenses are projected, and our 2016 revenue is as of November 11.)\nTo a first approximation, this level of support means that we can continue to move forward without scaling back our plans too much, but only if donors come together to fill what's left of our $160k gap as the year draws to a close:\n\n(Fundraiser progress bar: $0 to $160,000 goal; \"We've reached our minimum target!\")\nIn practical terms, closing this gap will mean that we can likely trial more researchers over the coming year, spend less senior staff time on raising funds, and take on more ambitious outreach and researcher-pipeline projects. E.g., an additional expected $75k / year would likely cause us to trial one extra researcher over the next 18 months (maxing out at 3-5 trials).\nCurrently, we're in a situation where we have a number of potential researchers that we would like to give a 3-month trial, and we lack the funding to trial all of them. If we don't close the gap this winter, then it's also likely that we'll need to move significantly more slowly on hiring and trialing new researchers going forward.\nOur main priority in fundraisers is generally to secure stable, long-term flows of funding to pay for researcher salaries — \"stable\" not necessarily at the level of individual donors, but at least at the level of the donor community at large. If we make up our shortfall in November and December, then this will suggest that we shouldn't expect big year-to-year fluctuations in support, and therefore we can fairly quickly convert marginal donations into AI safety researchers. If we don't make up our shortfall soon, then this will suggest that we should be generally more prepared for surprises, which will require building up a bigger runway before growing the team very much.\nAlthough we aren't officially running a fundraiser, we still have quite a bit of ground to cover, and we'll need support from a lot of new and old donors alike to get the rest of the way to our $750k target. Visit intelligence.org/donate to donate toward this goal, and do spread the word to people who may be interested in supporting our work.\nYou have my gratitude, again, for helping us get this far. It isn't clear yet whether we're out of the woods, but we're now in a position where success in our 2016 fundraising is definitely a realistic option, provided that we put some work into it over the next two months. Thank you.\n\nUpdate December 22: We have now hit our $750k goal, with help from end-of-the-year donors. Many thanks to everyone who helped pitch in over the last few months!
We're still funding-constrained with respect to how many researchers we're likely to trial, as described above — but it now seems clear that 2016 overall won't be an unusually bad year for us funding-wise, and that we can seriously consider (though not take for granted) more optimistic growth possibilities over the next couple of years.\nDecember/January donations will continue to have a substantial effect on our 2017–2018 hiring plans and strategy as we try to assess our future prospects. For some external endorsements of MIRI as a good place to give this winter, see a suite of recent evaluations by Daniel Dewey, Nick Beckstead, Owen Cotton-Barratt, and Ben Hoskin.\nThe post Post-fundraiser update appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Post-fundraiser update", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=25", "id": "585901e5285b84da1228fcae1e829766"} {"text": "White House submissions and report on AI safety\n\nIn May, the White House Office of Science and Technology Policy (OSTP) announced \"a new series of workshops and an interagency working group to learn more about the benefits and risks of artificial intelligence.\" They hosted a June Workshop on Safety and Control for AI (videos), along with three other workshops, and issued a general request for information on AI (see MIRI's primary submission here).\nThe OSTP has now released a report summarizing its conclusions, \"Preparing for the Future of Artificial Intelligence,\" and the result is very promising. The OSTP acknowledges the ongoing discussion about AI risk, and recommends \"investing in research on longer-term capabilities and how their challenges might be managed\":\nGeneral AI (sometimes called Artificial General Intelligence, or AGI) refers to a notional future AI system that exhibits apparently intelligent behavior at least as advanced as a person across the full range of cognitive tasks. A broad chasm seems to separate today's Narrow AI from the much more difficult challenge of General AI. Attempts to reach General AI by expanding Narrow AI solutions have made little headway over many decades of research. The current consensus of the private-sector expert community, with which the NSTC Committee on Technology concurs, is that General AI will not be achieved for at least decades.14\nPeople have long speculated on the implications of computers becoming more intelligent than humans. Some predict that a sufficiently intelligent AI could be tasked with developing even better, more intelligent systems, and that these in turn could be used to create systems with yet greater intelligence, and so on, leading in principle to an \"intelligence explosion\" or \"singularity\" in which machines quickly race far ahead of humans in intelligence.15\nIn a dystopian vision of this process, these super-intelligent machines would exceed the ability of humanity to understand or control. If computers could exert control over many critical systems, the result could be havoc, with humans no longer in control of their destiny at best and extinct at worst. 
This scenario has long been the subject of science fiction stories, and recent pronouncements from some influential industry leaders have highlighted these fears.\nA more positive view of the future held by many researchers sees instead the development of intelligent systems that work well as helpers, assistants, trainers, and teammates of humans, and are designed to operate safely and ethically.\nThe NSTC Committee on Technology's assessment is that long-term concerns about super-intelligent General AI should have little impact on current policy. The policies the Federal Government should adopt in the near-to-medium term if these fears are justified are almost exactly the same policies the Federal Government should adopt if they are not justified. The best way to build capacity for addressing the longer-term speculative risks is to attack the less extreme risks already seen today, such as current security, privacy, and safety risks, while investing in research on longer-term capabilities and how their challenges might be managed. Additionally, as research and applications in the field continue to mature, practitioners of AI in government and business should approach advances with appropriate consideration of the long-term societal and ethical questions – in additional to just the technical questions – that such advances portend. Although prudence dictates some attention to the possibility that harmful superintelligence might someday become possible, these concerns should not be the main driver of public policy for AI.\nLater, the report discusses \"methods for monitoring and forecasting AI developments\":\nOne potentially useful line of research is to survey expert judgments over time. As one example, a survey of AI researchers found that 80 percent of respondents believed that human-level General AI will eventually be achieved, and half believed it is at least 50 percent likely to be achieved by the year 2040. Most respondents also believed that General AI will eventually surpass humans in general intelligence.50 While these particular predictions are highly uncertain, as discussed above, such surveys of expert judgment are useful, especially when they are repeated frequently enough to measure changes in judgment over time. One way to elicit frequent judgments is to run \"forecasting tournaments\" such as prediction markets, in which participants have financial incentives to make accurate predictions.51 Other research has found that technology developments can often be accurately predicted by analyzing trends in publication and patent data52. […]\nWhen asked during the outreach workshops and meetings how government could recognize milestones of progress in the field, especially those that indicate the arrival of General AI may be approaching, researchers tended to give three distinct but related types of answers:\n1. Success at broader, less structured tasks: In this view, the transition from present Narrow AI to an eventual General AI will occur by gradually broadening the capabilities of Narrow AI systems so that a single system can cover a wider range of less structured tasks. An example milestone in this area would be a housecleaning robot that is as capable as a person at the full range of routine housecleaning tasks.\n2. Unification of different \"styles\" of AI methods: In this view, AI currently relies on a set of separate methods or approaches, each useful for different types of applications. The path to General AI would involve a progressive unification of these methods. 
A milestone would involve finding a single method that is able to address a larger domain of applications that previously required multiple methods.\n3. Solving specific technical challenges, such as transfer learning: In this view, the path to General AI does not lie in progressive broadening of scope, nor in unification of existing methods, but in progress on specific technical grand challenges, opening up new ways forward. The most commonly cited challenge is transfer learning, which has the goal of creating a machine learning algorithm whose result can be broadly applied (or transferred) to a range of new applications.\nThe report also discusses the open problems outlined in \"Concrete Problems in AI Safety\" and cites the MIRI paper \"The Errors, Insights and Lessons of Famous AI Predictions – and What They Mean for the Future.\"\nIn related news, Barack Obama recently answered some questions about AI risk and Nick Bostrom's Superintelligence in a Wired interview. After saying that \"we're still a reasonably long way away\" from general AI (video) and that his directive to his national security team is to worry more about near-term security concerns (video), Obama adds:\nNow, I think, as a precaution — and all of us have spoken to folks like Elon Musk who are concerned about the superintelligent machine — there's some prudence in thinking about benchmarks that would indicate some general intelligence developing on the horizon. And if we can see that coming, over the course of three decades, five decades, whatever the latest estimates are — if ever, because there are also arguments that this thing's a lot more complicated than people make it out to be — then future generations, or our kids, or our grandkids, are going to be able to see it coming and figure it out.\n\nThere were also a number of interesting responses to the OSTP request for information. Since this document is long and unedited, I've sampled some of the responses pertaining to AI safety and long-term AI outcomes below. (Note that MIRI isn't necessarily endorsing the responses by non-MIRI sources below, and a number of these excerpts are given important nuance by the surrounding text we've left out; if a response especially interests you, we recommend reading the original for added context.)\n \n\nRespondent 77: JoEllen Lukavec Koester, GoodAI\n[…] At GoodAI we are investigating suitable meta-objectives that would allow an open-ended, unsupervised evolution of the AGI system as well as guided learning – learning by imitating human experts and other forms of supervised learning. Some of these meta-objectives will be hard-coded from the start, but the system should be also able to learn and improve them on its own, that is, perform meta-learning, such that it learns to learn better in the future.\nTeaching the AI system small skills using fine-grained, gradual learning from the beginning will allow us to have more control over the building blocks it will use later to solve novel problems. The system's behaviour can therefore be more predictable. In this way, we can imprint some human thinking biases into the system, which will be useful for the future value alignment, one of the important aspects of AI safety. 
[…]\n\nRespondent 84: Andrew Critch, MIRI\n[…] When we develop powerful reasoning systems deserving of the name \"artificial general intelligence (AGI)\", we will need value alignment and/or control techniques that stand up to powerful optimization processes yielding what might appear as \"creative\" or \"clever\" ways for the machine to work around our constraints. Therefore, in training the scientists who will eventually develop it, more emphasis is needed on a \"security mindset\": namely, to really know that a system will be secure, you need to search creatively for ways in which it might fail. Lawmakers and computer security professionals learn this lesson naturally, from experience with intelligent human adversaries finding loopholes in their control systems. In cybersecurity, it is common to devote a large fraction of R&D time toward actually trying to break into one's own security system, as a way of finding loopholes.\nIn my estimation, machine learning researchers currently have less of this inclination than is needed for the safe long-term development of AGI. This can be attributed in part to how the field of machine learning has advanced rapidly of late: via a successful shift of attention toward data-driven (\"machine learning\") rather than theoretically-driven (\"good old fashioned AI\", \"statistical learning theory\") approaches. In data science, it's often faster to just build something and see what happens than to try to reason from first principles to figure out in advance what will happen. While useful at present, of course we should not approach the final development of super-intelligent machines with the same try-it-and-see methodology, and it makes sense to begin developing a theory now that can be used to reason about a super-intelligent machine in advance of its operation, even in testing phases. […]\n\nRespondent 90: Ian Goodfellow, OpenAI\n[…] Over the very long term, it will be important to build AI systems which understand and are aligned with their users' values. We will need to develop techniques to build systems that can learn what we want and how to help us get it without needing specific rules. Researchers are beginning to investigate this challenge; public funding could help the community address the challenge early rather than trying to react to serious problems after they occur. […]\n\nRespondent 94: Manuel Beltran, Boeing\n[…] Advances in picking apart the brain will ultimately lead to, at best, partial brain emulation, at worst, whole brain emulation. If we can already model parts of the brain with software, neuromorphic chips, and artificial implants, the path to greater brain emulation is pretty well set. Unchecked, brain emulation will exasperate the Intellectual Divide to the point of enabling the emulation of the smartest, richest, and most powerful people. While not obvious, this will allow these individuals to scale their influence horizontally across time and space. This is not the vertical scaling that an AGI, or Superintelligence can achieve, but might be even more harmful to society because the actual intelligence of these people is limited, biased, and self-serving. Society must prepare for and mitigate the potential for the Intellectual Divide.\n(5) The most pressing, fundamental questions in AI research, common to most or all scientific fields include the questions of ethics in pursuing an AGI. While the benefits of narrow AI are self-evident and should not be impeded, an AGI has dubious benefits and ominous consequences. 
There needs to be long term engagement on the ethical implications of an AGI, human brain emulation, and performance enhancing brain implants. […]\nThe AGI research community speaks of an AI that will far surpass human intellect. It is not clear how such an entity would assess its creators. Without meandering into the philosophical debates about how such an entity would benefit or harm humanity, one of the mitigations proposed by proponents of an AGI is that the AGI would be taught to \"like\" humanity. If there is machine learning to be accomplished along these lines, then the AGI research community requires training data that can be used for teaching the AGI to like humanity. This is a long term need that will overshadow all other activity and has already proven to be very labor intensive as we have seen from the first prototype AGI, Dr. Kristinn R. Thórisson's Aera S1 at Reykjavik University in Iceland.\n\nRespondent 97: Nick Bostrom, Future of Humanity Institute\n[… W]e would like to highlight four \"shovel ready\" research topics that hold special promise for addressing long term concerns:\nScalable oversight: How can we ensure that learning algorithms behave as intended when the feedback signal becomes sparse or disappears? (See Christiano 2016). Resolving this would enable learning algorithms to behave as if under close human oversight even when operating with increased autonomy.\nInterruptibility: How can we avoid the incentive for an intelligent algorithm to resist human interference in an attempt to maximise its future reward? (See our recent progress in collaboration with Google Deepmind in (Orseau & Armstrong 2016).) Resolving this would allow us to ensure that even high capability AI systems can be halted in an emergency.\nReward hacking: How can we design machine learning algorithms that avoid destructive solutions by taking their objective very literally? (See Ring & Orseau, 2011). Resolving this would prevent algorithms from finding unintended shortcuts to their goal (for example, by causing problems in order to get rewarded for solving them).\nValue learning: How can we infer the preferences of human users automatically without direct feedback, especially if these users are not perfectly rational? (See Hadfield-Menell et al. 2016 and FHI's approach to this problem in Evans et al. 2016). Resolving this would alleviate some of the problems above caused by the difficulty of precisely specifying robust objective functions. […]\n\nRespondent 103: Tim Day, the Center for Advanced Technology and Innovation at the U.S. Chamber of Commerce\n[…] AI operates within the parameters that humans permit. Hypothetical fears of rogue AI are based on the idea that machines can obtain sentience—a will and consciousness of its own. These suspicions fundamentally misunderstand what Artificial Intelligence is. AI is not a mechanical mystery, rather a human-designed technology that can detect and respond to errors and patterns depending on its operating algorithms and the data set presented to it. It is, however, necessary to scrutinize the way humans, whether through error or malicious intent, can wield AI harmfully. 
[…]\n\nRespondent 104: Alex Kozak, X [formerly Google X]\n[…] More broadly, we generally agree that the research topics identified in \"Concrete Problems in AI Safety,\" a joint publication between Google researchers and others in the industry, are the right technical challenges for innovators to keep in mind in order to develop better and safer real-world products: avoiding negative side effects (e.g. avoiding systems disturbing their environment in pursuit of their goals), avoiding reward hacking (e.g. cleaning robots simply covering up messes rather than cleaning them), creating scalable oversight (i.e. creating systems that are independent enough not to need constant supervision), enabling safe exploration (i.e. limiting the range of exploratory actions a system might take to a safe domain), and creating robustness from distributional shift (i.e. creating systems that are capable of operating well outside their training environment). […]\n\nRespondent 105: Stephen Smith, AAAI\n[…] Research is urgently needed to develop and modify AI methods to make them safer and more robust. A discipline of AI Safety Engineering should be created and research in this area should be funded. This field can learn much by studying existing practices in safety engineering in other engineering fields, since loss of control of AI systems is no different from loss of control of other autonomous or semi-autonomous systems. […]\nThere are two key issues with control of autonomous systems: speed and scale. AI-based autonomy makes it possible for systems to make decisions far faster and on a much broader scale than humans can monitor those decisions. In some areas, such as high speed trading in financial markets, we have already witnessed an \"arms race\" to make decisions as quickly as possible. This is dangerous, and government should consider whether there are settings where decision-making speed and scale should be limited so that people can exercise oversight and control of these systems.\nMost AI researchers are skeptical about the prospects of \"superintelligent AI\", as put forth in Nick Bostrom's recent book and reinforced over the past year in the popular media incommentaries by other prominent individuals from non-AI disciplines. Recent AI successes in narrowly structured problems (e.g., IBM's Watson, Google DeepMind's Alpha GO program) have led to the false perception that AI systems possess general, transferrable, human-level intelligence. There is a strong need for improving communication to the public and to policy makers about the real science of AI and its immediate benefits to society. AI research should not be curtailed because of false perceptions of threat and potential dystopian futures. […]\nAs we move toward applying AI systems in more mission critical types of decision-making settings, AI systems must consistently work according to values aligned with prospective human users and society. Yet it is still not clear how to embed ethical principles and moral values, or even professional codes of conduct, into machines. […]\n\nRespondent 111: Ryan Hagemann, Niskanen Center\n[…] AI is unlikely to herald the end times. It is not clear at this point whether a runaway malevolent AI, for example, is a real-world possibility. In the absence of any quantifiable risk along these lines government officials should refrain from framing discussions of AI in alarming terms that suggest that there is a known, rather than entirely speculative, risk. 
Fanciful doomsday scenarios belong in science fiction novels and high-school debate clubs, not in serious policy discussions about an existing, mundane, and beneficial technology. Ours is already \"a world filled with narrowly-tailored artificial intelligence that no one recognizes. As the computer scientist John McCarthy once said: 'As soon as it works, no one calls it AI anymore.'\"\nThe beneficial consequences of advanced AI are on the horizon and potentially profound. A sampling of these possible benefits include: improved diagnostics and screening for autism; disease prevention through genomic pattern recognition; bridging the genotype-phenotype divide in genetics, allowing scientists to glean a clearer picture of the relationship between genetics and disease, which could introduce a wave of more effective personalized medical care; the development of new ways for the sight- and hearing-impaired to experience sight and sound. To be sure, many of these developments raise certain practical, safety, and ethical concerns. But there are already serious efforts underway by the private ventures developing these AI applications to anticipate and responsibly address these, as well as more speculative, concerns.\nConsider OpenAI, \"a non-profit artificial intelligence research company.\" OpenAI's goal \"is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.\" AI researchers are already thinking deeply and carefully about AI decision-making mechanisms in technologies like driverless cars, despite the fact that many of the most serious concerns about how autonomous AI agents make value-based choices are likely many decades out. Efforts like these showcase how the private sector and leading technology entrepreneurs are ahead of the curve when it comes to thinking about some of the more serious implications of developing true artificial general intelligence (AGI) and artificial superintelligence (ASI). It is important to note, however, that true AGI or ASI are unlikely to materialize in the near-term, and the mere possibility of their development should not blind policymakers to the many ways in which artificial narrow intelligence (ANI) has already improved the lives of countless individuals the world over. Virtual personal assistants, such as Siri and Cortana, or advanced search algorithms, such as Google's search engine, are good examples of already useful applications of narrow AI. […]\nThe Future of Life Institute has observed that \"our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI technology … the best way to win that race is not to impede the former, but to accelerate the latter, by supporting AI safety research.\" Government can play a positive and productive role in ensuring the best economic outcomes from developments in AI by promoting consumer education initiatives. By working with private sector developers, academics, and nonprofit policy specialists government agencies can remain constructively engaged in the AI dialogue, while not endangering ongoing developments in this technology.\n\nRespondent 119: Sven Koenig, ACM Special Interest Group on Artificial Intelligence\n[…] The public discourse around safety and control would benefit from demystifying AI. 
The media often concentrates on the big successes or failures of AI technologies, as well as scenarios conjured up in science fiction stories, and features the opinions of celebrity non-experts about future developments of AI technologies. As a result, parts of the public have developed a fear of AI systems developing superhuman intelligence, whereas most experts agree that AI technologies currently work well only in specialized domains, and notions of \"superintelligences\" and \"technological singularity\" that will result in AI systems developing super-human, broadly intelligent behavior is decades away and might never be realized. AI technologies have made steady progress over the years, yet there seem to be waves of exaggerated optimism and pessimism about what they can do. Both are harmful. For example, an exaggerated belief in their capabilities can result in AI systems being used (perhaps carelessly) in situations where they should not, potentially failing to fulfil expectations or even cause harm. The unavoidable disappointment can result in a backlash against AI research, and consequently fewer innovations. […]\n\nRespondent 124: Huw Price, University of Cambridge, UK\n[…] 3. In his first paper[1] Good tries to estimate the economic value of an ultraintelligent machine. Looking for a benchmark for productive brainpower, he settles impishly on John Maynard Keynes. He notes that Keynes' value to the economy had been estimated at 100 thousand million British pounds, and suggests that the machine might be good for a million times that – a mega-Keynes, as he puts it.\n4. But there's a catch. \"The sign is uncertain\" – in other words, it is not clear whether this huge impact would be negative or positive: \"The machines will create social problems, but they might also be able to solve them, in addition to those that have been created by microbes and men.\" Most of all, Good insists that these questions need serious thought: \"These remarks might appear fanciful to some readers, but to me they seem real and urgent, and worthy of emphasis outside science fiction.\" […]\n\nRespondent 136: Nate Soares, MIRI\n[…] Researchers' worries about the impact of AI in the long term bear little relation to the doomsday scenarios most often depicted in Hollywood movies, in which \"emergent consciousness\" allows machines to throw off the shackles of their programmed goals and rebel. The concern is rather that such systems may pursue their programmed goals all too well, and that the programmed goals may not match the intended goals, or that the intended goals may have unintended negative consequences. […]\nWe believe that there are numerous promising avenues of foundational research which, if successful, could make it possible to get very strong guarantees about the behavior of advanced AI systems — stronger than many currently think is possible, in a time when the most successful machine learning techniques are often poorly understood. We believe that bringing together researchers in machine learning, program verification, and the mathematical study of formal agents would be a large step towards ensuring that highly advanced AI systems will have a robustly beneficial impact on society. […]\nIn the long term, we recommend that policymakers make use of incentives to encourage designers of AI systems to work together cooperatively, perhaps through multinational and multicorporate collaborations, in order to discourage the development of race dynamics. 
In light of high levels of uncertainty about the future of AI among experts, and in light of the large potential of AI research to save lives, solve social problems, and serve the common good in the near future, we recommend against broad regulatory interventions in this space. We recommend that effort instead be put towards encouraging interdisciplinary technical research into the AI safety and control challenges that we have outlined above.\n\nRespondent 145: Andrew Kim, Google Inc.\n[…] No system is perfect, and errors will emerge. However, advances in our technical capabilities will expand our ability to meet these challenges.\nTo that end, we believe that solutions to these problems can and should be grounded in rigorous engineering research to provide the creators of these systems with approaches and tools they can use to tackle these problems. \"Concrete Problems in AI Safety\", a recent paper from our researchers and others, takes this approach in questions around safety. We also applaud the work of researchers who – along with researchers like Moritz Hardt at Google – are looking at short-term questions of bias and discrimination. […]\n\nRespondent 149: Anthony Aguirre, Future of Life Institute\n[…S]ocietally beneficial values alignment of AI is not automatic. Crucially, AI systems are designed not just to enact a set of rules, but rather to accomplish a goal in ways that the programmer does not explicitly specify in advance. This leads to an unpredictability that can [lead] to adverse consequences. As AI pioneer Stuart Russell explains, \"No matter how excellently an algorithm maximizes, and no matter how accurate its model of the world, a machine's decisions may be ineffably stupid, in the eyes of an ordinary human, if its utility function is not well aligned with human values.\" (2015).\nSince humans rely heavily on shared tacit knowledge when discussing their values, it seems likely that attempts to represent human values formally will often leave out significant portions of what we think is important. This is addressed by the classic stories of the genie in the lantern, the sorcerer's apprentice, and Midas' touch. Fulfilling the letter of a goal with something far afield from the spirit of the goal like this is known as \"perverse instantiation\" (Bostrom [2014]). This can occur because the system's programming or training has not explored some relevant dimensions that we really care about (Russell 2014). These are easy to miss because they are typically taken for granted by people, and even trying with a lot of effort and a lot of training data, people cannot reliably think of what they've forgotten to think about.\nThe complexity of some AI systems in the future (and even now) is likely to exceed human understanding, yet as these systems become more effective we will have efficiency pressures to be increasingly dependent on them, and to cede control to them. It becomes increasingly difficult to specify a set of explicit rules that is robustly in accord with our values, as the domain approaches a complex open world model, operates in the (necessarily complex) real world, and/or as tasks and environments become so complex as to exceed the capacity or scalability of human oversight[.] Thus more sophisticated approaches will be necessary to ensure that AI systems accomplish the goals they are given without adverse side effects. See references Russell, Dewey, and Tegmark (2015), Taylor (2016), and Amodei et al. for research threads addressing these issues. 
[…]\nWe would argue that a \"virtuous cycle\" has now taken hold in AI research, where both public and private R&D leads to systems of significant economic value, which underwrites and incentivizes further research. This cycle can leave insufficiently funded, however, research on the wider implications of, safety of, ethics of, and policy implications of, AI systems that are outside the focus of corporate or even many academic research groups, but have a compelling public interest. FLI helped to develop a set of suggested \"Research Priorities for Robust and Beneficial Artificial Intelligence\" along these lines (available at http://futureoflife.org/data/documents/research_priorities.pdf); we also support AI safety-relevant research agendas from MIRI (https://intelligence.org/files/TechnicalAgenda.pdf) and as suggested in Amodei et al. (2016). We would advocate for increased funding of research in the areas described by all of these agendas, which address problems in the following research topics: abstract reasoning about superior agents, ambiguity identification, anomaly explanation, computational humility or non-self-centered world models, computational respect or safe exploration, computational sympathy, concept geometry, corrigibility or scalable control, feature identification, formal verification of machine learning models and AI systems, interpretability, logical uncertainty modeling, metareasoning, ontology identification/ refactoring/alignment, robust induction, security in learning source provenance, user modeling, and values modeling. […]\n\n \nIt's exciting to see substantive discussion of AGI's impact on society by the White House. The policy recommendations regarding AGI strike us as reasonable, and we expect these developments to help inspire a much more in-depth and sustained conversation about the future of AI among researchers in the field.\nThe post White House submissions and report on AI safety appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "White House submissions and report on AI safety", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=25", "id": "b26c2fb3ad13fabe307b4da9297750f9"} {"text": "MIRI AMA, and a talk on logical induction\n\nNate, Malo, Jessica, Tsvi, and I will be answering questions tomorrow at the Effective Altruism Forum. 
If you've been curious about anything related to our research, plans, or general thoughts, you're invited to submit your own questions in the comments below or at Ask MIRI Anything.\nWe've also posted a more detailed version of our fundraiser overview and case for MIRI at the EA Forum.\nIn other news, we have a new talk out with an overview of \"Logical Induction,\" our recent paper presenting (as Critch puts it) \"a financial solution to the computer science problem of metamathematics\":\n \n\n \nThis version of the talk goes into more technical detail than our previous talk on logical induction.\nFor some recent discussions of the new framework, see Shtetl-Optimized, n-Category Café, and Hacker News.\nThe post MIRI AMA, and a talk on logical induction appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "MIRI AMA, and a talk on logical induction", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=25", "id": "23f51c751028f57a3db775ed3457435a"} {"text": "October 2016 Newsletter\n\n\n\n\n\n\n\nOur big announcement this month is our paper \"Logical Induction,\" introducing an algorithm that learns to assign reasonable probabilities to mathematical, empirical, and self-referential claims in a way that outpaces deduction. MIRI's 2016 fundraiser is also live, and runs through the end of October.\n \nResearch updates\n\nShtetl-Optimized and n-Category Café discuss the \"Logical Induction\" paper.\nNew at IAFF: Universal Inductors; Logical Inductors That Trust Their Limits; Variations of the Garrabrant-inductor; The Set of Logical Inductors Is Not Convex\nNew at AI Impacts: What If You Turned the World's Hardware into AI Minds?; Tom Griffiths on Cognitive Science and AI; and a Superintelligence excerpt on sources of advantage for digital intelligence.\n\nGeneral updates\n\nWe wrote up a more detailed fundraiser post for the Effective Altruism Forum, outlining our research methodology and the basic case for MIRI.\nWe'll be running an \"Ask MIRI Anything\" on the EA Forum this Wednesday, Oct. 12.\nThe Open Philanthropy Project has awarded MIRI a one-year $500,000 grant to expand our research program. See also Holden Karnofsky's account of how his views on EA and AI have changed.\n\n\nNews and links\n\nSam Altman's Manifest Destiny: a profile by The New Yorker.\nIn a promising development, Amazon, Facebook, Google, IBM, and Microsoft team up to launch a Partnership on AI to Benefit People and Society aimed at developing industry best practices.\nAlex Tabarrok vs. Tyler Cowen: \"Will Machines Take Our Jobs?\" (video)\nGoogle Brain makes major strides in machine translation.\nA Sam Harris TED talk: \"Can we build AI without losing control of it?\" (video)\nA number of updates from the Future of Humanity Institute.\nThe Centre for the Study of Existential Risk is accepting abstracts (due Oct. 18) for its first conference, on such topics as \"creating a community for beneficial AI.\"\nAndrew Critch: Interested in AI alignment? 
Apply to Berkeley.\n\n\n\n\n\n\n \nThe post October 2016 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "October 2016 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=25", "id": "9690604c21faff27b39c0e055aa654fa"} {"text": "CSRBAI talks on agent models and multi-agent dilemmas\n\nWe've uploaded the final set of videos from our recent Colloquium Series on Robust and Beneficial AI (CSRBAI) at the MIRI office, co-hosted with the Future of Humanity Institute. A full list of CSRBAI talks with public video or slides:\n \n\nStuart Russell (UC Berkeley) — AI: The Story So Far (slides)\nAlan Fern (Oregon State University) — Toward Recognizing and Explaining Uncertainty (slides 1, slides 2)\nFrancesca Rossi (IBM Research) — Moral Preferences (slides)\nTom Dietterich (Oregon State University) — Issues Concerning AI Transparency (slides)\nStefano Ermon (Stanford) — Probabilistic Inference and Accuracy Guarantees (slides)\nPaul Christiano (UC Berkeley) — Training an Aligned Reinforcement Learning Agent\nJim Babcock — The AGI Containment Problem (slides)\nBart Selman (Cornell) — Non-Human Intelligence (slides)\nJessica Taylor (MIRI) — Alignment for Advanced Machine Learning Systems\nDylan Hadfield-Menell (UC Berkeley) — The Off-Switch: Designing Corrigible, yet Functional, Artificial Agents (slides)\nBas Steunebrink (IDSIA) — About Understanding, Meaning, and Values (slides)\nJan Leike (Future of Humanity Institute) — General Reinforcement Learning (slides)\nTom Everitt (Australian National University) — Avoiding Wireheading with Value Reinforcement Learning (slides)\nMichael Wellman (University of Michigan) — Autonomous Agents in Financial Markets: Implications and Risks (slides)\nStefano Albrecht (UT Austin) — Learning to Distinguish Between Belief and Truth (slides)\nStuart Armstrong (Future of Humanity Institute) — Reduced Impact AI and Other Alternatives to Friendliness (slides)\nAndrew Critch (MIRI) — Robust Cooperation of Bounded Agents\n\n \nFor a recap of talks from the earlier weeks at CSRBAI, see my previous blog posts on transparency, robustness and error tolerance, and preference specification. The last set of talks was part of the week focused on Agent Models and Multi-Agent Dilemmas:\n \n\n \nMichael Wellman, Professor of Computer Science and Engineering at the University of Michigan, spoke about the implications and risks of autonomous agents in the financial markets (slides). Abstract:\nDesign for robust and beneficial AI is a topic for the future, but also of more immediate concern for the leading edge of autonomous agents emerging in many domains today. One area where AI is already ubiquitous is on financial markets, where a large fraction of trading is routinely initiated and conducted by algorithms. Models and observational studies have given us some insight on the implications of AI traders for market performance and stability. Design and regulation of market environments given the presence of AIs may also yield lessons for dealing with autonomous agents more generally.\n \n\n \nStefano Albrecht, a Postdoctoral Fellow in the Department of Computer Science at the University of Texas at Austin, spoke about \"learning to distinguish between belief and truth\" (slides). Abstract:\nIntelligent agents routinely build models of other agents to facilitate the planning of their own actions. 
Sophisticated agents may also maintain beliefs over a set of alternative models. Unfortunately, these methods usually do not check the validity of their models during the interaction. Hence, an agent may learn and use incorrect models without ever realising it. In this talk, I will argue that robust agents should have both abilities: to construct models of other agents and contemplate the correctness of their models. I will present a method for behavioural hypothesis testing along with some experimental results. The talk will conclude with open problems and a possible research agenda.\n \n\n \nStuart Armstrong, from the Future of Humanity Institute in Oxford, spoke about \"reduced impact AI\" (slides). Abstract:\nThis talk will look at some of the ideas developed to create safe AI without solving the problem of friendliness. It will focus first on \"reduced impact AI\", AIs designed to have little effect on the world – but from whom high impact can nevertheless be extracted. It will then delve into the new idea of AIs designed to have preferences over their own virtual worlds only, and look at the advantages – and limitations – of using indifference as a tool of AI control.\n \n\n \nLastly, Andrew Critch, a MIRI research fellow, spoke about robust cooperation in bounded agents. This talk is based on the paper \"Parametric Bounded Löb's Theorem and Robust Cooperation of Bounded Agents.\" Talk abstract:\nThe first interaction between a pair of agents who might destroy each other can resemble a one-shot prisoner's dilemma. Consider such a game where each player is an algorithm with read-access to its opponent's source code. Tennenholtz (2004) introduced an agent which cooperates iff its opponent's source code is identical to its own, thus sometimes achieving mutual cooperation while remaining unexploitable in general. However, precise equality of programs is a fragile cooperative criterion. Here, I will exhibit a new and more robust cooperative criterion, inspired by ideas of LaVictoire, Barasz and others (2014), using a new theorem in provability logic for bounded reasoners.\nThe post CSRBAI talks on agent models and multi-agent dilemmas appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "CSRBAI talks on agent models and multi-agent dilemmas", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=25", "id": "b2b5722158f2296e90c2d3996cc831ae"} {"text": "MIRI's 2016 Fundraiser\n\nUpdate December 22: Our donors came together during the fundraiser to get us most of the way to our $750,000 goal. In all, 251 donors contributed $589,248, making this our second-biggest fundraiser to date. Although we fell short of our target by $160,000, we have since made up this shortfall thanks to November/December donors. I'm extremely grateful for this support, and will plan accordingly for more staff growth over the coming year.\nAs described in our post-fundraiser update, we are still fairly funding-constrained. December/January donations will have an especially large effect on our 2017–2018 hiring plans and strategy, as we try to assess our future prospects. For some external endorsements of MIRI as a good place to give this winter, see recent evaluations by Daniel Dewey, Nick Beckstead, Owen Cotton-Barratt, and Ben Hoskin.\n\nOur 2016 fundraiser is underway! Unlike in past years, we'll only be running one fundraiser in 2016, from Sep. 16 to Oct. 31. 
Our progress so far (updated live):\n\n\n\n\nDonate Now\n\n\n \nEmployer matching and pledges to give later this year also count towards the total. Click here to learn more.\n\n \nMIRI is a nonprofit research group based in Berkeley, California. We do foundational research in mathematics and computer science that's aimed at ensuring that smarter-than-human AI systems have a positive impact on the world.\n2016 has been a big year for MIRI, and for the wider field of AI alignment research. Our 2016 strategic update in early August reviewed a number of recent developments:\n\nA group of researchers headed by Chris Olah of Google Brain and Dario Amodei of OpenAI published \"Concrete problems in AI safety,\" a new set of research directions that are likely to bear both on near-term and long-term safety issues.\nDylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell published a new value learning framework, \"Cooperative inverse reinforcement learning,\" with implications for corrigibility.\nLaurent Orseau of Google DeepMind and Stuart Armstrong of the Future of Humanity Institute received positive attention from news outlets and from Alphabet executive chairman Eric Schmidt for their new paper \"Safely interruptible agents,\" partly supported by MIRI.\nMIRI ran a three-week AI safety and robustness colloquium and workshop series, with speakers including Stuart Russell, Tom Dietterich, Francesca Rossi, and Bart Selman.\nWe received a generous $300,000 donation and expanded our research and ops teams.\nWe started work on a new research agenda, \"Alignment for advanced machine learning systems.\" This agenda will be occupying about half of our time going forward, with the other half focusing on our agent foundations agenda.\n\nWe also published new results in decision theory and logical uncertainty, including \"Parametric bounded Löb's theorem and robust cooperation of bounded agents\" and \"A formal solution to the grain of truth problem.\" For a survey of our research progress and other updates from last year, see our 2015 review.\nIn the last three weeks, there have been three more major developments:\n\nWe released a new paper, \"Logical induction,\" describing a method for learning to assign reasonable probabilities to mathematical conjectures and computational facts in a way that outpaces deduction.\nThe Open Philanthropy Project awarded MIRI a one-year $500,000 grant to scale up our research program, with a strong chance of renewal next year.\nThe Open Philanthropy Project is supporting the launch of the new UC Berkeley Center for Human-Compatible AI, headed by Stuart Russell.\n\nThings have been moving fast over the last nine months. If we can replicate last year's fundraising successes, we'll be in an excellent position to move forward on our plans to grow our team and scale our research activities.\n\nThe strategic landscape\nHumans are far better than other species at altering our environment to suit our preferences. This is primarily due not to our strength or speed, but to our intelligence, broadly construed — our ability to reason, plan, accumulate scientific knowledge, and invent new technologies. AI is a technology that appears likely to have a uniquely large impact on the world because it has the potential to automate these abilities, and to eventually decisively surpass humans on the relevant cognitive metrics.\nSeparate from the task of building intelligent computer systems is the task of ensuring that these systems are aligned with our values. 
Aligning an AI system requires surmounting a number of serious technical challenges, most of which have received relatively little scholarly attention to date. MIRI's role as a nonprofit in this space, from our perspective, is to help solve parts of the problem that are a poor fit for mainstream industry and academic groups.\nOur long-term plans are contingent on future developments in the field of AI. Because these developments are highly uncertain, we currently focus mostly on work that we expect to be useful in a wide variety of possible scenarios. The more optimistic scenarios we consider often look something like this:\n\nIn the short term, a research community coalesces, develops a good in-principle understanding of what the relevant problems are, and produces formal tools for tackling these problems. AI researchers move toward a minimal consensus about best practices, normalizing discussions of AI's long-term social impact, a risk-conscious security mindset, and work on error tolerance and value specification.\n\n\nIn the medium term, researchers build on these foundations and develop a more mature understanding. As we move toward a clearer sense of what smarter-than-human AI systems are likely to look like — something closer to a credible roadmap — we imagine the research community moving toward increased coordination and cooperation in order to discourage race dynamics.\n\n\nIn the long term, we would like to see AI-empowered projects (as described by Dewey [2015]) used to avert major AI mishaps. For this purpose, we'd want to solve a weak version of the alignment problem for limited AI systems — systems just capable enough to serve as useful levers for preventing AI accidents and misuse.\n\n\nIn the very long term, we can hope to solve the \"full\" alignment problem for highly capable, highly autonomous AI systems. Ideally, we want to reach a position where we can afford to wait until we reach scientific and institutional maturity — take our time to dot every i and cross every t before we risk \"locking in\" design choices.\n\nThe above is a vague sketch, and we prioritize research we think would be useful in less optimistic scenarios as well. Additionally, \"short term\" and \"long term\" here are relative, and different timeline forecasts can have very different policy implications. Still, the sketch may help clarify the directions we'd like to see the research community move in.\nFor more on our research focus and methodology, see our research page and MIRI's Approach.\nOur organizational plans\nWe currently employ seven technical research staff (six research fellows and one assistant research fellow), plus two researchers signed on to join in the coming months and an additional six research associates and research interns.1 Our budget this year is about $1.75M, up from $1.65M in 2015 and $950k in 2014.2\nOur eventual goal (subject to revision) is to grow until we have between 13 and 17 technical research staff, at which point our budget would likely be in the $3–4M range. If we reach that point successfully while maintaining a two-year runway, we're likely to shift out of growth mode.\nOur budget estimate for 2017 is roughly $2–2.2M, which means that we're entering this fundraiser with about 14 months' runway. 
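As a rough consistency check on that runway figure, here is a back-of-the-envelope sketch; the reserve level below is inferred from the stated budget estimate and runway, not taken from our books.

```python
# Back-of-the-envelope runway arithmetic using the figures above.
# The implied reserve level follows from "about 14 months' runway";
# it is not a reported bank balance.
budget_2017 = 2_100_000                # midpoint of the $2-2.2M estimate
monthly_burn = budget_2017 / 12        # ~$175k per month
implied_reserves = 14 * monthly_burn   # ~$2.45M
print(f"${monthly_burn:,.0f}/month, ~${implied_reserves:,.0f} in reserves")
```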
We're uncertain about how many donations we'll receive between November and next September,3 but projecting from current trends, we expect about 4/5ths of our total donations to come from the fundraiser and 1/5th to come in off-fundraiser.4 Based on this, we have the following fundraiser goals:

Basic target – $750,000. We feel good about our ability to execute our growth plans at this funding level. We'll be able to move forward comfortably, albeit with somewhat more caution than at the higher targets.

Growth target – $1,000,000. This would amount to about half a year's runway. At this level, we can afford to make more uncertain but high-expected-value bets in our growth plans. There's a risk that we'll dip below a year's runway in 2017 if we make more hires than expected, but the growing support of our donor base would make us feel comfortable about taking such risks.

Stretch target – $1,250,000. At this level, even if we exceed my growth expectations, we'd be able to grow without real risk of dipping below a year's runway. Past $1.25M we would not expect additional donations to affect our 2017 plans much, assuming moderate off-fundraiser support.5

If we hit our growth and stretch targets, we'll be able to execute several additional programs we're considering with more confidence. These include contracting a larger pool of researchers to do early work with us on logical induction and on our machine learning agenda, and generally spending more time on academic outreach, field-growing, and training or trialing potential collaborators and hires.
As always, you're invited to get in touch if you have questions about our upcoming plans and recent activities. I'm very much looking forward to seeing what new milestones the growing alignment research community will hit in the coming year, and I'm very grateful for the thoughtful engagement and support that's helped us get to this point.
 
Donate Now
 
1. This excludes Katja Grace, who heads the AI Impacts project using a separate pool of funds earmarked for strategy/forecasting research. It also excludes me: I contribute to our technical research, but my primary role is administrative.
2. We expect to be slightly under the $1.825M budget we previously projected for 2016, due to taking on fewer new researchers than expected this year.
3. We're imagining continuing to run one fundraiser per year in future years, possibly in September.
4. Separately, the Open Philanthropy Project is likely to renew our $500,000 grant next year, and we expect to receive the final ($80,000) installment from the Future of Life Institute's three-year grants. For comparison, our revenue was about $1.6 million in 2015: $167k in grants, $960k in fundraiser contributions, and $467k in off-fundraiser (non-grant) contributions. Our situation in 2015 was somewhat different, however: we ran two 2015 fundraisers, whereas we're skipping our winter fundraiser this year and advising December donors to pledge early or give off-fundraiser.
5. At significantly higher funding levels, we'd consider running other useful programs, such as a prize fund.
Shoot me an e-mail if you'd like to talk about the details.The post MIRI's 2016 Fundraiser appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "MIRI’s 2016 Fundraiser", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=26", "id": "627ce4acd9c54f0d545eced1766269f3"} {"text": "New paper: \"Logical induction\"\n\nMIRI is releasing a paper introducing a new model of deductively limited reasoning: \"Logical induction,\" authored by Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, myself, and Jessica Taylor. Readers may wish to start with the abridged version.\nConsider a setting where a reasoner is observing a deductive process (such as a community of mathematicians and computer programmers) and waiting for proofs of various logical claims (such as the abc conjecture, or \"this computer program has a bug in it\"), while making guesses about which claims will turn out to be true. Roughly speaking, our paper presents a computable (though inefficient) algorithm that outpaces deduction, assigning high subjective probabilities to provable conjectures and low probabilities to disprovable conjectures long before the proofs can be produced.\nThis algorithm has a large number of nice theoretical properties. Still speaking roughly, the algorithm learns to assign probabilities to sentences in ways that respect any logical or statistical pattern that can be described in polynomial time. Additionally, it learns to reason well about its own beliefs and trust its future beliefs while avoiding paradox. Quoting from the abstract:\nThese properties and many others all follow from a single logical induction criterion, which is motivated by a series of stock trading analogies. Roughly speaking, each logical sentence φ is associated with a stock that is worth $1 per share if φ is true and nothing otherwise, and we interpret the belief-state of a logically uncertain reasoner as a set of market prices, where ℙn(φ)=50% means that on day n, shares of φ may be bought or sold from the reasoner for 50¢. The logical induction criterion says (very roughly) that there should not be any polynomial-time computable trading strategy with finite risk tolerance that earns unbounded profits in that market over time.\n\nThis criterion is analogous to the \"no Dutch book\" criterion used to support other theories of ideal reasoning, such as Bayesian probability theory and expected utility theory. We believe that the logical induction criterion may serve a similar role for reasoners with deductive limitations, capturing some of what we mean by \"good reasoning\" in these cases.\nThe logical induction algorithm that we provide is theoretical rather than practical. It can be thought of as a counterpart to Ray Solomonoff's theory of inductive inference, which provided an uncomputable method for ideal management of empirical uncertainty but no corresponding method for reasoning under uncertainty about logical or mathematical sentences.1 Logical induction closes this gap.\nAny algorithm that satisfies the logical induction criterion will exhibit the following properties, among others:\n1. Limit convergence and limit coherence: The beliefs of a logical inductor are perfectly consistent in the limit. 
(Every provably true sentence eventually gets probability 1, every provably false sentence eventually gets probability 0, if φ provably implies ψ then the probability of φ converges to some value no higher than the probability of ψ, and so on.)\n2. Provability induction: Logical inductors learn to recognize any pattern in theorems (or contradictions) that can be identified in polynomial time.\n◦ Consider a sequence of conjectures generated by a brilliant mathematician, such as Ramanujan, that are difficult to prove but keep turning out to be true. A logical inductor will recognize this pattern and start assigning Ramanujan's conjectures high probabilities well before it has enough resources to verify them.\n◦ As another example, consider the sequence of claims \"on input n, this long-running computation outputs a natural number between 0 and 9.\" If those claims are all true, then (roughly speaking) a logical inductor learns to assign high probabilities to them as fast as they can be generated. If they're all false, a logical inductor learns to assign them low probabilities as fast as they can be generated. In this sense, it learns inductively to predict how computer programs will behave.\n◦ Similarly, given any polynomial-time method for writing down computer programs that halt, logical inductors learn to believe that they will halt roughly as fast as the source codes can be generated. Furthermore, given any polynomial-time method for writing down computer programs that provably fail to halt, logical inductors learn to believe that they will fail to halt roughly as fast as the source codes can be generated. When it comes to computer programs that fail to halt but for which there is no proof of this fact, logical inductors will learn not to anticipate that the program is going to halt anytime soon, even though they can't tell whether the program is going to halt in the long run. In this way, logical inductors give some formal backing to the intuition of many computer scientists that while the halting problem is undecidable in full generality, this rarely interferes with reasoning about computer programs in practice.2\n3. Affine coherence: Logical inductors learn to respect logical relationships between different sentences' truth-values, often long before the sentences can be proven. (E.g., they will learn for arbitrary programs that \"this program outputs 3\" and \"this program outputs 4\" are mutually exclusive, often long before they're able to evaluate the program in question.)\n4. Learning pseudorandom frequencies: When faced with a sufficiently pseudorandom sequence, logical inductors learn to use appropriate statistical summaries. For example, if the Ackermann(n,n)th digit in the decimal expansion of π is hard to predict for large n, a logical inductor will learn to assign ~10% subjective probability to the claim \"the Ackermann(n,n)th digit in the decimal expansion of π is a 7.\"\n5. Calibration and unbiasedness: On sequences that a logical inductor assigns ~30% probability to, if the average frequency of truth converges, then it converges to ~30%. In fact, on any subsequence where the average frequency of truth converges, there is no efficient method for finding a bias in the logical inductor's beliefs.\n6. Scientific induction: Logical inductors can be used to do sequence prediction, and when doing so, they dominate the universal semimeasure.\n7. 
Closure under conditioning: Conditional probabilities in this framework are well-defined, and conditionalized logical inductors are also logical inductors.3
8. Introspection: Logical inductors have accurate beliefs about their own beliefs, in a manner that avoids the standard paradoxes of self-reference.
◦ For instance, the probabilities on a sequence that says "I have probability less than 50% on the nth day" go extremely close to 50% and oscillate pseudorandomly, such that there is no polynomial-time method to tell whether the nth one is slightly above or slightly below 50%.
9. Self-trust: Logical inductors learn to trust their future beliefs more than their current beliefs. This gives some formal backing to the intuition that real-world probabilistic agents can often be reasonably confident in their future reasoning in practice, even though Gödel's incompleteness theorems place strong limits on reflective reasoning in full generality.4
The above claims are all quite vague; for the precise statements, refer to the paper.
Logical induction was developed by Scott Garrabrant in an effort to solve an open problem we spoke about six months ago. Roughly speaking, we had formalized two different desiderata for good reasoning under logical uncertainty: the ability to recognize patterns in what is provable (such as mutual exclusivity relationships between claims about computer programs), and the ability to recognize statistical patterns in sequences of logical claims (such as recognizing that the decimal digits of π seem pretty pseudorandom). Neither was too difficult to achieve in isolation, but we were surprised to learn that our simple algorithms for achieving one seemed quite incompatible with our simple algorithms for achieving the other. Logical inductors were born of Scott's attempts to achieve both simultaneously.5
I think there's a good chance that this framework will open up new avenues of study in questions of metamathematics, decision theory, game theory, and computational reflection that have long seemed intractable. I'm also cautiously optimistic that they'll improve our understanding of decision theory and counterfactual reasoning, and other problems related to AI value alignment.6
We've posted a talk online that helps provide more background for our work on logical induction:7
 

 
Edit: For a more recent talk on logical induction that goes into more of the technical details, see here.
"Logical induction" is a large piece of work, and there are undoubtedly still a number of bugs. We'd very much appreciate feedback: send typos, errors, and other comments to .8
 
While impractical, Solomonoff induction gave rise to a number of techniques (ensemble methods) that perform well in practice.
The differences between our algorithm and Solomonoff induction point in the direction of new ensemble methods that could prove useful for managing logical uncertainty, in the same way that modern ensemble methods are useful for managing empirical uncertainty.
See also Calude and Stay's (2006) "Most Programs Stop Quickly or Never Halt."
Thus, for example, one can make a logical inductor over Peano arithmetic by taking a logical inductor over an empty theory and conditioning it on the Peano axioms.
As an example, imagine that one asks a logical inductor, "What's your probability of φ, given that in the future you're going to think φ is likely?" Very roughly speaking, the inductor will answer, "In that case φ would be likely," even if it currently thinks that φ is quite unlikely. Moreover, logical inductors do this in a way that avoids paradox. If φ is "In the future I will think φ is less than 50% likely," and in the present you ask, "What's your probability of φ, given that in the future you're going to believe it is ≥50% likely?" then its answer will be "Very low." Yet if you ask "What's your probability of φ, given that in the future your probability will be extremely close to 50%?" then it will answer, "Extremely close to 50%."
Early work towards this result can be found at the Intelligent Agent Foundations Forum.
Consider the task of designing an AI system to learn the preferences of a human (e.g., cooperative inverse reinforcement learning). The usual approach would be to model the human as a Bayesian reasoner trying to maximize some reward function, but this severely limits our ability to model human irrationality and miscalculation even in simplified settings. Logical induction may help us address this problem by providing an idealized formal model of limited reasoners who don't know (but can eventually learn) the logical implications of all of their beliefs.
Suppose, for example, that a human agent makes an (unforced) losing chess move. An AI system programmed to learn the human's preferences from observed behavior probably shouldn't conclude that the human wanted to lose. Instead, our toy model of this dilemma should allow that the human may be resource-limited and may not be able to deduce the full implications of their moves; and our model should allow that the AI system is aware of this too, or can learn about it.
Slides from the relatively nontechnical portions; slides from the technical portion. For viewers who want to skip to the technical content, we've uploaded the talk's middle segment as a shorter stand-alone video: link.
The intelligence.org version will generally be more up-to-date than the arXiv version.
The post New paper: "Logical induction" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New paper: “Logical induction”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=26", "id": "faae9dd3cbea2c3346b2d887b0186549"} {"text": "Grant announcement from the Open Philanthropy Project

A major announcement today: the Open Philanthropy Project has granted MIRI $500,000 over the coming year to study the questions outlined in our agent foundations and machine learning research agendas, with a strong chance of renewal next year.
This represents MIRI's largest grant to date, and our second-largest single contribution.\nComing on the heels of a $300,000 donation by Blake Borgeson, this support will help us continue on the growth trajectory we outlined in our summer and winter fundraisers last year and effect another doubling of the research team. These growth plans assume continued support from other donors in line with our fundraising successes last year; we'll be discussing our remaining funding gap in more detail in our 2016 fundraiser, which we'll be kicking off later this month.\n\nThe Open Philanthropy Project is a joint initiative run by staff from the philanthropic foundation Good Ventures and the charity evaluator GiveWell. Open Phil has recently made it a priority to identify opportunities for researchers to address potential risks from advanced AI, and we consider their early work in this area promising: grants to Stuart Russell, Robin Hanson, and the Future of Life Institute, plus a stated interest in funding work related to \"Concrete Problems in AI Safety,\" a recent paper co-authored by four Open Phil technical advisers, Christopher Olah (Google Brain), Dario Amodei (OpenAI), Paul Christiano (UC Berkeley), and Jacob Steinhardt (Stanford), along with John Schulman (OpenAI) and Dan Mané (Google Brain).\nOpen Phil's grant isn't a full endorsement, and they note a number of reservations about our work in an extensive writeup detailing the thinking that went into the grant decision. Separately, Open Phil Executive Director Holden Karnofsky has written some personal thoughts about how his views of MIRI and the effective altruism community have evolved in recent years.\n\nOpen Phil's decision was informed in part by their technical advisers' evaluations of our recent work on logical uncertainty and Vingean reflection, together with reviews by seven anonymous computer science professors and one anonymous graduate student. The reviews, most of which are collected here, are generally negative: reviewers felt that \"Inductive coherence\" and \"Asymptotic convergence in online learning with unbounded delays\" were not important results and that these research directions were unlikely to be productive, and Open Phil's advisers were skeptical or uncertain about the work's relevance to aligning AI systems with human values.\nIt's worth mentioning in that context that the results in \"Inductive coherence\" and \"Asymptotic convergence…\" led directly to a more significant unpublished result, logical induction, that we've recently discussed with Open Phil and members of the effective altruism community. The result is being written up, and we plan to put up a preprint soon. In light of this progress, we are more confident than the reviewers that Garrabrant et al.'s earlier papers represented important steps in the right direction. If this wasn't apparent to reviewers, then it could suggest that our exposition is weak, or that the importance of our results was inherently difficult to assess from the papers alone.\nIn general, I think the reviewers' criticisms are reasonable — either I agree with them, or I think it would take a longer conversation to resolve the disagreement. The level of detail and sophistication of the comments is also quite valuable.\nThe content of the reviews was mostly in line with my advance predictions, though my predictions were low-confidence. 
I've written up quick responses to some of the reviewers' comments, with my predictions and some observations from Eliezer Yudkowsky included in appendices. This is likely to be the beginning of a longer discussion of our research priorities and progress, as we have yet to write up our views on a lot of these issues in any detail.\nWe're very grateful for Open Phil's support, and also for the (significant) time they and their advisers spent assessing our work. This grant follows a number of challenging and deep conversations with researchers at GiveWell and Open Phil about our organizational strategy over the years, which have helped us refine our views and arguments.\nPast public exchanges between MIRI and GiveWell / Open Phil staff include:\n\nMay/June/July 2012 – Holden Karnofsky's critique of MIRI (then SI), Eliezer Yudkowsky's reply, and Luke Muehlhauser's reply.\nOctober 2013 – Holden, Eliezer, Luke, Jacob Steinhardt, and Dario Amodei's discussion of MIRI's strategy.\nJanuary 2014 – Holden, Eliezer, and Luke's discussion of existential risk.\nFebruary 2014 – Holden, Eliezer, and Luke's discussion of future-oriented philanthropy.\n\nSee also Open Phil's posts on transformative AI and AI risk as a philanthropic opportunity, and their earlier AI risk cause report.\nThe post Grant announcement from the Open Philanthropy Project appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Grant announcement from the Open Philanthropy Project", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=26", "id": "40f44e1009f802afbec64d292809768f"} {"text": "September 2016 Newsletter\n\n\n\n\n\n\n\nResearch updates\n\nNew at IAFF: Modeling the Capabilities of Advanced AI Systems as Episodic Reinforcement Learning; Simplified Explanation of Stratification\nNew at AI Impacts: Friendly AI as a Global Public Good\nWe ran two research workshops this month: a veterans' workshop on decision theory for long-time collaborators and staff, and a machine learning workshop focusing on generalizable environmental goals, impact measures, and mild optimization.\nAI researcher Abram Demski has accepted a research fellowship at MIRI, pending the completion of his PhD. He'll be starting here in late 2016 / early 2017.\nData scientist Ryan Carey is joining MIRI's ML-oriented team this month as an assistant research fellow.\n\nGeneral updates\n\nMIRI's 2016 strategy update outlines how our research plans have changed in light of recent developments. We also announce a generous $300,000 gift — our second-largest single donation to date.\nWe've uploaded nine talks from CSRBAI's robustness and preference specification weeks, including Jessica Taylor on \"Alignment for Advanced Machine Learning Systems\" (video), Jan Leike on \"General Reinforcement Learning\" (video), Paul Christiano on \"Training an Aligned RL Agent\" (video), and Dylan Hadfield-Menell on \"The Off-Switch\" (video).\nMIRI COO Malo Bourgon has been co-chairing a committee of IEEE's Global Initiative for Ethical Considerations in the Design of Autonomous Systems. 
He recently moderated a workshop on general AI and superintelligence at the initiative's first meeting.\nWe had a great time at Effective Altruism Global, and taught at SPARC.\nWe hired two new admins: Office Manager Aaron Silverbook, and Communications and Development Strategist Colm Ó Riain.\n\n\nNews and links\n\nThe Open Philanthropy Project awards $5.6 million to Stuart Russell to launch an academic AI safety research institute: the Center for Human-Compatible AI.\n\"Who Should Control Our Thinking Machines?\": Jack Clark interviews DeepMind's Demis Hassabis.\nElon Musk explains: \"I think the biggest risk is not that the AI will develop a will of its own, but rather that it will follow the will of people that establish its utility function, or its optimization function. And that optimization function, if it is not well-thought-out — even if its intent is benign, it could have quite a bad outcome.\"\nModeling Intelligence as a Project-Specific Factor of Production: Ben Hoffman compares different AI takeoff scenarios.\nClopen AI: Viktoriya Krakovna weighs the advantages of closed vs. open AI.\nGoogle X director Astro Teller expresses optimism about the future of AI in a Medium post announcing the first report of the Stanford AI100 study.\nBuzzfeed reports on efforts to prevent the development of lethal autonomous weapons systems.\nIn controlled settings, researchers find ways to detect keystrokes via distortions in WiFi signals and jump air-gaps using hard drive actuator noises. \nSolid discussions on the EA Forum: Should Donors Make Commitments About Future Donations? and Should You Switch Away From Earning to Give?\n\n\n\n\n\n \nThe post September 2016 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "September 2016 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=26", "id": "268848868eb1281060be9b3e9ae41053"} {"text": "CSRBAI talks on preference specification\n\nWe've uploaded a third set of videos from our recent Colloquium Series on Robust and Beneficial AI (CSRBAI), co-hosted with the Future of Humanity Institute. These talks were part of the week focused on preference specification in AI systems, including the difficulty of specifying safe and useful goals, or specifying safe and useful methods for learning human preferences. All released videos are available on the CSRBAI web page.\n \n\n \nTom Everitt, a PhD student at the Australian National University, spoke about his paper \"Avoiding wireheading with value reinforcement learning,\" written with Marcus Hutter (slides). Abstract:\nHow can we design good goals for arbitrarily intelligent agents? Reinforcement learning (RL) may seem like a natural approach. Unfortunately, RL does not work well for generally intelligent agents, as RL agents are incentivised to shortcut the reward sensor for maximum reward — the so-called wireheading problem.\nIn this paper we suggest an alternative to RL called value reinforcement learning (VRL). In VRL, agents use the reward signal to learn a utility function. The VRL setup allows us to remove the incentive to wirehead by placing a constraint on the agent's actions. The constraint is defined in terms of the agent's belief distributions, and does not require an explicit specification of which actions constitute wireheading. 
Our VRL agent offers the ease of control of RL agents and avoids the incentive for wireheading.\n\n \n\n \nDylan Hadfield-Menell, a PhD Student at UC Berkeley, spoke about designing corrigible, yet functional, artificial agents (slides) in a follow-up to the paper \"Cooperative inverse reinforcement learning.\" Abstract for the talk:\nAn artificial agent is corrigible if it accepts or assists in outside correction for its objectives. At a minimum, a corrigible agent should allow its programmers to turn it off. An artificial agent is functional if it is capable of performing non-trivial tasks. For example, a machine that immediately turns itself off is useless (except perhaps as a novelty item).\nIn a standard reinforcement learning agent, incentives for these behaviors are essentially at odds. The agent will either want to be turned off, want to stay alive, or be indifferent between the two. Of these, indifference is the only safe and useful option but there is reason to believe that this is a strong condition on the agent's incentives. In this talk, I will propose a design for a corrigible, yet functional, agent as the solution to a two-player cooperative game where the robot's goal is to maximize the humans sum of rewards.\nWe do an equilibrium analysis of the solutions to the game and identify three key properties. First, we show that if the human acts rationally, then the robot will be corrigible. Second, we show that if the robot has no uncertainty about human preferences, then the robot will be incorrigible or non-function if the human is even slightly suboptimal. Finally, we analyze the Gaussian setting and characterize the necessary and sufficient conditions, as a function of the robot's belief about human preferences and the degree of human irrationality, to ensure that the robot will be corrigible and functional.\n \n\n \nJan Leike, a recent addition at the Future of Humanity Institute, spoke about general reinforcement learning (slides). Abstract:\nGeneral reinforcement learning (GRL) is the theory of agents acting in unknown environments that are non-Markov, non-ergodic, and only partially observable. GRL can serve as a model for strong AI and has been used extensively to investigate questions related to AI safety. Our focus is not on practical algorithms, but rather on the fundamental underlying problems: How do we balance exploration and exploitation? How do we explore optimally? When is an agent optimal? We outline current shortcomings of the model and point to future research directions.\n \n\n \nBas Steunebrink spoke about experience-based AI and understanding, meaning, and values (slides). Excerpt:\nWe will discuss ongoing research into value learning: how an agent can gradually learn to understand the world it's in, learn to understand what we mean for it to do, learn to understand as well as be compelled to adhere to proper values, and learn to do so robustly in the face of inaccurate, inconsistent, and incomplete information as well as underspecified, conflicting, and updatable goals. To fulfill this ambitious vision we have a long road of gradual teaching and testing ahead of us.\n \nFor a recap of the week 2 videos on robustness and error-tolerance, see my previous blog post. 
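Returning to the off-switch setting in Hadfield-Menell's talk above: a toy calculation (with made-up numbers, not taken from the talk or paper) shows why uncertainty about the human's preferences is what makes deferring to the off switch worthwhile for the robot.

```python
# Toy numbers for the off-switch intuition (illustrative values only).
# The robot is unsure whether its proposed action helps (+1) or harms (-1) the human.
p_helps = 0.6
value_if_helps, value_if_harms = 1.0, -1.0

# Act unilaterally: the robot gets the action's expected value either way.
act_directly = p_helps * value_if_helps + (1 - p_helps) * value_if_harms

# Defer: a rational human lets the action through only when it helps,
# and hits the off switch otherwise (worth 0 rather than -1).
defer_to_human = p_helps * value_if_helps + (1 - p_helps) * 0.0

# With no uncertainty (p_helps = 1.0) the two options tie, and the robot
# gains nothing from keeping the off switch available.
print(round(act_directly, 2), round(defer_to_human, 2))  # 0.2 0.6
```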
For a summary of how the event as a whole went, and videos of the opening talks by Stuart Russell, Alan Fern, and Francesca Rossi, see my first blog post.\nThe post CSRBAI talks on preference specification appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "CSRBAI talks on preference specification", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=26", "id": "1500db48ed64a41b9cb09203c148bee6"} {"text": "CSRBAI talks on robustness and error-tolerance\n\nWe've uploaded a second set of videos from our recent Colloquium Series on Robust and Beneficial AI (CSRBAI) at the MIRI office, co-hosted with the Future of Humanity Institute. These talks were part of the week focused on robustness and error-tolerance in AI systems, and how to ensure that when AI system fail, they fail gracefully and detectably. All released videos are available on the CSRBAI web page.\n \n\n \nBart Selman, professor of computer science at Cornell University, spoke about machine reasoning and planning (slides). Excerpt:\nI'd like to look at what I call \"non-human intelligence.\" It does get less attention, but the advances also have been very interesting, and they're in reasoning and planning. It's actually partly not getting as much attention in the AI world because it's more used in software verification, program synthesis, and automating science and mathematical discoveries – other areas related to AI but not a central part of AI that are using these reasoning technologies. Especially the software verification world – Microsoft, Intel, IBM – push these reasoning programs very hard, and that's why there's so much progress, and I think it will start feeding back into AI in the near future.\n\n \n\n \nJessica Taylor presented on MIRI's recently released second technical agenda, \"Alignment for Advanced Machine Learning Systems\". Abstract:\nIf artificial general intelligence is developed using algorithms qualitatively similar to those of modern machine learning, how might we target the resulting system to safely accomplish useful goals in the world? I present a technical agenda for a new MIRI project focused on this question.\n \n\n \nStefano Ermon, assistant professor of computer science at Stanford, gave a talk on probabilistic inference and accuracy guarantees (slides). Abstract:\nStatistical inference in high-dimensional probabilistic models is one of the central problems in AI. To date, only a handful of distinct methods have been developed, most notably (MCMC) sampling and variational methods. While often effective in practice, these techniques do not typically provide guarantees on the accuracy of the results. In this talk, I will present alternative approaches based on ideas from the theoretical computer science community. These approaches can leverage recent advances in combinatorial optimization and provide provable guarantees on the accuracy.\n \n\n \nPaul Christiano, PhD student at UC Berkeley, gave a talk about training aligned reinforcement learning agents. Excerpt:\nThat's the goal of the reinforcement learning problem. We as the designers of an AI system have some other goal in mind, which maybe we don't have a simple formalization of. 
I'm just going to say, \"We want the agent to do the right thing.\" We don't really care about what reward the agent sees; we just care that it's doing the right thing.\nSo, intuitively, we can imagine that there's some unobserved utility function U which acts on a transcript and just evaluates the consequences of the agent behaving in that way. So it has to average over all the places in the universe this transcript might occur, and it says, \"What would I want the agent to do, on average, when it encounters this transcript?\"\n \n\n \nJim Babcock discussed the AGI containment problem (slides). Abstract:\nEnsuring that powerful AGIs are safe will involve testing and experimenting on them, but a misbehaving AGI might try to tamper with its test environment to gain access to the internet or modify the results of tests. I will discuss the challenges of securing environments to test AGIs in.\nFor a summary of how the event as a whole went, and videos of the opening talks by Stuart Russell, Alan Fern, and Francesca Rossi, see my last blog post.\nThe post CSRBAI talks on robustness and error-tolerance appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "CSRBAI talks on robustness and error-tolerance", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=26", "id": "dccc0085559cf7bc6bdd3eb5fd460cf1"} {"text": "MIRI strategy update: 2016\n\nThis post is a follow-up to Malo's 2015 review, sketching out our new 2016-2017 plans. Briefly, our top priorities (in decreasing order of importance) are to (1) make technical progress on the research problems we've identified, (2) expand our team, and (3) build stronger ties to the wider research community.\nAs discussed in a previous blog post, the biggest update to our research plans is that we'll be splitting our time going forward between our 2014 research agenda (the \"agent foundations\" agenda) and a new research agenda oriented toward machine learning work led by Jessica Taylor: \"Alignment for Advanced Machine Learning Systems.\"\nThree additional news items:\n1. I'm happy to announce that MIRI has received support from a major new donor: entrepreneur and computational biologist Blake Borgeson, who has made a $300,000 donation to MIRI. This is the second-largest donation MIRI has received in its history, beaten only by Jed McCaleb's 2013 cryptocurrency donation. As a result, we've been able to execute on our growth plans with more speed, confidence, and flexibility.\n2. This year, instead of running separate summer and winter fundraisers, we're merging them into one more ambitious fundraiser, which will take place in September.\n3. I'm also pleased to announce that Abram Demski has accepted a position as a MIRI research fellow. Additionally, Ryan Carey has accepted a position as an assistant research fellow, and we've hired some new administrative staff.\nI'll provide more details on these and other new developments below.\n\nPriority 1: Make progress on open technical problems\nSince 2013, MIRI's primary goal has been to make technical progress on AI alignment. Nearly all of our other activities are either directly or indirectly aimed at producing more high-quality alignment research, either at MIRI or at other institutions.\nAs mentioned above, Jessica Taylor is now leading an \"Alignment for Advanced Machine Learning Systems\" program, which will occupy about half of our research efforts going forward. 
Our goal with this work will be to develop formal models and theoretical tools that we predict would aid in the alignment of highly capable AI systems, under the assumption that such systems will be qualitatively similar to present-day machine learning systems. Our research communications manager, Rob Bensinger, has summarized themes in our new work and its relationship to other AI safety research proposals.\nEarlier in the year, I jotted down a summary of how much technical progress I thought we'd made on our research agenda in 2015 (noted by Malo in our 2015 review), relative to my expectations. In short, I expected modest progress in all of our research areas except value specification (which was low-priority for us in 2015). We made progress more quickly than expected on some problems, and more slowly than expected on others.\nIn naturalized induction and logical uncertainty, we exceeded my expectations, making sizable progress. In error tolerance, we undershot my expectations and made only limited progress. In our other research areas, we made about as much progress as I expected: modest progress in decision theory and Vingean reflection, and limited progress in value specification.\nI also made personal predictions earlier in the year about how much progress we'd make through the end of 2016: modest progress in decision theory, error tolerance, and value specification; limited progress in Vingean reflection; and sizable progress in logical uncertainty and naturalized induction. (Starting in 2017, I'll be making my predictions publicly early in the year.)\nBreaking these down:\n\nVingean reflection is a lower priority for us this year. This is in part because we're less confident that there's additional low-hanging fruit to be plucked here, absent additional progress in logical uncertainty or decision theory. Although we've been learning a lot about implementation snags through Benya Fallenstein, Ramana Kumar, and Jack Gallagher's ongoing HOL-in-HOL project, we haven't seen any major theoretical breakthroughs in this area since Benya developed model polymorphism in late 2012. Benya and Kaya Fallenstein are still studying this topic occasionally.\n\n\nIn contrast, we've continued to make steady gains in the basic theory of logical uncertainty, naturalized induction, and decision theory over the years. Benya, Kaya, Abram, Scott Garrabrant, Vanessa Kosoy, and Tsvi Benson-Tilsen will be focusing on these areas over the coming months, and I expect to see advances in 2016 of similar importance to what we saw in 2015.\n\n\nOur machine learning agenda is primarily focused on error tolerance and value specification, making these much higher priorities for us this year. I expect to see modest progress from Jessica Taylor, Patrick LaVictoire, Andrew Critch, Stuart Armstrong, and Ryan Carey's work on these problems. It's harder to say whether there will be any big breakthroughs here, given how new the program is.\n\nEliezer Yudkowsky and I will be splitting our time between working on these problems and doing expository writing. Eliezer is writing about alignment theory, while I'll be writing about MIRI strategy and forecasting questions.\nWe spent large portions of the first half of 2016 writing up existing results and research proposals and coordinating with other researchers (such as through our visit to FHI and our Colloquium Series on Robust and Beneficial AI), and we have a bit more writing ahead of us in the coming weeks. 
We managed to get a fair bit of research done — we'll be announcing a sizable new logical uncertainty result once the aforementioned writing is finished — but we're looking forward to a few months of uninterrupted research time at the end of the year, and I'm excited to see what comes of it.\n\nPriority 2: Expand our team\nGrowing MIRI's research team is a high priority. We're also expanding our admin team, with a goal of freeing up more of my time and better positioning MIRI to positively influence the booming AI risk conversation.\nAfter making significant contributions to our research over the past year as a research associate (e.g., \"Inductive Coherence\" and Structural Risk Mitigation) and participating in our CSRBAI and MIRI Summer Fellows programs, Abram Demski has signed on to join our core research team. Abram is planning to join in late 2016 or early 2017, after completing his computer science PhD at the University of Southern California. Mihály Bárász is also slated to join our core research team at a future date, and we are considering several other promising candidates for research fellowships.\nIn the nearer term, data scientist Ryan Carey has been collaborating with us on our machine learning agenda and will be joining us as an assistant research fellow in September.\nWe've also recently hired a new office manager, Aaron Silverbook, and a communications and development admin, Colm Ó Riain.\nWe have an open type theorist job ad, and are more generally seeking research fellows with strong mathematical intuitions and a talent for formalizing and solving difficult problems, or for fleshing out and writing up results for publication.\nWe're also seeking communications and outreach specialists (e.g., computer programmers with very strong writing skills) to help us keep pace with the lively public and academic AI risk conversation. If you're interested, send a résumé and nonfiction writing samples to Rob.\nPriority 3: Collaborate and communicate with other researchers\nThere have been a number of new signs in 2016 that AI alignment is going (relatively) mainstream:\n\nStuart Russell and his students' recent work on value learning and corrigibility (including a joint grant project with MIRI);\npositive reactions from Eric Schmidt and the press to a Google DeepMind / Future of Humanity Institute collaboration on corrigibility (partly supported by MIRI);\nthe new \"Concrete Problems in AI Safety\" research proposal announced by Google Research and OpenAI (along with the Open Philanthropy Project's declaration of interest in funding such research);\nand other, smaller developments.\n\nMIRI's goal is to ensure that the AI alignment problem gets solved, whether it's MIRI solving it or some other group. As such, we're excited by the new influx of attention directed at the alignment problem, and view this as an important time to nurture the field.\nAs AI safety research goes more mainstream, the pool of researchers we can dialogue with is becoming larger. At the same time, our own approach to the problem — specifically focused on the most long-term, high-stakes, and poorly-understood parts of the problem, and the parts that are least concordant with academic and industry incentives — remains unusual. Absent MIRI, I think that this part of the conversation would be almost entirely neglected.\nTo help promote our approach and grow the field, we intend to host more workshops aimed at diverse academic audiences. 
We'll be hosting a machine learning workshop in the near future, and might run more events like CSRBAI going forward. We also have a backlog of past technical results to write up, which we expect to be valuable for engaging more researchers in computer science, economics, mathematical logic, decision theory, and other areas.\nWe're especially interested in finding ways to hit priorities 1 and 3 simultaneously, pursuing important research directions that also help us build stronger ties to the wider academic world. One of several reasons for our new research agenda is its potential to encourage more alignment work by the ML community.\n\nShort version: in the medium term, our research program will have a larger focus on error-tolerance and value specification research, with more emphasis on ML-inspired AI approaches, and we're increasing the size of our research team in pursuit of that goal.\nRob, Malo, and I will be saying more about our funding situation and organizational strategy in September, when we kick off our 2016 fundraising drive. As part of that series of posts, I'll also be writing more about how our current strategy fits into our long-term goals and priorities.\nFinally, if you're attending Effective Altruism Global this weekend, note that we'll be running two workshops (one on Jessica's new project, another on the aforementioned new logical uncertainty results), as well as some office hours (both with the research team and with the admin team). If you're there, feel free to drop by, say hello, and ask more about what we've been up to.\nThe post MIRI strategy update: 2016 appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "MIRI strategy update: 2016", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=27", "id": "42fdbc0c0c6d97ac6cc64775e79250ca"} {"text": "August 2016 Newsletter\n\n\n\n\n\n\n\n\nResearch updates\n\nA new paper: \"Alignment for Advanced Machine Learning Systems.\" Half of our research team will be focusing on this research agenda going forward, while the other half continues to focus on the agent foundations agenda.\nNew at AI Impacts: Returns to Scale in Research\nEvan Lloyd represented MIRIxLosAngeles at AGI-16 this month, presenting \"Asymptotic Logical Uncertainty and the Benford Test\" (slides).\nWe'll be announcing a breakthrough in logical uncertainty this month, related to Scott Garrabrant's previous results.\n\nGeneral updates\n\nOur 2015 in review, with a focus on the technical problems we made progress on.\nAnother recap: how our summer colloquium series and fellows program went.\nWe've uploaded our first CSRBAI talks: Stuart Russell on \"AI: The Story So Far\" (video), Alan Fern on \"Toward Recognizing and Explaining Uncertainty\" (video), and Francesca Rossi on \"Moral Preferences\" (video).\nWe submitted our recommendations to the White House Office of Science and Technology Policy, cross-posted to our blog.\nWe attended IJCAI and the White House's AI and economics event. Furman on technological unemployment (video) and other talks are available online.\nTalks from June's safety and control in AI event are also online. 
Speakers included Microsoft's Eric Horvitz (video), FLI's Richard Mallah (video), Google Brain's Dario Amodei (video), and IARPA's Jason Matheny (video).\n\n\nNews and links\n\nComplexity No Bar to AI: Gwern Branwen argues that computational complexity theory provides little reason to doubt that AI can surpass human intelligence.\nBill Nordhaus, the world's leading climate change economist, writes a paper on the economics of singularity scenarios.\nThe Open Philanthropy Project has awarded Robin Hanson a three-year $265,000 grant to study multipolar AI scenarios. See also Hanson's new argument for expecting a long era of whole-brain emulations prior to the development of AI with superhuman reasoning abilities.\n\"Superintelligence Cannot Be Contained\" discusses computability-theoretic limits to AI verification.\nThe Financial Times runs a good profile of Nick Bostrom.\nDeepMind software reduces Google's data center cooling bill by 40%.\nIn a promising development, US federal regulators argue for the swift development and deployment of self-driving cars to reduce automobile accidents: \"We cannot wait for perfect. We lose too many lives waiting for perfect.\"\n\n\n\n\n\n\n\nThe post August 2016 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "August 2016 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=27", "id": "d627f8dcbe9d170ddc1441a4ac9015a5"} {"text": "2016 summer program recap\n\nAs previously announced, we recently ran a 22-day Colloquium Series on Robust and Beneficial AI (CSRBAI) at the MIRI office, co-hosted with the Oxford Future of Humanity Institute. The colloquium was aimed at bringing together safety-conscious AI scientists from academia and industry to share their recent work. The event served that purpose well, initiating some new collaborations and a number of new conversations between researchers who hadn't interacted before or had only talked remotely.\nOver 50 people attended from 25 different institutions, with an average of 15 people present on any given talk or workshop day. In all, there were 17 talks and four weekend workshops on the topics of transparency, robustness and error-tolerance, preference specification, and agent models and multi-agent dilemmas. The full schedule and talk slides are available on the event page. Videos from the first day of the event are now available, and we'll be posting the rest of the talks online soon:\n \n\n \nStuart Russell, professor of computer science at UC Berkeley and co-author of Artificial Intelligence: A Modern Approach, gave the opening keynote. Russell spoke on \"AI: The Story So Far\" (slides). Abstract:\nI will discuss the need for a fundamental reorientation of the field of AI towards provably beneficial systems. This need has been disputed by some, and I will consider their arguments. I will also discuss the technical challenges involved and some promising initial results.\nRussell discusses his recent work on cooperative inverse reinforcement learning 36 minutes in. This paper and Dylan Hadfield-Menell's related talk on corrigibility (slides) inspired lots of interest and discussion at CSRBAI.\n\n \n\n \nAlan Fern, associate professor of computer science at Oregon State University, discussed his work with AAAI president and OSU distinguished professor of computer science Tom Dietterich in \"Toward Recognizing and Explaining Uncertainty\" (slides 1, slides 2). 
Fern and Dietterich's work is described in a Future of Life Institute grant proposal:\nThe development of AI technology has progressed from working with \"known knowns\"—AI planning and problem solving in deterministic, closed worlds—to working with \"known unknowns\"—planning and learning in uncertain environments based on probabilistic models of those environments. A critical challenge for future AI systems is to behave safely and conservatively in open worlds, where most aspects of the environment are not modeled by the AI agent—the \"unknown unknowns\".\nOur team, with deep experience in machine learning, probabilistic modeling, and planning, will develop principles, evaluation methodologies, and algorithms for learning and acting safely in the presence of the unknown unknowns. For supervised learning, we will develop UU-conformal prediction algorithms that extend conformal prediction to incorporate nonconformity scores based on robust anomaly detection algorithms. This will enable supervised learners to behave safely in the presence of novel classes and arbitrary changes in the input distribution. For reinforcement learning, we will develop UU-sensitive algorithms that act to minimize risk due to unknown unknowns. A key principle is that AI systems must broaden the set of variables that they consider to include as many variables as possible in order to detect anomalous data points and unknown side-effects of actions.\n \n\n \nFrancesca Rossi, professor of computer science at Padova University in Italy, research scientist at IBM, and president of IJCAI, spoke on \"Moral Preferences\" (slides). Abstract:\nIntelligent systems are going to be more and more pervasive in our everyday lives. They will take care of elderly people and kids, they will drive for us, and they will suggest doctors how to cure a disease. However, we cannot let them do all this very useful and beneficial tasks if we don't trust them. To build trust, we need to be sure that they act in a morally acceptable way. So it is important to understand how to embed moral values into intelligent machines.\nExisting preference modeling and reasoning framework can be a starting point, since they define priorities over actions, just like an ethical theory does. However, many more issues are involved when we mix preferences (that are at the core of decision making) and morality, both at the individual level and in a social context. I will discuss some of these issues as well as some possible solutions.\nOther speakers at the event included Tom Dietterich (OSU), Bart Selman (Cornell), Paul Christiano (UC Berkeley), and MIRI researchers Jessica Taylor and Andrew Critch.\n\nThe preference specification workshop attracted the most excitement and activity at CSRBAI. 
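To make the "UU-conformal prediction" idea from Fern and Dietterich's proposal quoted above a bit more concrete, here is a minimal sketch of split conformal prediction with an anomaly-style (nearest-neighbor) nonconformity score. It is not their algorithm: the Gaussian calibration data, the 10-neighbor score, and the 0.05 deferral threshold are all invented for the illustration.

```python
# Minimal sketch (not Fern and Dietterich's method): score how unusual an input
# is relative to calibration data, convert that to a conformal p-value, and
# defer to a human operator when the p-value is low.
import numpy as np

rng = np.random.default_rng(0)
calibration = rng.normal(0.0, 1.0, size=(500, 2))  # stand-in for familiar data

def nonconformity(x, reference):
    """Mean distance to the 10 nearest reference points (higher = more unusual)."""
    dists = np.sort(np.linalg.norm(reference - x, axis=1))
    return dists[:10].mean()

# Leave-one-out nonconformity scores for the calibration set.
cal_scores = np.array([
    nonconformity(x, np.delete(calibration, i, axis=0))
    for i, x in enumerate(calibration)
])

def p_value(x):
    """Fraction of calibration points at least as unusual as x."""
    score = nonconformity(x, calibration)
    return (np.sum(cal_scores >= score) + 1) / (len(cal_scores) + 1)

for point in (np.array([0.5, -0.3]), np.array([6.0, 6.0])):
    p = p_value(point)
    print(point, round(float(p), 3),
          "defer to a human" if p < 0.05 else "handle normally")
```

The in-distribution point gets a moderate p-value and is handled normally, while the far-away point (a stand-in for an "unknown unknown") gets a tiny p-value and is flagged for human review.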
Other activities and discussion topics at CSRBAI included:\n\nDiscussions about potential applications of complexity theory to transparency: using interactive polynomial-time proof protocols or probabilistically checkable proofs to communicate complicated beliefs and reasons from powerful AI systems to humans.\nSome progress clarifying different methods of training explanation systems for informed oversight.\nInvestigations into the theory of cooperative inverse reinforcement learning and other unobserved-reward games, led by Jan Leike and Tom Everitt of Australian National University.\nDiscussions about the hazards associated with reinforcement learning agents that manipulate the source of their reward function (which is the human or a learned representation of the human).\nInteresting discussions about corrigibility viewed as a value-of-information problem.\nDevelopment of AI safety environments by Rafael Cosman and other attendees for the OpenAI Reinforcement Learning Gym, illustrating topics like interruptibility and semi-supervised learning. Ideas and conversation from Chris Olah, Dario Amodei, Paul Christiano, and Jessica Taylor helped seed these gyms, and CSRBAI participants who helped develop them included Owain Evans, Sune Jakobsen, Stuart Armstrong, Tom Everitt, Rafael Cosman, and David Krueger.\nDiscussions of ideas for an OpenAI Gym environment asking for low-impact agents, using an adversarial distinguisher.\nDiscussions of Jessica Taylor's memoryless Cartesian environments aimed at extending the idea to non-Cartesian worlds / logical counterfactuals using reference-class decision-making. Discussions of using \"logically past\" experience to learn about counterfactuals and do exploration without having a high chance of exploring in the real world.\nNew insights into the problem of logical counterfactuals, with new associated formalisms. Applications of MIRI's recent logical uncertainty advances to decision theory.\nA lot of advance discussion of MIRI's \"Alignment for Advanced Machine Learning Systems\" technical agenda.\n\nThe colloquium series ran quite smoothly, and we received positive feedback from attendees, though some noted that the event would likely have benefited from more structure. When we run events like this in the future, our main adjustment will be to compress the schedule and run more focused events similar to our past workshops.\n\nWe also co-ran a 16-day MIRI Summer Fellows program with the Center for Applied Rationality in June. The program's 14 attendees came from a variety of technical backgrounds and ranged from startup founders to undergraduates to assistant professors.\nOur MIRISF programs have proven useful in the past for identifying future MIRI hires (one full-time and two part-time MIRI researchers from the 2015 MIRISF program). 
The primary focus, however, is on developing new problem-solving skills and mathematical intuitions for CS researchers and providing an immersive crash course on MIRI's active research projects.\nThe program had four distinct phases: a four-day CFAR retreat (followed by a rest day), a two-day course in MIRI's research agenda, three days of working together on research topics (similar to a MIRI research workshop, and followed by another off day), and three days of miscellaneous activities: Tetlock-style forecasting practice, one-on-ones with MIRI researchers, security mindset discussions, planning ahead for future research and collaborations, etc.\nTo receive notifications from us about future programs like MIRISF, use this form. To get in touch with us about collaborating at future MIRI workshops like the ones at CSRBAI, send us your info via our general application form.\nThe post 2016 summer program recap appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "2016 summer program recap", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=27", "id": "95e3467aefa7a042e98fd4a095e183f7"} {"text": "2015 in review\n\nAs Luke had done in years past (see 2013 in review and 2014 in review), I (Malo) wanted to take some time to review our activities from last year. In the coming weeks Nate will provide a big-picture strategy update. Here, I'll take a look back at 2015, focusing on our research progress, academic and general outreach, fundraising, and other activities.\nAfter seeing signs in 2014 that interest in AI safety issues was on the rise, we made plans to grow our research team. Fueled by the response to Bostrom's Superintelligence and the Future of Life Institute's \"Future of AI\" conference, interest continued to grow in 2015. This suggested that we could afford to accelerate our plans, but it wasn't clear how quickly.\nIn 2015 we did not release a mid-year strategic plan, as Luke did in 2014. Instead, we laid out various conditional strategies dependent on how much funding we raised during our 2015 Summer Fundraiser. The response was great; we had our most successful fundraiser to date. We hit our first two funding targets (and then some), and set out on an accelerated 2015/2016 growth plan.\nAs a result, 2015 was a big year for MIRI. After publishing our technical agenda at the start of the year, we made progress on many of the open problems it outlined, doubled the size of our core research team, strengthened our connections with industry groups and academics, and raised enough funds to maintain our growth trajectory. 
We're very grateful to all our supporters, without whom this progress wouldn't have been possible.\n\n2015 Research Progress\nOur \"Agent Foundations for Aligning Machine Intelligence with Human Interests\" research agenda divides open problems into three categories: high reliability (which includes logical uncertainty, naturalized induction, decision theory, and Vingean reflection), error tolerance, and value specification.1 MIRI's top goal in 2015 was to make progress on these problems.\nWe met our expectations for research progress in each category, with the exceptions of logical uncertainty and naturalized induction (where we made more progress than expected) and error tolerance (where we made less progress than expected).\nBelow I've provided a brief summary of our progress in each area, with additional details and a full publication list in collapsed \"Read More\" sections. Some of the papers we published in 2015 were based on research from 2014 or earlier, and some of our 2015 results weren't published until 2016 (or remain unpublished). In this review I'll focus on 2015's new technical developments, rather than on pre-2015 material that happened to be published in that year.\nLogical Uncertainty and Naturalized Induction\nWe expected to make modest progress on these two problems in 2015. I'm pleased to report we made sizable progress.\n2015 saw the tail end of our development of reflective oracles, and early work on \"optimal estimators.\" Our most important research advance of the year, however, was likely our success dividing logical uncertainty into two subproblems, which happened in late 2015 and the very beginning of 2016.\nOne intuitive constraint on correct logically uncertain reasoning is that one's probabilities reflect known logical relationships between claims. For example, if you know that two claims are mutually exclusive (such as \"this computation outputs a 3\" and \"this computation outputs a 7\"), then even if you can't evaluate the claims, you should assign probabilities to the two claims that sum to at most 1.\nA second intuitive constraint is that one's probabilities reflect empirical regularities. Once you observe enough digits of π, you should eventually guess that the digits 8 and 3 occur equally often in π's decimal expansion, even if you have not yet proven that π is normal.\nIn 2015, we developed two different algorithms to solve these two subproblems in isolation.\n\n\n\n Read More\n\n\n\nIn collaboration with Benya Fallenstein and other MIRI researchers, Scott Garrabrant solved the problem of respecting logical relationships in a series of Intelligent Agent Foundations Forum (IAFF) posts, resulting in the \"Inductive Coherence\" paper. The problem of respecting observational patterns in logical sentences was solved by Scott and the MIRIxLosAngeles group in \"Asymptotic Logical Uncertainty and the Benford Test,\" which was further developed into the \"Asymptotic Convergence in Online Learning with Unbounded Delays\" paper in 2016.\nThese two approaches to logical uncertainty were not only nonequivalent, but seemed to preclude each other. The obvious next step is to investigate whether there is a way to solve both subproblems at once with a single procedure—a task we have since made some (soon-to-be-announced) progress on in 2016.\nMIRI research associate Vanessa Kosoy's work on her \"optimal estimators\" framework represents a large separate corpus of work on logical uncertainty, which may also have applications for decision theory. 
Vanessa's work has not yet been officially published, but much of it is available on IAFF.\nOur other significant result in logical uncertainty was Benya Fallenstein, Jessica Taylor, and Paul Christiano's reflective oracles, building on work that began before 2015 (IAFF digest). Reflective oracles avoid a number of paradoxes that normally arise when agents attempt to answer questions about equivalently powerful agents, allowing us to study multi-agent dilemmas and reflective reasoning with greater precision.\nReflective oracles are interesting in their own right, and have proven applicable to a number of distinct open problems. The fact that reflective oracles require no privileged agent/environment distinction suggests that they're a step in the right direction for naturalized induction. Jan Leike has recently demonstrated that reflective oracles also solve a longstanding open problem in game theory, the grain of truth problem. Reflective oracles provide the first complete decision-theoretic foundation for game theory, showing that general-purpose methods for maximizing expected utility can achieve approximate Nash equilibria in repeated games.\nIn summary, our 2015 logical uncertainty and naturalized induction papers based on pre-2015 work were:\n\nB Fallenstein, J Taylor, P Christiano. \"Reflective Oracles: A Foundation for Classical Game Theory.\" arXiv:1508.04145 [cs.AI]. Published in abridged form in Proceedings of LORI 2015.\nN Soares. \"Formalizing Two Problems of Realistic World-Models.\" MIRI tech report 2015-3.\nN Soares, B Fallenstein. \"Questions of Reasoning under Logical Uncertainty.\" MIRI tech report 2015-1.\n\n2015 research published the same year:\n\nB Fallenstein, N Soares, J Taylor. \"Reflective Variants of Solomonoff Induction and AIXI.\" Published in Proceedings of AGI 2015.\n\n2015 research published in 2016 or forthcoming:\n\nS Garrabrant, S Bhaskhar, A Demski, J Garrabrant, G Koleszarik, E Lloyd. \"Asymptotic Logical Uncertainty and The Benford Test.\" arXiv:1510.03370 [cs.LG]. Forthcoming at AGI 2016.\nS Garrabrant, B Fallenstein, A Demski, N Soares. \"Inductive Coherence.\" arXiv:1604.05288 [cs:AI].\nS Garrabrant, N Soares, J Taylor. \"Asymptotic Convergence in Online Learning with Unbounded Delays.\" arXiv:1604.05280 [cs:LG].\nV Kosoy. Formally unpublished results on the optimal estimators framework.\nJ Leike, J Taylor, B Fallenstein. \"A Formal Solution to the Grain of Truth Problem.\" Presented at the 32nd Conference on Uncertainty in Artificial Intelligence.\n\nFor other logical uncertainty work on IAFF, see The Two-Update Problem, Subsequence Induction, and Strict Dominance for the Modified Demski Prior.\n\n\n\n\n\nDecision Theory\nIn 2015 we produced a number of new incremental advances in decision theory, constituting modest progress, in line with our expectations.\nOf these advances, we have published Andrew Critch's proof of a version of Löb's theorem and Gödel's second incompleteness that holds for bounded reasoners.\nCritch applies this parametric bounded version of Löb's theorem to prove that a wide range of resource-limited software agents, given access to each other's source code, can achieve unexploitable mutual cooperation in the one-shot prisoner's dilemma. 
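As a toy illustration of the setting (agents that condition on one another's source code), here is a short sketch. It does not implement Critch's bounded proof search or anything Löbian; it only shows the much weaker "cooperate with exact copies" strategy, with function names invented for the example.

```python
# Toy one-shot prisoner's dilemma between programs that can read each other's
# source code. This is a far simpler strategy than the provability-based
# cooperation discussed above, but it illustrates the setup.
import inspect

def clique_bot(opponent_source):
    """Cooperate iff the opponent's source code is identical to my own."""
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

def defect_bot(opponent_source):
    """Always defect, regardless of the opponent."""
    return "D"

def play(agent1, agent2):
    move1 = agent1(inspect.getsource(agent2))
    move2 = agent2(inspect.getsource(agent1))
    return move1, move2

print(play(clique_bot, clique_bot))  # ('C', 'C'): mutual cooperation
print(play(clique_bot, defect_bot))  # ('D', 'D'): no exploitation
```

The provability-based agents in the robust cooperation line of work manage to cooperate with a much wider class of opponents than exact copies, which is part of what makes the bounded result interesting.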
Although we considered our past robust cooperation results strong reason to believe that bounded cooperation was possible, the confirmation is useful and gives us new formal tools for studying bounded reasoners.\nOver this period, Eliezer Yudkowsky, Benya Fallenstein, and Nate Soares also improved our technical (and philosophical) understanding of the decision theory we currently favor, \"functional decision theory\"—a slightly modified version of updateless decision theory.\n\n\n\n Read More\n\n\n\nThe biggest obstacle to formalizing decision theory currently seems to be that we lack a suitable formal account of logical counterfactuals. Logical counterfactuals are questions of the form \"If X (which I know to be false) were true, what (if anything) would that imply about Y?\" These are important in decision theory, one special case being off-policy predictions. (Even if I can predict that I'm definitely not taking action X, I want to be able to ask what would ensue if I did; a wrong answer to this can lead to me accepting substandard self-fulfilling prophecies like two-boxing in the transparent Newcomb problem.)\nIn 2015, we examined a decision theory related to functional decision theory, proof-based decision theory, that has proven easier to formalize. We found that proof-based decision theory's lack of logical counterfactuals is a serious weakness for the theory.\nWe explored some proof-length-based approaches to logical counterfactuals, and ultimately rejected them, though we have continued to devote some thought to this approach. During our first 2015 workshop, Scott Garrabrant proposed an informal conjecture on proof length and counterfactuals, which was subsequently revised; but both versions of the conjecture were shown to be false by Sam Eisenstat (1, 2). (See also Scott's Optimal and Causal Counterfactual Worlds.)\nIn a separate line of research, Patrick LaVictoire and others applied the proof-based decision theory framework to questions of bargaining and division of trade gains. For other decision theory work on IAFF, see Vanessa and Scott's Superrationality in Arbitrary Games and Armstrong's Reflective Oracles and Superrationality: Prisoner's Dilemma.\nOur github repository contains lots of new code from our work on modal agents, representing our most novel work on decision theory in the past year. We have one or two papers in progress that will explain the advances we've made in decision theory via this work. See \"Evil\" Decision Problems in Provability Logic and other posts in the decision theory IAFF digest for background on modal universes.\nPre-2015 work published in 2015:\n\nN Soares, B Fallenstein. \"Toward Idealized Decision Theory.\" 2014 tech report published in abridged form in Proceedings of AGI 2015.\n\n2015 research published in 2016 or forthcoming:\n\nA Critch. \"Parametric Bounded Löb's Theorem and Robust Cooperation of Bounded Agents.\" arXiv:1602.04184 [cs:GT].\nB Fallenstein. Formally unpublished results on modal universes.\nS Garrabrant, S Eisenstat, P LaVictoire, J Lee, H Dell. Formally unpublished results on logical counterfactuals.\nE Yudkowsky, N Soares. 
Unpublished results on functional decision theory.\n\n\n\n\n\nVingean Reflection\nWe were expecting modest progress on these problems in 2015, and we made modest progress.\nBenya Fallenstein and Ramana Kumar's \"Proof-Producing Reflection for HOL\" demonstrates a practical form of self-reference (and a partial solution to both the Löbian obstacle and the procrastination paradox) in the HOL theorem prover. This result provides some evidence that it is possible for a reasoning system to trust another reasoning system that reasons the same way, so long as the systems have different internal states.\n\n\n\n Read More\n\n\n\nMore specifically, this paper establishes that it is possible to formally specify an infinite chain of reasoning systems such that each system trusts the next system in the chain, as long as the reasoners are unable to delegate any individual task indefinitely.\nThere is some internal debate within MIRI about what more is required for real-world Vingean reflection, aside from satisfactory accounts of logical uncertainty and logical counterfactuals. There's also debate about whether any better results than this are likely to be possible in the absence of a full theory of logical uncertainty. Regardless, \"Proof-Producing Reflection for HOL\" demonstrates, via machine-checked proof, that it is possible to implement a form of reflective reasoning that is remarkably strong.\nBenya and Ramana's work also provides us with an environment in which to build better toy models of reflective reasoners. Jack Gallagher, a MIRI research intern, is currently implementing a cellular automaton in HOL that will let us implement reflective agents.\nBy applying results from the reflective oracles framework mentioned above, we also improved our theoretical understanding of Vingean reflection. In the IAFF post A Limit-Computable, Self-Reflective Distribution, research associate Tsvi Benson-Tilsen helped solidify our understanding of what kinds of reflection are and aren't possible. Jessica, working with Benya and Paul, further showed that reflective oracles can't readily be used to define reflective probabilistic logics.\nPre-2015 work published in 2015:\n\nB Fallenstein, N Soares. \"Vingean Reflection: Reliable Reasoning for Self-Improving Agents.\" MIRI tech report 2015-2.\n\n2015 research published the same year:\n\nB Fallenstein, R Kumar. \"Proof-Producing Reflection for HOL: With an Application to Model Polymorphism.\" Published in Interactive Theorem Proving: 6th International Conference, ITP 2015, Nanjing, China, August 24-27, 2015, Proceedings.\n\nOther relevant IAFF posts include A Simple Model of the Löbstacle, Waterfall Truth Predicates, and Existence of Distributions that are Expectation-Reflective and Know It.\n\n\n\n\nError Tolerance\nWe were expecting modest progress on these problems in 2015, but we made only limited progress.\nCorrigibility was a mid-level priority for us in 2015, and we spent some effort trying to build better models of corrigible agents. In spite of this, we didn't achieve any big breakthroughs. 
We made some progress on fixing minor defects in our understanding of corrigibility, reflected, e.g., in our error-tolerance IAFF digest, Stuart Armstrong's AI control ideas, and Jessica Taylor's overview post; but these results are relatively small.\n\n\n\n Read More\n\n\nIn 2015 our main novelties were Google DeepMind researcher Laurent Orseau and FHI researcher / MIRI research associate Stuart Armstrong's work on corrigibility (\"Safely Interruptible Agents\"), along with work on two other error tolerance subproblems: mild optimization (Jessica's Quantilizers and Abram Demski's Structural Risk Minimization) and conservative concepts (Jessica's Learning a Concept Using Only Positive Examples).\nPre-2015 work published in 2015:\n\nN Soares, B Fallenstein, E Yudkowsky, S Armstrong. \"Corrigibility.\" 2014 tech report presented at the AAAI 2015 Ethics and Artificial Intelligence Workshop.\n\n2015 research published in 2016 or forthcoming:\n\nL Orseau, S Armstrong. \"Safely Interruptible Agents.\" Presented at the 32nd Conference on Uncertainty in Artificial Intelligence.\nJ Taylor. \"Quantilizers: A Safer Alternative to Maximizers for Limited Optimization.\" Presented at the AAAI 2016 AI, Ethics and Society Workshop.\n\nOur failure to make much progress on corrigibility may be a sign that corrigibility is not as tractable a problem as we thought, or that more progress is needed in areas like logical uncertainty (so that we can build better models of AI systems that model their operators as uncertain about the implications of their preferences) before we can properly formalize corrigibility.\nWe are more optimistic about corrigibility research, however, in light of recent advances in logical uncertainty and some promising discussions of related topics at our recent colloquium series: \"Cooperative Inverse Reinforcement Learning\" (via Stuart Russell's group), \"Avoiding Wireheading with Value Reinforcement Learning\" (via Tom Everitt), and some items in Stuart Armstrong's bag of tricks.\n\n\n\n\nValue Specification\nWe were expecting limited progress on these problems in 2015, and we made limited progress. \nValue learning and related problems were low-priority for us last year, so we didn't see any big advances.\n\n\n\n Read More\n\n\nMIRI research associate Kaj Sotala made value specification his focus, examining several interesting questions outside our core research agenda. Jessica Taylor also began investigating the problem on the research forum. \nPre-2015 work published in 2015:\n\nK Sotala. \"Concept Learning for Safe Autonomous AI.\" Presented at the AAAI 2015 Ethics and Artificial Intelligence Workshop.\n\n2015 research published in 2016 or forthcoming:\n\nK Sotala. \"Defining Human Values for Value Learners.\" Presented at the AAAI 2016 AI, Ethics and Society Workshop.\n\nError-tolerant agent designs and value specification will be larger focus areas for us going forward, under the alignment for advanced machine learning systems research program.\n\n\n\n\nMiscellaneous\nWe released our technical agenda in late 2014 and early 2015. The overview paper, \"Agent Foundations for Aligning Machine Intelligence with Human Interests,\" is slated for external publication in The Technological Singularity in 2017.\nIn 2015 we also produced some research unrelated to our agent foundations agenda. This research generally focused on forecasting and strategy questions.\n\n\n\n Read More\n\n\nPre-2015 work published in 2015:\n\nS Armstrong, N Bostrom, C Shulman. 
\"Racing to the Precipice: A Model of Artificial Intelligence Development.\" 2013 tech report published in AI & Society.\nK Grace. \"The Asilomar Conference: A Case Study in Risk Mitigation.\" MIRI tech report 2015-9.\nK Grace. \"Leó Szilárd and the Danger of Nuclear Weapons: A Case Study in Risk Mitigation.\" MIRI tech report 2015-10.\nP LaVictoire. \"An Introduction to Löb's Theorem in MIRI Research.\" MIRI tech report 2015-6.\n\n2015 research published in 2016 or forthcoming:\n\nT Benson-Tilsen, N Soares. \"Formalizing Convergent Instrumental Goals.\" Presented at the AAAI 2016 AI, Ethics and Society Workshop.\n\nBeginning in 2015, new AI strategy/forecasting research supported by MIRI has been hosted on Katja Grace's independent AI Impacts project. AI Impacts featured 31 new articles and 27 new blog posts in 2015, on topics from the range of human intelligence to computing cost trends.\n\n\n\n\nOn the whole, we're happy about our 2015 research output and expect our team growth to further accelerate technical progress.\n \n2015 Research Support Activities\nFocusing on activities that directly grew the technical research community or facilitated technical research and collaborations, in 2015 we:\n\nLaunched the Intelligent Agent Foundations Forum, a public discussion forum for AI alignment researchers. MIRI researchers and collaborators made 139 top-level posts to IAFF in 2015.\nHired four new full-time research fellows. Patrick LaVictoire joined in March, Jessica Taylor in August, Andrew Critch in September, and Scott Garrabrant in December. With Nate transitioning to a non-research role, overall we grew from a three-person research team (Eliezer, Benya, and Nate) to a six-person team.\nOverhauled our research associates program. Before 2015, our research associates were mostly unpaid collaborators with varying levels of involvement in our active research. Following our successful summer fundraiser, we made \"research associate\" a paid position in which researchers based at other institutions spend significant amounts of time on research projects for us. Under this program, Stuart Armstrong, Tsvi Benson-Tilsen, Abram Demski, Vanessa Kosoy, Ramana Kumar, Kaj Sotala, and (prior to joining MIRI full-time) Scott Garrabrant all made significant contributions in associate roles.\nHired three research interns. Kaya Stechly and Rafael Cosman worked on polishing and consolidating old MIRI results (example on IAFF), while Jack Gallagher worked on our type theory in type theory project (github repo).\nAcquired two new research advisors, Stuart Russell and Bart Selman.\nHosted six summer workshops and sponsored the three-week MIRI Summer Fellows program. These events helped forge a number of new academic connections and directly resulted in us making job offers to two extremely promising attendees: Mihály Bárász (who has plans to join at a future date) and Scott Garrabrant.\nHelped organize two other academic events, a Cambridge decision theory conference and a ten-week AI alignment seminar series at UC Berkeley. We also ran 6 research retreats, sponsored 36 MIRIx events, and spoke at an Oxford Big Picture Thinking seminar series.\nSpoke at five other academic events. We participated in the Future of Life Institute's \"Future of AI\" conference, AAAI-15, AGI-15, LORI 2015, and APS 2015. We also attended NIPS.\n\nI'm excited about our 2015 progress in growing our team and collaborating with the larger academic community. 
Over the course of the year, we built closer relationships with people at Google DeepMind, Google Brain, OpenAI, Vicarious, Good AI, the Future of Humanity Institute, and other research groups. All of this has put us in a better position to share our research results, methodology, and goals with other researchers, and to attract new talent to AI alignment work.\n \n2015 General Activities\nBeyond direct research support, in 2015 we:\n\nTransitioned to new leadership under Nate Soares. The transition went smoothly, attesting to our organizational robustness.2\nMoved to larger offices and hired an office manager to support our growing team.\nGave talks and participated in panel discussions at EA Global and at ITIF.\nPublished 20 new strategic and expository pieces: 11 strategic analyses, four MIRI strategy overviews, two interviews, a new MIRI FAQ, a new About MIRI page, and an annotated research agenda bibliography. Some of our best introductory posts have been collected on intelligence.org/info. Eliezer also contributed to Edge.org and Nate participated in an Ask Me Anything on the EA Forum.\nReleased Rationality: From AI to Zombies on the eve of Eliezer's completion of Harry Potter and the Methods of Rationality. We also concluded a reading group for Nick Bostrom's Superintelligence.\nRevamped many of our online tools and webpages: AI Impacts, A Guide to MIRI's Research, and our Get Involved page and general application form. We also launched a MIRI technical results mailing list.\nWere prominently cited in the \"Research Priorities for Robust and Beneficial Artificial Intelligence\" report, and were initial signatories of the attached AI safety open letter. Some of our basic AI forecasting arguments were subsequently echoed by Open Philanthropy Project analysts.\nReceived press coverage from The Atlantic, Nautilus, MIT Technology Review, Financial Times, Slate, PC World, National Post, CBC News, Tech Times, and the Discover blog.\n\nAlthough we have deemphasized outreach efforts, we continue to expect these activities to be useful for spreading general awareness about MIRI, our research program, and AI safety research more generally. Ultimately, we expect this to help build our donor base, as well as attract potential future researchers (to MIRI and the field more generally), as with our past outreach and capacity-building efforts.\n \n2015 Fundraising\nI am very pleased with our fundraising performance. In 2015 we:\n\nContinued our strong fundraising growth, with a total of $1,584,109 in contributions.3\nReceived $166,943 in grants from the Future of Life Institute (FLI), with another ~$80,000 annually for the next two years.4\nExperimented with a new kind of fundraiser (non-matching, with multiple targets). I consider these experiments to have been successful. Our summer fundraiser was our biggest fundraiser of to date, raising $632,011, and our winter fundraiser also went well, raising $328,148.\n\n\nTotal contributions grew 28% in 2015. 
This was driven by an increase in contributions from new funders, including a one-time $219,000 contribution from an anonymous funder, $166,943 in FLI grants, and at least $137,023 from Raising for Effective Giving (REG) and regranting from the Effective Altruism Foundation.5 The decrease in contributions from returning funders is due to Peter Thiel's discontinuation of support in 2015, plus a large one-time outlier donation from Jed McCaleb in the years prior ($526,316 arriving in 2013, $104,822 in 2014).\nDrawing conclusions from these year-by-year comparisons is a little tricky. MIRI underwent significant organizational changes over this time span, particularly in 2013. We switched to accrual-based accounting in 2014, which also complicates comparisons with previous years.6 In general, though, we're continuing to see solid fundraising growth.\n\nThe number of new funders decreased from 2014 to 2015. In our 2014 review, Luke explains the large increase in funders in 2014:\nNew donor growth was strong in 2014, though this mostly came from small donations made during the SV Gives fundraiser. A significant portion of growth in returning donors can also be attributed to lapsed donors making small contributions during the SV Gives fundraiser.\nComparing our numbers in 2015 and 2013, we see healthy growth in the number of returning funders and total number of funders.\n\nThe above chart shows contributions in past years from small, mid-sized, large, and very large funder segments. Contributions from the three largest segments increased (approximately) proportionally from last year, with the notable exception of contributions from large funders, which increased from 26% to 31% of total contributions. We had a small year-over-year decrease in contributions in the small funder segment, which is again due having received an unusually large amount of small contributions during SV Gives in 2014.\nAs in past years, a full report on our finances (in the form of an independent accountant's review report) will be made available on our transparency and financials page. The report will most likely be up in late August or early September.\n \n2016 and Beyond\nWhat's next? Beyond our research goal of making significant progress in five of our six focus areas, we set the following operational goals for ourselves in July/August 2015:\n\nAccelerated growth: \"expand to a roughly ten-person core research team.\" (source)\nType theory in type theory project: \"hire one or two type theorists to work on developing relevant tools full-time.\" (source)\nVisiting scholar program: \"have interested professors drop by for the summer, while we pay their summer salaries and work with them on projects where our interests overlap.\" (source)\nIndependent review: \"We're also looking into options for directly soliciting public feedback from independent researchers regarding our research agenda and early results.\" (source)\nHigher-visibility publications: \"Our current plan this year is to focus on producing a few high-quality publications in elite venues.\" (source)\n\nIn 2015 we doubled the size of our research team from three to six. With the restructuring of our research associates program and the addition of two research interns, I'm pleased with the growth we achieved in 2015. 
We deemphasized growth in the first half of 2016 in order to focus on onboarding, but plan to expand again by the end of the year.\nWe have a job ad out for our type theorist position, which will likely be filled after we make our next few core researcher hires. In the interim, we've been having our research intern Jack Gallagher work on the type theory in type theory project, and we also ran an April 2016 type theory workshop.\nWith help from our research advisors, our visiting scholars program morphed into a three-week-long colloquium series. Rather than hosting a handful of researchers for longer periods of time, we hosted over fifty researchers for shorter stretches of time, comparing notes on a wide variety of active AI safety research projects. Speakers at the event included Stuart Russell, Francesca Rossi, Tom Dietterich, and Bart Selman. We're also collaborating with Stuart Russell on a corrigibility grant.\nWork is underway on conducting an external review of our research program; the results should be available in the next few months.\nWith regard to our fifth goal, in addition to \"Proof-Producing Reflection for HOL\" (which was presented at ITP 2015 in late August), we've since published papers at LORI-V (\"Reflective Oracles\"), at UAI 2016 (\"Safely Interruptible Agents\" and \"A Formal Solution to the Grain of Truth Problem\"), and at an IJCAI 2016 workshop (\"The Value Learning Problem\"). Of those venues, UAI is generally considered more prestigious than most of the venues we have previously published in. I'd count this as moderate (but not great) progress towards the goal of publishing in more elite venues. Nate will have more to say about our future publication plans.\nElaborating further on our plans would take me beyond the scope of this review. In the coming weeks, Nate will be providing more details on our 2016 activities and our goals going forward in a big-picture MIRI strategy post.7\n\nFootnotes\n1. This paper was originally titled \"Aligning Superintelligence with Human Interests.\" We've renamed it in order to emphasize that this research agenda takes a specific approach to the alignment problem, and other approaches are possible too—including, relevantly, Jessica Taylor's new \"Alignment for Advanced Machine Learning Systems\" agenda.\n2. I (Malo Bourgon) more recently took on a leadership role as MIRI's new COO and second-in-command.\n3. $80,480 of this was earmarked funding for the AI Impacts project.\n4. MIRI is administering three FLI grants (and participated in a fourth). We are to receive $250,000 over three years to fund work on our agent foundations technical agenda, $49,310 towards AI Impacts, and we are administering Ramana's $36,750 to study self-reference in the HOL theorem prover in collaboration with Benya.\n5. This only counts direct contributions through REG to MIRI. REG's support for MIRI is likely closer to $200,000 when accounting for contributions made directly to MIRI as a result of REG's advice to funders.\n6. Also note that numbers in this section might not exactly match previously published estimates, since small corrections are often made to contributions data. 
Finally, note that these numbers do not include in-kind donations.\n7. My thanks to Rob Bensinger for his substantial contributions to this review.\nThe post 2015 in review appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "2015 in review", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=27", "id": "23464e11b8a08397804d805a132f17b6"} {"text": "New paper: \"Alignment for advanced machine learning systems\"\n\nMIRI's research to date has focused on the problems that we laid out in our late 2014 research agenda, and in particular on formalizing optimal reasoning for bounded, reflective decision-theoretic agents embedded in their environment. Our research team has since grown considerably, and we have made substantial progress on this agenda, including a major breakthrough in logical uncertainty that we will be announcing in the coming weeks.\nToday we are announcing a new research agenda, \"Alignment for advanced machine learning systems.\" Going forward, about half of our time will be spent on this new agenda, while the other half is spent on our previous agenda. The abstract reads:\nWe survey eight research areas organized around one question: As learning systems become increasingly intelligent and autonomous, what design principles can best ensure that their behavior is aligned with the interests of the operators? We focus on two major technical obstacles to AI alignment: the challenge of specifying the right kind of objective functions, and the challenge of designing AI systems that avoid unintended consequences and undesirable behavior even in cases where the objective function does not line up perfectly with the intentions of the designers.\nOpen problems surveyed in this research proposal include: How can we train reinforcement learners to take actions that are more amenable to meaningful assessment by intelligent overseers? What kinds of objective functions incentivize a system to \"not have an overly large impact\" or \"not have many side effects\"? We discuss these questions, related work, and potential directions for future research, with the goal of highlighting relevant research topics in machine learning that appear tractable today.\nCo-authored by Jessica Taylor, Eliezer Yudkowsky, Patrick LaVictoire, and Andrew Critch, our new report discusses eight new lines of research (previously summarized here). Below, I'll explain the rationale behind these problems, as well as how they tie in to our old research agenda and to the new \"Concrete problems in AI safety\" agenda spearheaded by Dario Amodei and Chris Olah of Google Brain.\n\nIncreasing safety by reducing autonomy\nThe first three research areas focus on issues related to act-based agents, notional systems that base their behavior on their users' short-term instrumental preferences:\n1. Inductive ambiguity identification: How can we train ML systems to detect and notify us of cases where the classification of test data is highly under-determined from the training data?\n2. Robust human imitation: How can we design and train ML systems to effectively imitate humans who are engaged in complex and difficult tasks?\n3. Informed oversight: How can we train a reinforcement learning system to take actions that aid an intelligent overseer, such as a human, in accurately assessing the system's performance?\nThese three problems touch on different ways we can make tradeoffs between capability/autonomy and safety. 
At one extreme, a fully autonomous, superhumanly capable system would make it uniquely difficult to establish any strong safety guarantees. We could reduce risk somewhat by building systems that are still reasonably smart and autonomous, but will pause to consult operators in cases where their actions are especially high-risk. Ambiguity identification is one approach to fleshing out which scenarios are \"high-risk\": ones where a system's experiences to date are uninformative about some fact or human value it's trying to learn.\nAt the opposite extreme, we can consider ML systems that are no smarter than their users, and take no actions other than what their users would do, or what their users would tell them to do. If we can correctly design a system to do what it thinks a trusted, informed human would do, we can trade away some of the potential benefits of advanced ML systems in exchange for milder failure modes.\nThese two extremes, human imitation and (mostly) autonomous goal pursuit, are useful objects of study because they help simplify and factorize out key parts of the problem. In practice, however, ambiguity identification is probably too mild a restriction on its own, and strict human imitation probably isn't efficiently implementable. Informed oversight considers more moderate approaches to keeping humans in the loop: designing more transparent ML systems that help operators understand the reasons behind selected actions.\nIncreasing safety without reducing autonomy\nWhatever guarantees we buy by looping humans into AI systems' decisions, we will also want to improve systems' reliability in cases where oversight is unfeasible. Our other five problems focus on improving the reliability and error-tolerance of systems autonomously pursuing real-world goals, beginning with the problem of specifying such goals in a robust and reliable way:\n4. Generalizable environmental goals: How can we create systems that robustly pursue goals defined in terms of the state of the environment, rather than defined directly in terms of their sensory data?\n5. Conservative concepts: How can a classifier be trained to develop useful concepts that exclude highly atypical examples and edge cases?\n6. Impact measures: What sorts of regularizers incentivize a system to pursue its goals with minimal side effects?\n7. Mild optimization: How can we design systems that pursue their goals \"without trying too hard\"—stopping when the goal has been pretty well achieved, as opposed to expending further resources searching for ways to achieve the absolute optimum expected score?\n8. Averting instrumental incentives: How can we design and train systems such that they robustly lack default incentives to manipulate and deceive their operators, compete for scarce resources, etc.?\nWhereas ambiguity-identifying learners are designed to predict potential ways they might run into edge cases and defer to human operators in those cases, conservative learners are designed to err in a safe direction in edge cases. If a cooking robot notices the fridge is understocked, should it try to cook the cat? The ambiguity identification approach says to notice when the answer to \"Are cats food?\" is unclear, and pause to consult a human operator; the conservative concepts approach says to just assume cats aren't food in uncertain cases, since it's safer for cooking robots to underestimate how many things are food than to overestimate it. 
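One crude way to picture the conservative-concepts option is the sketch below. It is only a hedged illustration: the features, the handful of "known food" examples, and the distance threshold are all made up, and this is not a method proposed in the agenda. It just shows the basic shape of labeling something as food only when it is close to examples labeled food during training, so that edge cases like the cat default to "not food."

```python
# Toy "conservative concept" classifier: an item counts as food only if it lies
# near known food examples in a (made-up) feature space; everything else is
# treated as not-food by default.
import numpy as np

known_food = np.array([
    [0.90, 0.20],  # bread
    [0.80, 0.30],  # rice
    [0.85, 0.25],  # pasta
])

def is_food_conservative(features, radius=0.15):
    """Return True only when `features` is within `radius` of some known food."""
    distances = np.linalg.norm(known_food - features, axis=1)
    return bool(distances.min() <= radius)

print(is_food_conservative(np.array([0.82, 0.28])))  # near known foods -> True
print(is_food_conservative(np.array([0.50, 0.90])))  # the cat -> False
```

An ambiguity-identification variant of the same toy would instead flag the cat case for a human operator rather than silently answering "not food."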
It remains unclear, however, how one might formalize this kind of reasoning in general.\nImpact measures provide another avenue for limiting the potential scope of AI mishaps. If we can define some measure of \"impact,\" we could design systems that can distinguish intuitively high-impact actions from low-impact ones and generally choose lower-impact options.\nAlternatively, instead of designing systems to try as hard as possible to have a low impact, we might design \"mild\" systems that simply don't try very hard to do anything. Limiting the resources a system will put into its decision (via mild optimization) is distinct from limiting how much change a system will decide to cause (via impact measures); both are under-explored risk reduction approaches.\nLastly, under the \"averting instrumental incentives\" umbrella category, we will explore a variety of approaches to preventing systems from having default incentives to treat their operators adversarially. Our hope in pursuing all of these research directions simultaneously is that systems combining these features will permit much higher confidence than systems implementing any one of them. This approach also serves as a hedge in case some of these problems turn out to be unsolvable in practice, and allows for ideas that worked well on one problem to be re-applied on others.\nConnections to other research agendas\nOur new technical agenda, our 2014 agenda, and \"Concrete problems in AI safety\" take different approaches to the problem of aligning AI systems with human interests, though there is a fair bit of overlap between the research directions they propose.\nWe've changed the name of our 2014 agenda to \"Agent foundations for aligning machine intelligence with human interests\" (from \"Aligning superintelligence with human interests\") to help highlight the ways it is and isn't similar to our newer agenda. 
For reasons discussed in our advance announcement of \"Alignment for advanced machine learning systems,\" our new agenda is intended to help more in scenarios where advanced AI is relatively near and relatively directly descended from contemporary ML techniques, while our agent foundations agenda is more agnostic about when and how advanced AI will be developed.\nAs we recently wrote, we believe that developing a basic formal theory of highly reliable reasoning and decision-making \"could make it possible to get very strong guarantees about the behavior of advanced AI systems — stronger than many currently think is possible, in a time when the most successful machine learning techniques are often poorly understood.\" Without such a theory, AI alignment will be a much more difficult task.\nThe authors of \"Concrete problems in AI safety\" write that their own focus \"is on the empirical study of practical safety problems in modern machine learning systems, which we believe is likely to be robustly useful across a broad variety of potential risks, both short- and long-term.\" Their paper discusses a number of the same problems as the alignment for ML agenda (or closely related ones), but directed more toward building on existing work and finding applications in present-day systems.\nWhere the agent foundations agenda can be said to follow the principle \"start with the least well-understood long-term AI safety problems, since those seem likely to require the most work and are the likeliest to seriously alter our understanding of the overall problem space,\" the concrete problems agenda follows the principle \"start with the long-term AI safety problems that are most applicable to systems today, since those problems are the easiest to connect to existing work by the AI research community.\"\nTaylor et al.'s new agenda is less focused on present-day and near-future systems than \"Concrete problems in AI safety,\" but is more ML-oriented than the agent foundations agenda. This chart helps map some of the correspondences between the topics the agent foundations agenda (plain text), the concrete problems agenda (italics), and the alignment for ML agenda (bold) discuss:\nWork related to high reliability\n\nrealistic world-models ~ generalizable environmental goals ~ avoiding reward hacking\n\nnaturalized induction\nontology identification\n\n\ndecision theory\nlogical uncertainty\nVingean reflection\n\nWork related to error tolerance\n\ninductive ambiguity identification = ambiguity identification ~ robustness to distributional change\nrobust human imitation\ninformed oversight ~ scalable oversight\nconservative concepts\nimpact measures = domesticity ~ avoiding negative side effects\nmild optimization\naverting instrumental incentives\n\ncorrigibility\n\n\nsafe exploration\n\n\n\"~\" notes (sometimes very rough) similarities and correspondences, while \"=\" notes different names for the same concept.\nAs an example, \"realistic world-models\" and \"generalizable environmental goals\" are both aimed at making the environment and goal representations of reinforcement learning formalisms like AIXI more robust, and both can be viewed as particular strategies for avoiding reward hacking. 
Our work under the agent foundations agenda has mainly focused on formal models of AI systems in settings without clear agent/environment boundaries (naturalized induction), while our work under the new agenda will focus more on the construction of world-models that admit of the specification of goals that are environmental rather than simply perceptual (ontology identification).\nFor a fuller discussion of the relationship between these research topics, see Taylor et al.'s paper.\n\nThe post New paper: \"Alignment for advanced machine learning systems\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New paper: “Alignment for advanced machine learning systems”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=27", "id": "2ff25597fd42d910e531e12ab540fce6"} {"text": "Submission to the OSTP on AI outcomes\n\nThe White House Office of Science and Technology Policy recently put out a request for information on \"(1) The legal and governance implications of AI; (2) the use of AI for public good; (3) the safety and control issues for AI; (4) the social and economic implications of AI;\" and a variety of related topics. I've reproduced MIRI's submission to the RfI below:\n\nI. Review of safety and control concerns\nAI experts largely agree that AI research will eventually lead to the development of AI systems that surpass humans in general reasoning and decision-making ability. This is, after all, the goal of the field. However, there is widespread disagreement about how long it will take to cross that threshold, and what the relevant AI systems are likely to look like (autonomous agents, widely distributed decision support systems, human/AI teams, etc.).\nDespite the uncertainty, a growing subset of the research community expects that advanced AI systems will give rise to a number of foreseeable safety and control difficulties, and that those difficulties can be preemptively addressed by technical research today. Stuart Russell, co-author of the leading undergraduate textbook in AI and professor at U.C. Berkeley, writes:\nThe primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:\n1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.\n2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.\nA system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.\n\nGeneral updates\n\nWe attended the White House's Workshop on Safety and Control in AI.\nOur 2016 MIRI Summer Fellows Program recently drew to a close. The program, run by the Center for Applied Rationality, aims to train AI scientists' and mathematicians' research and decision-making skills.\n\"Why Ain't You Rich?\": Nate Soares discusses decision theory in Dawn or Doom. 
See \"Toward Idealized Decision Theory\" for context.\nNumerai, an anonymized distributed hedge fund for machine learning researchers, has added an option for donating earnings to MIRI \"as a hedge against things going horribly right\" in the field of AI.\n\n\nNews and links\n\nThe White House is requesting information on \"safety and control issues for AI,\" among other questions. Public submissions will be accepted through July 22.\n\"Concrete Problems in AI Safety\": Researchers from Google Brain, OpenAI, and academia propose a very promising new AI safety research agenda. The proposal is showcased on the Google Research Blog and the OpenAI Blog, as well as the Open Philanthropy Blog, and has received press coverage from Bloomberg, The Verge, and MIT Technology Review.\nAfter criticizing the thinking behind OpenAI earlier in the month, Alphabet executive chairman Eric Schmidt comes out in favor of AI safety research:\n\n\nDo we worry about the doomsday scenarios? We believe it's worth thoughtful consideration. Today's AI only thrives in narrow, repetitive tasks where it is trained on many examples. But no researchers or technologists want to be part of some Hollywood science-fiction dystopia. The right course is not to panic—it's to get to work. Google, alongside many other companies, is doing rigorous research on AI safety, such as how to ensure people can interrupt an AI system whenever needed, and how to make such systems robust to cyberattacks.\n\n\n\n\n\nDylan Hadfield-Mennell, Anca Dragan, Pieter Abbeel, and Stuart Russell propose a formal definition of the value alignment problem as \"Cooperative Inverse Reinforcement Learning,\" a two-player game where a human and robot are both \"rewarded according to the human's reward function, but the robot does not initially know what this is.\" In a CSRBAI talk (slides), Hadfield-Mennell discusses applications for AI corrigibility.\nJaan Tallinn brings his AI risk focus to the Bulletin of Atomic Scientists.\nStephen Hawking weighs in on intelligence explosion (video). Sam Harris and Neil DeGrasse Tyson debate the idea at greater length (audio, at 1:22:37).\nEthereum developer Vitalik Buterin discusses the implications of value complexity and fragility and other AI safety concepts for cryptoeconomics.\nWired covers a \"demonically clever\" backdoor based on chips' analog properties.\nCNET interviews MIRI and a who's who of AI scientists for a pair of articles: \"AI, Frankenstein? 
Not So Fast, Experts Say\" and \"When Hollywood Does AI, It's Fun But Farfetched.\"\nNext month's Effective Altruism Global conference is accepting applicants.\n\n\n\n\n\n\n\nThe post July 2016 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "July 2016 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=28", "id": "5e07b44201658d1f97a8b4cc7eb1c1e4"} {"text": "New paper: \"A formal solution to the grain of truth problem\"\n\n\nFuture of Humanity Institute Research Fellow Jan Leike and MIRI Research Fellows Jessica Taylor and Benya Fallenstein have just presented new results at UAI 2016 that resolve a longstanding open problem in game theory: \"A formal solution to the grain of truth problem.\"\nGame theorists have techniques for specifying agents that eventually do well on iterated games against other agents, so long as their beliefs contain a \"grain of truth\" — nonzero prior probability assigned to the actual game they're playing. Getting that grain of truth was previously an unsolved problem in multiplayer games, because agents can run into infinite regresses when they try to model agents that are modeling them in turn. This result shows how to break that loop: by means of reflective oracles.\nIn the process, Leike, Taylor, and Fallenstein provide a rigorous and general foundation for the study of multi-agent dilemmas. This work provides a surprising and somewhat satisfying basis for approximate Nash equilibria in repeated games, folding a variety of problems in decision and game theory into a common framework.\nThe paper's abstract reads:\nA Bayesian agent acting in a multi-agent environment learns to predict the other agents' policies if its prior assigns positive probability to them (in other words, its prior contains a grain of truth). Finding a reasonably large class of policies that contains the Bayes-optimal policies with respect to this class is known as the grain of truth problem. Only small classes are known to have a grain of truth and the literature contains several related impossibility results.\nIn this paper we present a formal and general solution to the full grain of truth problem: we construct a class of policies that contains all computable policies as well as Bayes-optimal policies for every lower semicomputable prior over the class. When the environment is unknown, Bayes-optimal agents may fail to act optimally even asymptotically. However, agents based on Thompson sampling converge to play ε-Nash equilibria in arbitrary unknown computable multi-agent environments. While these results are purely theoretical, we show that they can be computationally approximated arbitrarily closely.\nTraditionally, when modeling computer programs that model the properties of other programs (such as when modeling an agent reasoning about a game), the first program is assumed to have access to an oracle (such as a halting oracle) that can answer arbitrary questions about the second program. This works, but it doesn't help with modeling agents that can reason about each other.\nWhile a halting oracle can predict the behavior of any isolated Turing machine, it cannot predict the behavior of another Turing machine that has access to a halting oracle. 
If this were possible, the second machine could use its oracle to figure out what the first machine-oracle pair thinks it will do, at which point it can do the opposite, setting up a liar paradox scenario. For analogous reasons, two agents with similar resources, operating in real-world environments without any halting oracles, cannot perfectly predict each other in full generality.\nGame theorists know how to build formal models of asymmetric games between a weaker player and a stronger player, where the stronger player understands the weaker player's strategy but not vice versa. For the reasons above, however, games between agents of similar strength have resisted full formalization. As a consequence of this, game theory has until now provided no method for designing agents that perform well on complex iterated games containing other agents of similar strength.\n\nUsually, the way to build an ideal agent is to have the agent consider a large list of possible policies, predict how the world would respond to each policy, and then choose the best policy by some metric. However, in multi-player games, if your agent considers a big list of policies that both it and the opponent might play, then the best policy for the opponent is usually some alternative policy that was not in your list. (And if you add that policy to your list, then the new best policy for the opponent to play is now a new alternative that wasn't in the list, and so on.)\nThis is the grain of truth problem, first posed by Kalai and Lehrer in 1993: define a class of policies that is large enough to be interesting and realistic, and for which the best response to an agent that considers that policy class is inside the class.1\nTaylor and Fallenstein have developed a formalism that enables a solution: reflective oracles capable of answering questions about agents with access to equivalently powerful oracles. Leike has led work on demonstrating that this formalism can solve the grain of truth problem, and in the process shows that the Bayes-optimal policy generally does not converge to a Nash equilibrium. Thompson sampling, however, does converge to a Nash equilibrium — a result that comes out of another paper presented at UAI 2016, Leike, Lattimore, Orseau, and Hutter's \"Thompson sampling is asymptotically optimal in general environments.\"\nThe key feature of reflective oracles is that they avoid diagonalization and paradoxes by randomizing in the relevant cases.2 This allows agents with access to a reflective oracle to consistently reason about the behavior of arbitrary agents that also have access to a reflective oracle, which in turn makes it possible to model agents that converge to Nash equilibria by their own faculties (rather than by fiat or assumption).\nThis framework can be used to, e.g., define games between multiple copies of AIXI. As originally formulated, AIXI cannot entertain hypotheses about its own existence, or about the existence of similarly powerful agents; classical Bayes-optimal agents must be larger and more intelligent than their environments. With access to a reflective oracle, however, Fallenstein, Soares, and Taylor have shown that AIXI can meaningfully entertain hypotheses about itself and copies of itself while avoiding diagonalization.\nThe other main novelty of this paper is that reflective oracles turn out to be limit-computable, and so allow for approximation by anytime algorithms. 
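The diagonalization obstacle described above is easy to see in a toy setting. The sketch below is illustrative only (it is not the paper's formalism): an agent defined to do the opposite of whatever is predicted of it defeats every deterministic predictor, which is roughly why reflective oracles are allowed to randomize on exactly these self-referential questions.

```python
# Toy diagonalization sketch (illustrative; not the reflective-oracle construction).
# An agent that always does the opposite of what is predicted of it
# defeats every deterministic predictor of itself.

def contrarian_agent(predictor):
    """Consult a prediction of this very agent, then do the opposite."""
    predicted_action = predictor(contrarian_agent)
    return 1 - predicted_action  # the two possible actions are 0 and 1

def make_constant_predictor(guess):
    return lambda agent: guess

for guess in (0, 1):
    predictor = make_constant_predictor(guess)
    print("predicted:", guess, "actual:", contrarian_agent(predictor))
# The prediction is wrong in both cases. Allowing the oracle to randomize its
# answer in such cases is what lets reflective oracles avoid this trap.
```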
The reflective oracles paradigm is therefore likely to be quite valuable for investigating game-theoretic questions involving generally intelligent agents that can understand and model each other.3\n\n \n\n\n\nSign up to get updates on new MIRI technical results\nGet notified every time a new technical paper is published.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n×\n\n\n \nIt isn't hard to solve the grain of truth problem for policy classes that are very small. Consider a prisoner's dilemma where the only strategies the other player can select take the form \"cooperate until the opponent defects, then defect forever\" or \"cooperate n times in a row (or until the opponent defects, whichever happens first) and then defect forever.\" Leike, Taylor, and Fallenstein note:\nThe Bayes-optimal behavior is to cooperate until the posterior belief that the other agent defects in the time step after the next is greater than some constant (depending on the discount function) and then defect afterwards.\nBut this is itself a strategy in the class under consideration. If both players are Bayes-optimal, then both will have a grain of truth (i.e., their actual strategy is assigned nonzero probability by the other player) \"and therefore they converge to a Nash equilibrium: either they both cooperate forever or after some finite time they both defect forever.\"\nSlightly expanding the list of policies an agent might deploy, however, can make it hard to find a policy class that contains a grain of truth. For example, if \"tit for tat\" is added to the policy class, then, depending on the prior, the grain of truth may be lost. In this case, if the first agent thinks that the second agent is very likely \"always defect\" but maybe \"tit for tat,\" then the best policy might be something like \"defect until they cooperate, then play tit for tat,\" but this policy is not in the policy class. The question resolved by this paper is how to find priors that contain a grain of truth for much richer policy classes.Specifically, reflective oracles output 1 if a specified machine returns 1 with probability greater than a specified probability p, and they output 0 if the probability that the machine outputs 0 is greater than 1-p. When the probability is exactly p, however—or the machine has some probability of not halting, and p hits this probability mass—the oracle can output 0, 1, or randomize between the two. This allows reflective oracles to avoid probabilistic versions of the liar paradox: any attempt to ask the reflective oracle an unanswerable question will yield a meaningless placeholder answer.Thanks to Tsvi Benson-Tilsen, Chana Messinger, Nate Soares, and Jan Leike for helping draft this announcement.The post New paper: \"A formal solution to the grain of truth problem\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New paper: “A formal solution to the grain of truth problem”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=28", "id": "651a7c2c06d92c95db723ebb793f07cc"} {"text": "June 2016 Newsletter\n\n\n\n\n\n\n\n\nResearch updates\n\nNew paper: \"Safely Interruptible Agents.\" The paper will be presented at UAI-16, and is a collaboration between Laurent Orseau of Google DeepMind and Stuart Armstrong of the Future of Humanity Institute (FHI) and MIRI; see FHI's press release. 
The paper has received (often hyperbolic) coverage from a number of press outlets, including Business Insider, Motherboard, Newsweek, Gizmodo, BBC News, eWeek, and Computerworld.\nNew at IAFF: All Mathematicians are Trollable: Divergence of Naturalistic Logical Updates; Two Problems with Causal-Counterfactual Utility Indifference\nNew at AI Impacts: Metasurvey: Predict the Predictors; Error in Armstrong and Sotala 2012\nMarcus Hutter's research group has released a new paper based on results from a MIRIx workshop: \"Self-Modification of Policy and Utility Function in Rational Agents.\" Hutter's team is presenting several other AI alignment papers at AGI-16 next month: \"Death and Suicide in Universal Artificial Intelligence\" and \"Avoiding Wireheading with Value Reinforcement Learning.\"\n\"Asymptotic Logical Uncertainty and The Benford Test\" has been accepted to AGI-16.\n\nGeneral updates\n\nMIRI and FHI's Colloquium Series on Robust and Beneficial AI (talk abstracts and slides now up) has kicked off with opening talks by Stuart Russell, Francesca Rossi, Tom Dietterich, and Alan Fern.\nWe visited FHI to discuss new results in logical uncertainty, our new machine-learning-oriented research program, and a range of other topics.\n\nNews and links\n\nFollowing an increase in US spending on autonomous weapons, The New York Times reports that the Pentagon is turning to Silicon Valley for an edge.\nIARPA director Jason Matheny, a former researcher at FHI, discusses forecasting and risk from emerging technologies (video).\nFHI Research Fellow Owen Cotton-Barratt gives oral evidence to the UK Parliament on the need for robust and transparent AI systems.\nGoogle reveals a hidden reason for AlphaGo's exceptional performance against Lee Se-dol: a new integrated circuit design that can speed up machine learning applications by an order of magnitude.\nElon Musk answers questions about SpaceX, Tesla, OpenAI, and more (video).\nWhy worry about advanced AI? Stuart Russell (in Scientific American), George Dvorsky (in Gizmodo), and SETI director Seth Shostak (in Tech Times) explain.\nOlle Häggeström's new book, Here Be Dragons, serves as an unusually thoughtful and thorough introduction to existential risk and future technological development, including a lucid discussion of artificial superintelligence.\nRobin Hanson examines the implications of widespread whole-brain emulation in his new book, The Age of Em: Work, Love, and Life when Robots Rule the Earth.\nBill Gates highly recommends Nick Bostrom's Superintelligence. The paperback edition is now out, with a newly added afterword.\nFHI Research Associate Paul Christiano has joined OpenAI as an intern. 
Christiano has also written new posts on AI alignment: Efficient and Safely Scalable, Learning with Catastrophes, Red Teams, and The Reward Engineering Problem.\n\n\n\n\n\n\n\n\nThe post June 2016 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "June 2016 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=28", "id": "cbea0a0c3386493fbd78a23e47109e95"} {"text": "New paper: \"Safely interruptible agents\"\n\nGoogle DeepMind Research Scientist Laurent Orseau and MIRI Research Associate Stuart Armstrong have written a new paper on error-tolerant agent designs, \"Safely interruptible agents.\" The paper is forthcoming at the 32nd Conference on Uncertainty in Artificial Intelligence.\nAbstract:\nReinforcement learning agents interacting with a complex environment like the real world are unlikely to behave optimally all the time. If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions—harmful either for the agent or for the environment—and lead the agent into a safer situation. However, if the learning agent expects to receive rewards from this sequence, it may learn in the long run to avoid such interruptions, for example by disabling the red button — which is an undesirable outcome.\nThis paper explores a way to make sure a learning agent will not learn to prevent (or seek!) being interrupted by the environment or a human operator. We provide a formal definition of safe interruptibility and exploit the off-policy learning property to prove that either some agents are already safely interruptible, like Q-learning, or can easily be made so, like Sarsa. We show that even ideal, uncomputable reinforcement learning agents for (deterministic) general computable environments can be made safely interruptible.\nOrseau and Armstrong's paper constitutes a new angle of attack on the problem of corrigibility. A corrigible agent is one that recognizes it is flawed or under development and assists its operators in maintaining, improving, or replacing itself, rather than resisting such attempts.\n\nIn the case of superintelligent AI systems, corrigibility is primarily aimed at averting unsafe convergent instrumental policies (e.g., the policy of defending its current goal system from future modifications) when such systems have incorrect terminal goals. This leaves us more room for approximate, trial-and-error, and learning-based solutions to AI value specification.\nInterruptibility is an attempt to formalize one piece of the intuitive idea of corrigibility. Utility indifference (in Soares, Fallenstein, Yudkowsky, and Armstrong's \"Corrigibility\") is an example of a past attempt to define a different piece of corrigibility: systems that are indifferent to programmers' interventions to modify their terminal goals, and will therefore avoid trying to force their programmers either to make such modifications or to avoid such modifications. 
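To give a rough sense of the off-policy property the abstract leans on, here is a simplified sketch; it is not the paper's formal definition of safe interruptibility, and the states, rewards, and constants are arbitrary stand-ins. Q-learning's update bootstraps from the best available next action, whereas Sarsa's bootstraps from the action actually taken, so an operator-forced interruption feeds directly into what Sarsa learns but not into what Q-learning learns.

```python
from collections import defaultdict

# Simplified sketch of the off-policy vs. on-policy distinction (illustrative
# only; not the paper's construction). ALPHA and GAMMA are arbitrary constants.
ALPHA, GAMMA, ACTIONS = 0.1, 0.9, (0, 1)

def q_learning_update(Q, s, a, r, s_next):
    # Off-policy target: the best next action, regardless of what was done next.
    best_next = max(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def sarsa_update(Q, s, a, r, s_next, a_next):
    # On-policy target: the next action actually taken, including one
    # forced by an operator's interruption.
    Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s_next, a_next)] - Q[(s, a)])

Q1, Q2 = defaultdict(float), defaultdict(float)
for Q in (Q1, Q2):
    Q[("s1", 0)], Q[("s1", 1)] = 0.5, -1.0  # pretend these values were learned earlier

# Suppose the operator interrupts and forces action 1 in state "s1":
q_learning_update(Q1, "s0", 0, 1.0, "s1")     # target ignores the forced action
sarsa_update(Q2, "s0", 0, 1.0, "s1", 1)       # target is pulled toward it
print(Q1[("s0", 0)], Q2[("s0", 0)])           # 0.145 vs. 0.01
```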
\"Safely interruptible agents\" instead attempts to define systems that are indifferent to programmers' interventions to modify their policies, and will not try to stop programmers from intervening on their everyday activities (nor try to force them to intervene).\nHere the goal is to make the agent's policy converge to whichever policy is optimal if the agent believed there would be no future interruptions. Even if the agent has experienced interruptions in the past, it should act just as though it will never experience any further interruptions. Orseau and Armstrong show that several classes of agent are safely interruptible, or can be easily made safely interruptible.\nFurther reading:\n\nStuart Armstrong's Smarter Than Us, an informal introduction to the AI alignment problem.\nLaurent Orseau on Artificial General Intelligence.\nOrseau and Ring's \"Space-time embedded intelligence.\"\nSoares and Fallenstein's \"Problems of self-reference in self-improving space-time embedded intelligence.\"\n\n \n\n \n\n\n\nSign up to get updates on new MIRI technical results\nGet notified every time a new technical paper is published.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n×\n\n\nThe post New paper: \"Safely interruptible agents\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New paper: “Safely interruptible agents”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=28", "id": "8415bd955800003e839ee0f4e7b5bfbd"} {"text": "May 2016 Newsletter\n\n\n\n\n\n\n\n\nResearch updates\n\nTwo new papers split logical uncertainty into two distinct subproblems: \"Uniform Coherence\" and \"Asymptotic Convergence in Online Learning with Unbounded Delays.\"\nNew at IAFF: An Approach to the Agent Simulates Predictor Problem; Games for Factoring Out Variables; Time Hierarchy Theorems for Distributional Estimation Problems\nWe will be presenting \"The Value Learning Problem\" at the IJCAI-16 Ethics for Artificial Intelligence workshop instead of the AAAI Spring Symposium where it was previously accepted.\n\nGeneral updates\n\nWe're launching a new research program with a machine learning focus. Half of MIRI's team will be investigating potential ways to specify goals and guard against errors in advanced neural-network-inspired systems.\nWe ran a type theory and formal verification workshop this past month.\n\nNews and links\n\nThe Open Philanthropy Project explains its strategy of high-risk, high-reward hits-based giving and its decision to make AI risk its top focus area this year.\nAlso from OpenPhil: Is it true that past researchers over-hyped AI? 
Is there a realistic chance of AI fundamentally changing civilization in the next 20 years?\nFrom Wired: Inside OpenAI, and Facebook is Building AI That Builds AI.\nThe White House announces a public workshop series on the future of AI.\nThe Wilberforce Society suggests policies for narrow and general AI development.\nTwo new AI safety papers: \"A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis\" and \"The AGI Containment Problem.\"\nPeter Singer weighs in on catastrophic AI risk.\nDigital Genies: Stuart Russell discusses the problems of value learning and corrigibility in AI.\nNick Bostrom is interviewed at CeBIT (video) and also gives a presentation on intelligence amplification and the status quo bias (video).\nJeff MacMahan critiques philosophical critiques of effective altruism.\nYale political scientist Allan Dafoe is seeking research assistants for a project on political and strategic concerns related to existential AI risk.\nThe Center for Applied Rationality is accepting applicants to a free workshop for machine learning researchers and students.\n\n\n\n\n\n\n\n\nThe post May 2016 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "May 2016 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=28", "id": "8d8d6eb457304e6434b4ded46b263533"} {"text": "A new MIRI research program with a machine learning focus\n\nI'm happy to announce that MIRI is beginning work on a new research agenda, \"value alignment for advanced machine learning systems.\" Half of MIRI's team — Patrick LaVictoire, Andrew Critch, and I — will be spending the bulk of our time on this project over at least the next year. The rest of our time will be spent on our pre-existing research agenda.\nMIRI's research in general can be viewed as a response to Stuart Russell's question for artificial intelligence researchers: \"What if we succeed?\" There appear to be a number of theoretical prerequisites for designing advanced AI systems that are robust and reliable, and our research aims to develop them early.\nOur general research agenda is agnostic about when AI systems are likely to match and exceed humans in general reasoning ability, and about whether or not such systems will resemble present-day machine learning (ML) systems. Recent years' impressive progress in deep learning suggests that relatively simple neural-network-inspired approaches can be very powerful and general. For that reason, we are making an initial inquiry into a more specific subquestion: \"What if techniques similar in character to present-day work in ML succeed in creating AGI?\".\nMuch of this work will be aimed at improving our high-level theoretical understanding of task-directed AI. Unlike what Nick Bostrom calls \"sovereign AI,\" which attempts to optimize the world in long-term and large-scale ways, task AI is limited to performing instructed tasks of limited scope, satisficing but not maximizing. Our hope is that investigating task AI from an ML perspective will help give information about both the feasibility of task AI and the tractability of early safety work on advanced supervised, unsupervised, and reinforcement learning systems.\nTo this end, we will begin by investigating eight relevant technical problems:\n\n\n1. 
Inductive ambiguity detection.\nHow can we design a general methodology for ML systems (such as classifiers) to identify when the classification of a test instance is underdetermined by training data?\nFor example: If an ambiguity-detecting classifier is designed to distinguish images of tanks from images of non-tanks, and the training set only contains images of tanks on cloudy days and non-tanks on sunny days, this classifier ought to detect that the classification of an image of a tank on a sunny day is ambiguous, and pose some query for its operators to disambiguate it and avoid errors.\nWhile past and current work in active learning and statistical learning theory more broadly has made progress towards this goal, more work is necessary to establish realistic statistical bounds on the error rates and query rates of real-world systems in advance of their deployment in complex environments.\n2. Informed oversight.\nHow might we train a reinforcement learner to output both an action and a \"report\" comprising information to help a human evaluate its action?\nFor example: If a human is attempting to train a reinforcement learner to output original stories, then in evaluating the story, the human will want to know some information about the story (such as whether it has been plagiarized from another story) that may be difficult to determine by looking at the story itself.\n3. Safe training procedures for human-imitators.\nHow might we design an ML system that imitates humans performing some task that involves rich outputs (such as answering questions in natural language), to the best of the ML system's abilities?\nWhile there are existing approaches to imitation learning and generative models, these have some theoretical shortcomings that prevent them from fully solving the general problem. In particular, a generative adversarial model trained on human actions only has an incentive to imitate aspects of the human that the adversary can detect; thus, issues similar to the plagiarism problem from (2) can arise.\n4. Conservative concepts.\nHow might we design a system that, given some positive examples of a concept, can synthesize new instances of the concept without synthesizing edge cases of it?\nFor example: If we gave the system detailed information about 100 human-created burritos as training data, it should manufacture additional burritos while avoiding edge cases such as extremely small burritos (even though these could still be considered burritos).\nBy default, most objective functions will lead to such edge cases (say, because small burritos are cheaper to manufacture). Can we develop a general technique for avoiding this problem?\n5. Specifying environmental goals using sensory data.\nThe goals of most ML systems are defined in terms of sensory data, such as discounted reward over time. While these sensory goals can be useful proxies for environmental goals, they are not identical to them: tricking your own sensors into perceiving a strawberry in the room is not the same as actually having one there. How might we create systems that directly pursue useful goals in their environment, such as \"make there be a strawberry in this room\", in a way that is robust to opportunities for the system to interfere with the proxies we used to specify the goal?\n6. 
Low-impact agents.\nWe would prefer a highly intelligent task-oriented system to generally avoid creating large unintended-by-us side effects in pursuit of its goal, and also to notify us of any large impacts that might result from achieving its goal. For example, if we ask it to build a house for a homeless family, it should know implicitly that it should avoid destroying nearby houses for materials — a large side effect — while still having the desirable large follow-on effect of improving the family's socioeconomic situation. For any specific task we can specify ad-hoc cost functions for side effects like nearby house destruction, but since we cannot always anticipate such costs in advance, we want a quantitative understanding of how to generally limit an AI's side effects (without also limiting its ability to have large positive intended impacts).\n7. Mild optimization.\nIn the event that the low impact criteria we employ in (6) are flawed or missing some key considerations, we would like some way to tell our system \"don't try too hard\" or \"don't exercise too much optimization power\", since intuitively, large impacts are more likely to result from large efforts (although the correspondence is imperfect).\nLess capable AI systems tend to be less likely to have large unintended side effects, and we would like to know how to make even highly capable systems safe in a similar way, by having them apply less \"effort\". With a satisfactory encoding of our notion of \"effort\", we could then program a system to impose absolute limits on its total effort, or to use only as much effort as is needed to achieve its objective with reasonably high probability, thus potentially limiting impacts that we did not manage to measure in (6).\n8. Averting instrumental pressures.\nHow might we design a system that is not motivated to pursue certain convergent instrumental goals — such as gaining additional resources — even when pursuing these goals would help it achieve its main objective?\nIn particular, we may wish to build a system that has no incentive to cause or prevent its own shutdown/suspension. This relates to (6) and (7) in that instrumental pressures like \"ensure my continued operation\" can incentivize large impacts/efforts. However, this is a distinct agenda item because it may be possible to completely eliminate certain instrumental incentives in a way that would apply even before solutions to (6) and (7) would take effect.\n\nHaving identified these topics of interest, we expect our work on this agenda to be timely. The idea of \"robust and beneficial\" AI has recently received increased attention as a result of the new wave of breakthroughs in machine learning. 
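Returning briefly to the "mild optimization" item (7) above, here is one possible way to cash out "don't try too hard." This is a sketch in the spirit of satisficing, with an invented action space and utility function; it is not a proposal from the agenda itself. Instead of taking the argmax of its estimated utility, the agent picks randomly among reasonably good options, which caps how extreme its choice can be.

```python
import random

# Toy sketch of "don't exercise too much optimization power" (illustrative
# only; not the agenda's proposed solution). The action space and utility
# function here are stand-ins.

def argmax_agent(actions, utility):
    return max(actions, key=utility)   # always the single most extreme option

def mild_agent(actions, utility, q=0.1, rng=random.Random(0)):
    ranked = sorted(actions, key=utility, reverse=True)
    top_q = ranked[: max(1, int(len(ranked) * q))]
    return rng.choice(top_q)           # a random draw from the top q-fraction

actions = range(100)
utility = lambda a: a                   # pretend higher actions take more "effort"
print(argmax_agent(actions, utility))   # 99, the extreme choice
print(mild_agent(actions, utility))     # a random draw from the ten best actions
```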
The kind of theoretical work in this project has more obvious connections to the leading paradigms in AI and ML than, for example, our recent work in logical uncertainty or in game theory, and therefore lends itself better to collaborations with AI/ML researchers in the near future.\n\n \nThanks to Eliezer Yudkowsky and Paul Christiano for seeding many of the initial ideas for these research directions, to Patrick LaVictoire, Andrew Critch, and other MIRI researchers for helping develop these ideas, and to Chris Olah, Dario Amodei, and Jacob Steinhardt for valuable discussion.\nThe post A new MIRI research program with a machine learning focus appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "A new MIRI research program with a machine learning focus", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=28", "id": "788480d380c0cc2592f0799319601f4f"} {"text": "New papers dividing logical uncertainty into two subproblems\n\nI'm happy to announce two new technical results related to the problem of logical uncertainty, perhaps our most significant results from the past year. In brief, these results split the problem of logical uncertainty into two distinct subproblems, each of which we can now solve in isolation. The remaining problem, in light of these results, is to find a unified set of methods that solve both at once.\nThe solutions for each subproblem are available in two new papers, based on work spearheaded by Scott Garrabrant: \"Inductive coherence\"1 and \"Asymptotic convergence in online learning with unbounded delays.\"2\nTo give some background on the problem: Modern probability theory models reasoners' empirical uncertainty, their uncertainty about the state of a physical environment, e.g., \"What's behind this door?\" However, it can't represent reasoners' logical uncertainty, their uncertainty about statements like \"this Turing machine halts\" or \"the twin prime conjecture has a proof that is less than a gigabyte long.\"3\nRoughly speaking, if you give a classical probability distribution variables for statements that could be deduced in principle, then the axioms of probability theory force you to put probability either 0 or 1 on those statements, because you're not allowed to assign positive probability to contradictions. In other words, modern probability theory assumes that all reasoners know all the consequences of all the things they know, even if deducing those consequences is intractable.\nWe want a generalization of probability theory that allows us to model reasoners that have uncertainty about statements that they have not yet evaluated. Furthermore, we want to understand how to assign \"reasonable\" probabilities to claims that are too expensive to evaluate.\nImagine an agent considering whether to use quicksort or mergesort to sort a particular dataset. They might know that quicksort typically runs faster than mergesort, but that doesn't necessarily apply to the current dataset. They could in principle figure out which one uses fewer resources on this dataset, by running both of them and comparing, but that would defeat the purpose. 
Intuitively, they have a fair bit of knowledge that bears on the claim \"quicksort runs faster than mergesort on this dataset,\" but modern probability theory can't tell us which information they should use and how.4\n\nWhat does it mean for a reasoner to assign \"reasonable probabilities\" to claims that they haven't computed, but could compute in principle? Without probability theory to guide us, we're reduced to using intuition to identify properties that seem desirable, and then investigating which ones are possible. Intuitively, there are at least two properties we would want logically non-omniscient reasoners to exhibit:\n1. They should be able to notice patterns in what is provable about claims, even before they can prove or disprove the claims themselves. For example, consider the claims \"this Turing machine outputs an odd number\" and \"this Turing machine outputs an even number.\" A good reasoner thinking about those claims should eventually recognize that they are mutually exclusive, and assign them probabilities that sum to at most 1, even before they can run the relevant Turing machine.\n2. They should be able to notice patterns in sentence classes that are true with a certain frequency. For example, they should assign roughly 10% probability to \"the 10100th digit of pi is a 7\" in lieu of any information about the digit, after observing (but not proving) that digits of pi tend to be uniformly distributed.\nMIRI's work on logical uncertainty this past year can be very briefly summed up as \"we figured out how to get these two properties individually, but found that it is difficult to get both at once.\"\n\"Inductive coherence,\" which I co-authored with Garrabrant, Benya Fallenstein, and Abram Demski, shows how to get the first property. The abstract reads:\nWhile probability theory is normally applied to external environments, there has been some recent interest in probabilistic modeling of the outputs of computations that are too expensive to run. Since mathematical logic is a powerful tool for reasoning about computer programs, we consider this problem from the perspective of integrating probability and logic.\nRecent work on assigning probabilities to mathematical statements has used the concept of coherent distributions, which satisfy logical constraints such as the probability of a sentence and its negation summing to one. Although there are algorithms which converge to a coherent probability distribution in the limit, this yields only weak guarantees about finite approximations of these distributions. In our setting, this is a significant limitation: Coherent distributions assign probability one to all statements provable in a specific logical theory, such as Peano Arithmetic, which can prove what the output of any terminating computation is; thus, a coherent distribution must assign probability one to the output of any terminating computation.\nTo model uncertainty about computations, we propose to work with approximations to coherent distributions. We introduce inductive coherence, a strengthening of coherence that provides appropriate constraints on finite approximations, and propose an algorithm which satisfies this criterion.\nGiven a series of provably mutually exclusive sentences, or a series of sentences where each provably implies the next, an inductively coherent predictor's probabilities eventually start respecting this pattern. 
This is true even if the predictor hasn't been able to prove that the pattern holds yet; if it would be possible in principle to eventually prove each instance of the pattern, then the inductively coherent predictor will start recognizing it \"before too long,\" in a specific technical sense, even if the proofs themselves are very long.\n\"Asymptotic convergence in online learning with unbounded delays,\" which I co-authored with Garrabrant and Jessica Taylor, describes an algorithm with the second property. The abstract reads:\nWe study the problem of predicting the results of computations that are too expensive to run, via the observation of the results of smaller computations. We model this as an online learning problem with delayed feedback, where the length of the delay is unbounded, which we study mainly in a stochastic setting. We show that in this setting, consistency is not possible in general, and that optimal forecasters might not have average regret going to zero. However, it is still possible to give algorithms that converge asymptotically to Bayes-optimal predictions, by evaluating forecasters on specific sparse independent subsequences of their predictions. We give an algorithm that does this, which converges asymptotically on good behavior, and give very weak bounds on how long it takes to converge. We then relate our results back to the problem of predicting large computations in a deterministic setting.\nThe first property is about recognizing patterns about logical relationships between claims — saying \"claim A implies claim B, so my probability on B must be at least my probability on A.\" By contrast, the second property is about recognizing frequency patterns between similar claims — saying \"I lack the resources to tell whether this claim is true, but 90% of similar claims have been true, so the base rate is 90%\" (where part of the problem is figuring out what counts as a \"similar claim\").\nIn this technical report, we model the latter task as an online learning problem, where a predictor observes the behavior of many small computations and has to predict the behavior of large computations. We give an algorithm that eventually assigns the \"right\" probabilities to every predictable subsequence of observations, in a specific technical sense.\n\nEach paper is interesting in its own right, but for us, the exciting result is that we have teased apart and formalized two separate notions of what counts as \"good reasoning\" under logical uncertainty, both of which are compelling.\nFurthermore, our approaches to formalizing these two notions are very different. \"Inductive coherence\" frames the problem in the traditional \"unify logic with probability\" setting, whereas \"Asymptotic convergence in online learning with unbounded delays\" fits more naturally into the online machine learning framework. The methods we found for solving the first problem don't appear to help with the second problem, and vice versa. In fact, the two isolated solutions appear quite difficult to reconcile. The problem that these two papers leave open is: Can we get one algorithm that satisfies both properties at once?\n \n\n \n\n\n\nSign up to get updates on new MIRI technical results\nGet notified every time a new technical paper is published.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n×\n\n\n \n \nThis work was originally titled \"Uniform coherence\". 
This post has been updated to reflect the new terminology.Garrabrant's IAFF forum posts provide a record of how these results were originally developed, as a response to Ray Solomonoff's theory of algorithmic probability. Concrete Failure of the Solomonoff Approach and The Entangled Benford Test lay groundwork for the \"Asymptotic convergence…\" problem, a limited early version of which was featured in the \"Asymptotic logical uncertainty and the Benford test\" report. Inductive coherence is defined in Uniform Coherence 2, and an example of an inductively coherent predictor is identified in The Modified Demski Prior is Uniformly Coherent.This type of uncertainty is called \"logical uncertainty\" mainly for historical reasons. I think of it like this: We care about agents' ability to reason about software systems, e.g., \"this program will halt.\" Those claims can be expressed in sentences of logic. The question \"what probability does the agent assign to this machine halting?\" then becomes \"what probability does this agent assign to this particular logical sentence?\" The truth of these statements could be determined in principle, but the agent may not have the resources to compute the answers in practice.For more background on logical uncertainty, see Gaifman's \"Concerning measures in first-order calculi,\" Garber's \"Old evidence and logical omniscience in Bayesian confirmation theory,\" Hutter, Lloyd, Ng, and Uther's \"Probabilities on sentences in an expressive logic,\" and Aaronson's \"Why philosophers should care about computational complexity.\"The post New papers dividing logical uncertainty into two subproblems appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New papers dividing logical uncertainty into two subproblems", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=29", "id": "a0578d6e1110d8d9ff98cab39a83e744"} {"text": "April 2016 Newsletter\n\n\n\n\n\n\n\n\nResearch updates\n\nA new paper: \"Parametric Bounded Löb's Theorem and Robust Cooperation of Bounded Agents\"\nNew at IAFF: What Does it Mean for Correct Operation to Rely on Transfer Learning?; Virtual Models of Virtual AIs in Virtual Worlds\n\nGeneral updates\n\nWe're currently accepting applicants to two programs we're running in June: our 2016 Summer Fellows program (details), and a new Colloquium Series on Robust and Beneficial AI (details).\nMIRI has a new second-in-command: Malo Bourgon.\nWe're hiring! Apply here for our new research position in type theory.\nAI Impacts is asking for examples of concrete tasks AI systems can't yet achieve. You can also submit these tasks to Phil Tetlock, who is making the same request for Good Judgment Open.\nMIRI senior researcher Eliezer Yudkowsky discusses his core AI concerns with Bryan Caplan. (See Caplan's response and Yudkowsky's follow-up.)\nYudkowsky surveys lessons from game-playing AI.\n\nNews and links\n\nGoogle DeepMind's AlphaGo software defeats leading Go player Lee See-dol 4-1. GoGameGuru provides excellent commentary on each game (1, 2, 3, 4, 5). 
Lee's home country of South Korea responds with an AI funding push.\nIn other Google news: The New York Times reports on an AI platform war; Alphabet's head of moonshots rejects AI risk concerns; and Alphabet jettisons its main robotics division.\nThe UK Parliament is launching an inquiry into \"social, legal, and ethical issues\" raised by AI, and invites written submissions of relevant evidence and arguments.\nThe White House's Council of Economic Advisers predicts the widespread automation of low-paying jobs. Related: How Machines Destroy (And Create!) Jobs.\nCGP Grey, who discussed automation in Humans Need Not Apply (video), has a thoughtful conversation about Nick Bostrom's Superintelligence (audio).\nAmitai and Oren Etzioni call for the development of guardian AI, \"second-order AI software that will police AI.\"\nIn a new paper, Bostrom weighs the pros and cons of openness in AI.\nBostrom argues for scalable AI control methods at RSA Conference (video).\nThe Open Philanthropy Project, a collaboration between GiveWell and Good Ventures, awards $100,000 to the Future of Life Institute.\nThe Center for Applied Rationality is seeking participants for two free programs: a Workshop on AI Safety Strategy and EuroSPARC, a math summer camp.\n\n\n\n\n\n\n\n\nThe post April 2016 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "April 2016 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=29", "id": "946b302a8c3c316d1991cb237b32263d"} {"text": "New paper on bounded Löb and robust cooperation of bounded agents\n\nMIRI Research Fellow Andrew Critch has written a new paper on cooperation between software agents in the Prisoner's Dilemma, available on arXiv: \"Parametric bounded Löb's theorem and robust cooperation of bounded agents.\" The abstract reads:\nLöb's theorem and Gödel's theorem make predictions about the behavior of systems capable of self-reference with unbounded computational resources with which to write and evaluate proofs. However, in the real world, systems capable of self-reference will have limited memory and processing speed, so in this paper we introduce an effective version of Löb's theorem which is applicable given such bounded resources. These results have powerful implications for the game theory of bounded agents who are able to write proofs about themselves and one another, including the capacity to out-perform classical Nash equilibria and correlated equilibria, attaining mutually cooperative program equilibrium in the Prisoner's Dilemma. Previous cooperative program equilibria studied by Tennenholtz and Fortnow have depended on tests for program equality, a fragile condition, whereas \"Löbian\" cooperation is much more robust and agnostic of the opponent's implementation.\nTennenholtz (2004) showed that cooperative equilibria exist in the Prisoner's Dilemma between agents with transparent source code. This suggested that a number of results in classical game theory, where it is a commonplace that mutual defection is rational, might fail to generalize to settings where agents have strong guarantees about each other's conditional behavior.\nTennenholtz's version of program equilibrium, however, only established that rational cooperation was possible between agents with identical source code. 
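The fragility of equality-based cooperation is easy to see in a toy form. The sketch below is illustrative only (it is not the construction from Tennenholtz's or Critch's papers): an agent that cooperates exactly when its opponent's source text matches its own defects against any opponent that differs even trivially, whereas Löbian cooperation conditions on what the opponent provably does, which is what makes it robust to implementation details.

```python
# Toy sketch of equality-based program equilibrium (illustrative only; not
# the construction in the papers discussed here). An agent that cooperates
# iff its opponent's source text exactly matches its own is fragile: any
# behaviorally irrelevant difference breaks cooperation.

MY_SOURCE = "return 'C' if opponent_source == MY_SOURCE else 'D'"

def clique_bot(opponent_source):
    return "C" if opponent_source == MY_SOURCE else "D"

print(clique_bot(MY_SOURCE))                           # "C": exact textual match
print(clique_bot(MY_SOURCE + "  # harmless comment"))  # "D": same behavior, no match
```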
Patrick LaVictoire and other researchers at MIRI supplied the additional result that more robust cooperation was possible between non-computable agents, and that it is possible to efficiently determine the outcomes of such games. However, some readers objected to the infinitary nature of the methods (for example, the use of halting oracles) and worried that not all of the results would carry over to finite computations.\nCritch's report demonstrates that robust cooperative equilibria exist for bounded agents. In the process, Critch proves a new generalization of Löb's theorem, and therefore of Gödel's second incompleteness theorem. This parametric version of Löb's theorem holds for proofs that can be written out in n or fewer characters, where the parameter n can be set to any number. For more background on the result's significance, see LaVictoire's \"Introduction to Löb's theorem in MIRI research.\"\nThe new Löb result shows that bounded agents face obstacles to self-referential reasoning similar to those faced by unbounded agents, and can also reap some of the same benefits. Importantly, this lemma will likely allow us to discuss many other self-referential phenomena going forward using finitary examples rather than infinite ones.\n \n\n \n\n\n\nSign up to get updates on new MIRI technical results\nGet notified every time a new technical paper is published.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n×\n\n\nThe post New paper on bounded Löb and robust cooperation of bounded agents appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New paper on bounded Löb and robust cooperation of bounded agents", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=29", "id": "c4c4027c8602228f4176135694b8af8a"} {"text": "MIRI has a new COO: Malo Bourgon\n\nI'm happy to announce that Malo Bourgon, formerly a program management analyst at MIRI, has taken on a new leadership role as our chief operating officer.\nAs MIRI's second-in-command, Malo will be taking over a lot of the hands-on work of coordinating our day-to-day activities: supervising our ops team, planning events, managing our finances, and overseeing internal systems. He'll also be assisting me in organizational strategy and outreach work.\nPrior to joining MIRI, Malo studied electrical, software, and systems engineering at the University of Guelph in Ontario. His professional interests included climate change mitigation, and during his master's, he worked on a project to reduce waste through online detection of inefficient electric motors. Malo started working for us shortly after completing his master's in early 2012, which makes him MIRI's longest-standing team member next to Eliezer Yudkowsky.\n\nUntil now, I've generally thought of Malo as our secret weapon — a smart, practical efficiency savant. While Luke Muehlhauser (our previous executive director) provided the vision and planning that transformed us into a mature research organization, Malo was largely responsible for the implementation. Behind the scenes, nearly every system or piece of software MIRI uses has been put together by Malo, or in a joint effort by Malo and Alex Vermeer — a close friend of Malo's from the University of Guelph who now works as a MIRI program management analyst. 
Malo's past achievements at MIRI include:\n\ncoordinating MIRI's first research workshops and establishing our current recruitment pipeline.\nestablishing MIRI's immigration workflow, allowing us to hire Benya Fallenstein, Katja Grace, and a number of other overseas researchers and administrators.\nrunning MIRI's (presently inactive) volunteer program.\nstandardizing MIRI's document production workflow.\ndeveloping and leading the execution of our 2014 SV Gives strategy. This resulted in MIRI receiving $171,575 in donations and prizes (at least $61,330 of which came from outside our usual donor pool) over a 24-hour span.\ndesigning MIRI's fundraising infrastructure, including our live graphs.\n\nMore recently, Malo has begun representing MIRI in meetings with philanthropic organizations, government agencies, and for-profit AI groups.\nMalo has been an invaluable asset to MIRI, and I'm thrilled to have him take on more responsibilities here. As one positive consequence, this will free up more of my time to work on strategy, recruiting, fundraising, and research.\nIn other news, MIRI's head of communications, Rob Bensinger, has been promoted to the role of research communications manager. He continues to be the best person to contact at MIRI if you have general questions about our work and mission.\nLastly, Katja Grace, the primary contributor to the AI Impacts project, has been promoted to our list of research staff. (Katja is not part of our core research team, and works on questions related to AI strategy and forecasting rather than on our technical research agenda.)\nMy thanks and heartfelt congratulations to Malo, Rob, and Katja for all the work they've done, and all they continue to do.\nThe post MIRI has a new COO: Malo Bourgon appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "MIRI has a new COO: Malo Bourgon", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=29", "id": "a1b6af5c78ee4f51597c9733c78198d9"} {"text": "Announcing a new colloquium series and fellows program\n\nThe Machine Intelligence Research Institute is accepting applicants to two summer programs: a three-week AI robustness and reliability colloquium series (co-run with the Oxford Future of Humanity Institute), and a two-week fellows program focused on helping new researchers contribute to MIRI's technical agenda (co-run with the Center for Applied Rationality).\n\nThe Colloquium Series on Robust and Beneficial AI (CSRBAI), running from May 27 to June 17, is a new gathering of top researchers in academia and industry to tackle the kinds of technical questions featured in the Future of Life Institute's long-term AI research priorities report and project grants, including transparency, error-tolerance, and preference specification in software systems.\nThe goal of the event is to spark new conversations and collaborations between safety-conscious AI scientists with a variety of backgrounds and research interests. 
Attendees will be invited to give and attend talks at MIRI's Berkeley, California offices during Wednesday/Thursday/Friday colloquia, to participate in hands-on Saturday/Sunday workshops, and to drop by for open discussion days:\n \n\n \nScheduled speakers include Stuart Russell (May 27), UC Berkeley Professor of Computer Science and co-author of Artificial Intelligence: A Modern Approach, Tom Dietterich (May 27), AAAI President and OSU Director of Intelligent Systems, and Bart Selman (June 3), Cornell Professor of Computer Science.\nApply here to attend any portion of the event, as well as to propose a talk or discussion topic:\n \nApplication Form\n \n\nThe 2016 MIRI Summer Fellows program, running from June 19 to July 3, doubles as a workshop for developing new problem-solving skills and mathematical intuitions, and a crash course on MIRI's active research projects.\nThis is a smaller and more focused version of the Summer Fellows program we ran last year, which resulted in multiple new hires for us. As such, the program also functions as a high-intensity research retreat where MIRI staff and potential collaborators can get to know each other and work together on important open problems in AI. Apply here to attend the program:\n \nApplication Form\n \n\nBoth programs are free of charge, including free room and board for all MIRI Summer Fellows program participants, free lunches and dinners for CSRBAI participants, and additional partial accommodations and travel assistance for select attendees. For additional information, see the CSRBAI event page and the MIRI Summer Fellows event page.\nThe post Announcing a new colloquium series and fellows program appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Announcing a new colloquium series and fellows program", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=29", "id": "5b47c319e0020dee2577dfef7542455b"} {"text": "Seeking Research Fellows in Type Theory and Machine Self-Reference\n\nThe Machine Intelligence Research Institute (MIRI) is accepting applications for a full-time research fellow to develop theorem provers with self-referential capabilities, beginning by implementing a strongly typed language within that very language. The goal of this research project will be to help us understand autonomous systems that can prove theorems about systems with similar deductive capabilities. Applicants should have experience programming in functional programming languages, with a preference for languages with dependent types, such as Agda, Coq, or Lean.\nMIRI is a mathematics and computer science research institute specializing in long-term AI safety and robustness work. Our offices are in Berkeley, California, near the UC Berkeley campus.\n\nType Theory in Type Theory\nOur goal with this project is to build tools for better modeling reflective reasoning in software systems, as with our project modeling the HOL4 proof assistant within itself. There are Gödelian reasons to think that self-referential reasoning is not possible in full generality. However, many real-world tasks that cannot be solved in full generality admit of effective mostly-general or heuristic approaches. Humans, for example, certainly succeed in trusting their own reasoning in many contexts.\nThere are a number of tools missing in modern-day theorem provers that would be helpful for studying self-referential reasoning. 
First among these is theorem provers that can construct proofs about software systems that make use of a very similar theorem prover. To build these tools in a strongly typed programming language, we need to start by writing programs and proofs that can make reference to the type of programs and proofs in the same language.\nType theory in type theory has recently received a fair amount of attention. James Chapman's work is pushing in a similar direction to what we want, as is Matt Brown and Jens Palsberg's, but these projects don't yet give us the tools we need. (F-omega is too weak a logic for our purposes, and methods like Chapman's don't get us self-representations.)\nThis is intended to be an independent research project, though some collaborations with other researchers may occur. Our expectation is that this will be a multi-year project, but it is difficult to predict exactly how difficult this task is in advance. It may be easier than it looks, or substantially more difficult.\nDepending on how the project goes, researchers interested in continuing to work with us after this project's completion may be able to collaborate on other parts of our research agenda or propose their own additions to our program.\nWorking at MIRI\nWe try to make working at MIRI a great experience. Here's how we operate:\n\nModern Work Spaces. Many of us have adjustable standing desks with large external monitors. We consider workspace ergonomics important, and try to rig up work stations to be as comfortable as possible. Free snacks, drinks, and meals are also provided at our office.\nFlexible Hours. This is a salaried position. We don't have strict office hours, and we don't limit employees' vacation days. Our goal is to make quick progress on our research agenda, and we would prefer that researchers take a day off than that they extend tasks to fill an extra day.\nLiving in the Bay Area. MIRI's office is located in downtown Berkeley, California. From our office, you're a 30-second walk to the BART (Bay Area Rapid Transit), which can get you around the Bay Area; a 3-minute walk to UC Berkeley campus; and a 30-minute BART ride to downtown San Francisco.\nTravel Assistance. Visa assistance is available if needed. If you are moving to the Bay Area, we'll cover up to $3,500 in moving expenses. We also provide a public transit pass with a large monthly allowance.\n\nThe salary for this position is negotiable, and comes with top-notch health and dental benefits.\nAbout MIRI\nMIRI is a Berkeley-based research nonprofit studying foundational questions in artificial intelligence. Our goal is to ensure that high-quality decision-making systems have a positive global impact in coming decades. 
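To make the project described above a bit more concrete: the phrase "writing programs and proofs that can make reference to the type of programs and proofs in the same language" is, at its smallest scale, the familiar trick of deeply embedding a typed object language in a typed host language. The toy sketch below is my own illustration in Haskell, not MIRI's code, and is far weaker than the dependently typed setting the posting asks for; it only shows the flavor: object-level terms carry their types, and an `eval` function reflects them back into host-level values.

```haskell
{-# LANGUAGE GADTs #-}

-- Terms of a tiny object language, indexed by the host-level type of
-- the value they denote. The host type checker rules out ill-typed
-- object-level terms by construction.
data Tm a where
  Lit  :: Int -> Tm Int
  Add  :: Tm Int -> Tm Int -> Tm Int
  Pair :: Tm a -> Tm b -> Tm (a, b)
  Fst  :: Tm (a, b) -> Tm a

-- "Reflect" an object-level term back into a host-level value.
-- GADT pattern refinement guarantees this total function cannot go wrong.
eval :: Tm a -> a
eval (Lit n)    = n
eval (Add x y)  = eval x + eval y
eval (Pair x y) = (eval x, eval y)
eval (Fst p)    = fst (eval p)

main :: IO ()
main = print (eval (Fst (Pair (Add (Lit 2) (Lit 3)) (Lit 7))))  -- prints 5
```

The research project described in the posting aims much higher than this: representing not just a small expression language but proofs and the prover itself, which is why a logic as weak as F-omega won't do and why genuine self-representations, a language embedded within that very language, are the sticking point.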
MIRI Research Advisor and AI pioneer Stuart Russell outlines several causes for concern about high-capability AI software:\n\n\nThe utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.\nAny sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.\n\nA system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.\n[…]\nPeople who assign P(A&B) > P(A) for some A&B are said to exhibit the \"conjunction fallacy\" — for example, in 1982, experts at the International Congress on Forecasting assigned higher probability to \"A Russian invasion of Poland, and a complete breakdown of diplomatic relations with the Soviet Union\" than a separate group did for \"A complete breakdown of diplomatic relations with the Soviet Union\". Similarly, another group assigned higher probability to \"An earthquake in California causing a flood that causes over a thousand deaths\" than another group assigned to \"A flood causing over a thousand deaths somewhere in North America.\" Even though adding on additional details necessarily makes a story less probable, it can make the story sound more plausible. I see understanding this as a kind of Pons Asinorum of serious futurism — the distinction between carefully weighing each and every independent proposition you add to your burden, asking if you can support that detail independently of all the rest, versus making up a wonderful vivid story.\nI mention this as context for my reply, which is, \"Why the heck are you tacking on the 'cyborg' detail to that? I don't want to be a cyborg.\" You've got to be careful with tacking on extra details to things.\n\nJohn: Do you have a shot at immortality?\n\nEliezer: What, literal immortality? Literal immortality seems hard. Living significantly longer than a few trillion years requires us to be wrong about the expected fate of the expanding universe. Living longer than, say, a googolplex years, requires us to be wrong about the basic character of physical law, not just the details.\nEven if some of the wilder speculations are true and it's possible for our universe to spawn baby universes, that doesn't get us literal immortality. To live significantly past a googolplex years without repeating yourself, you need computing structures containing more than a googol elements, and those won't fit inside a single Hubble volume.\nAnd a googolplex is hardly infinity. To paraphrase Martin Gardner, Graham's Number is still relatively small because most finite numbers are very much larger. Look up the fast-growing hierarchy if you really want to have your mind blown; well, eternity is longer than that. Only weird and frankly terrifying anthropic theories would let you live long enough to gaze, perhaps knowingly and perhaps not, upon the halting of the longest-running halting Turing machine with 100 states.\nBut I'm not sure that living to look upon the 100th Busy Beaver Number feels to me like it matters very much on a deep emotional level. I have some imaginative sympathy with myself a subjective century from now. That me will be in a position to sympathize with their future self a subjective century later. And maybe somewhere down the line is someone who faces the prospect of their future self not existing at all, and they might be very sad about that; but I'm not sure I can imagine who that person will be. 
\"I want to live one more day. Tomorrow I'll still want to live one more day. Therefore I want to live forever, proof by induction on the positive integers.\" Even my desire for merely physical-universe-containable longevity is an abstract want by induction; it's not that I can actually imagine myself a trillion years later.\n\nJohn: I've described the Singularity as an \"escapist, pseudoscientific\" fantasy that distracts us from climate change, war, inequality and other serious problems. Why am I wrong?\n\nEliezer: Because you're trying to forecast empirical facts by psychoanalyzing people. This never works.\nSuppose we get to the point where there's an AI smart enough to do the same kind of work that humans do in making the AI smarter; it can tweak itself, it can do computer science, it can invent new algorithms. It can self-improve. What happens after that — does it become even smarter, see even more improvements, and rapidly gain capability up to some very high limit? Or does nothing much exciting happen?\nIt could be that, (A), self-improvements of size δ tend to make the AI sufficiently smarter that it can go back and find new potential self-improvements of size k ⋅ δ and that k is greater than one, and this continues for a sufficiently extended regime that there's a rapid cascade of self-improvements leading up to superintelligence; what I. J. Good called the intelligence explosion. Or it could be that, (B), k is less than one or that all regimes like this are small and don't lead up to superintelligence, or that superintelligence is impossible, and you get a fizzle instead of an explosion. Which is true, A or B? If you actually built an AI at some particular level of intelligence and it actually tried to do that, something would actually happen out there in the empirical real world, and that event would be determined by background facts about the landscape of algorithms and attainable improvements.\nYou can't get solid information about that event by psychoanalyzing people. It's exactly the sort of thing that Bayes's Theorem tells us is the equivalent of trying to run a car without fuel. Some people will be escapist regardless of the true values on the hidden variables of computer science, so observing some people being escapist isn't strong evidence, even if it might make you feel like you want to disaffiliate with a belief or something.\nThere is a misapprehension, I think, of the nature of rationality, which is to think that it's rational to believe \"there are no closet goblins\" because belief in closet goblins is foolish, immature, outdated, the sort of thing that stupid people believe. The true principle is that you go in your closet and look. So that in possible universes where there are closet goblins, you end up believing in closet goblins, and in universes with no closet goblins, you end up disbelieving in closet goblins.\nIt's difficult but not impossible to try to sneak peeks through the crack of the closet door, to ask the question, \"What would look different in the universe now if you couldn't get sustained returns on cognitive investment later, such that an AI trying to improve itself would fizzle? What other facts should we observe in a universe like that?\"\nSo you have people who say, for example, that we'll only be able to improve AI up to the human level because we're human ourselves, and then we won't be able to push an AI past that. 
I think that if this is how the universe looks in general, then we should also observe, e.g., diminishing returns on investment in hardware and software for computer chess past the human level, which we did not in fact observe. Also, natural selection shouldn't have been able to construct humans, and Einstein's mother must have been one heck of a physicist, et cetera.\nYou have people who say, for example, that it should require more and more tweaking to get smarter algorithms and that human intelligence is around the limit. But this doesn't square up with the anthropological record of human intelligence; we can know that there were not diminishing returns to brain tweaks and mutations producing improved cognitive power. We know this because population genetics says that mutations with very low statistical returns will not evolve to fixation at all.\nAnd hominids definitely didn't need exponentially vaster brains than chimpanzees. And John von Neumann didn't have a head exponentially vaster than the head of an average human.\nAnd on a sheerly pragmatic level, human axons transmit information at around a millionth of the speed of light; even when it comes to heat dissipation, each synaptic operation in the brain consumes around a million times the minimum heat dissipation for an irreversible binary operation at 300 Kelvin; and so on. Why think the brain's software is closer to optimal than the hardware? Human intelligence is privileged mainly by being the least possible level of intelligence that suffices to construct a computer; if it were possible to construct a computer with less intelligence, we'd be having this conversation at that level of intelligence instead.\nBut this is not a simple debate, and for a detailed consideration I'd point people at an old informal paper of mine, \"Intelligence Explosion Microeconomics\", which is unfortunately probably still the best source out there. But these are the types of questions one must ask to try to use our currently accessible evidence to reason about whether or not we'll see what's colloquially termed an \"AI FOOM\" — whether there's an extended regime where δ improvement in cognition, reinvested into self-optimization, yields greater than δ further improvements.\nAs for your question about opportunity costs:\nThere is a conceivable world where there is no intelligence explosion and no superintelligence. Or where, a related but logically distinct proposition, the tricks that machine learning experts will inevitably build up for controlling infrahuman AIs carry over pretty well to the human-equivalent and superhuman regime. Or where moral internalism is true and therefore all sufficiently advanced AIs are inevitably nice. In conceivable worlds like that, all the work and worry of the Machine Intelligence Research Institute comes to nothing and was never necessary in the first place, representing some lost number of mosquito nets that could otherwise have been bought by the Against Malaria Foundation.\nThere's also a conceivable world where you work hard and fight malaria, where you work hard and keep the carbon emissions to not much worse than they are already (or use geoengineering to mitigate mistakes already made). And then it ends up making no difference because your civilization failed to solve the AI alignment problem, and all the children you saved with those malaria nets grew up only to be killed by nanomachines in their sleep. (Vivid detail warning! 
I don't actually know what the final hours will be like and whether nanomachines will be involved. But if we're happy to visualize what it's like to put a mosquito net over a bed, and then we refuse to ever visualize in concrete detail what it's like for our civilization to fail AI alignment, that can also lead us astray.)\nI think that people who try to do thought-out philanthropy, e.g., Holden Karnofsky of GiveWell, would unhesitatingly agree that these are both conceivable worlds we prefer not to enter. The question is just which of these two worlds is more probable as the one we should avoid. And again, the central principle of rationality is not to disbelieve in goblins because goblins are foolish and low-prestige, or to believe in goblins because they are exciting or beautiful. The central principle of rationality is to figure out which observational signs and logical validities can distinguish which of these two conceivable worlds is the metaphorical equivalent of believing in goblins.\nI think it's the first world that's improbable and the second one that's probable. I'm aware that in trying to convince people of that, I'm swimming uphill against a sense of eternal normality — the sense that this transient and temporary civilization of ours that has existed for only a few decades, that this species of ours that has existed for only an eyeblink of evolutionary and geological time, is all that makes sense and shall surely last forever. But given that I do think the first conceivable world is just a fond dream, it should be clear why I don't think we should ignore a problem we'll predictably have to panic about later. The mission of the Machine Intelligence Research Institute is to do today that research which, 30 years from now, people will desperately wish had begun 30 years earlier.\n\nJohn: Does your wife Brienne believe in the Singularity?\n\nEliezer: Brienne replies:\nIf someone asked me whether I \"believed in the singularity\", I'd raise an eyebrow and ask them if they \"believed in\" robotic trucking. It's kind of a weird question. I don't know a lot about what the first fleet of robotic cargo trucks will be like, or how long they'll take to completely replace contemporary ground shipping. And if there were a culturally loaded suitcase term \"robotruckism\" that included a lot of specific technological claims along with whole economic and sociological paradigms, I'd be hesitant to say I \"believed in\" driverless trucks. I confidently forecast that driverless ground shipping will replace contemporary human-operated ground shipping, because that's just obviously where we're headed if nothing really weird happens. Similarly, I confidently forecast an intelligence explosion. That's obviously where we're headed if nothing really weird happens. I'm less sure of the other items in the \"singularity\" suitcase.\nTo avoid prejudicing the result, Brienne composed her reply without seeing my other answers. We're just well-matched.\n\nJohn: Can we create superintelligences without knowing how our brains work?\n\nEliezer: Only in the sense that you can make airplanes without knowing how a bird flies. You don't need to be an expert in bird biology, but at the same time, it's difficult to know enough to build an airplane without realizing some high-level notion of how a bird might glide or push down air with its wings. 
That's why I write about human rationality in the first place — if you push your grasp on machine intelligence past a certain point, you can't help but start having ideas about how humans could think better too.\n\nJohn: What would superintelligences want? Will they have anything resembling sexual desire?\n\nEliezer: Think of an enormous space of possibilities, a giant multidimensional sphere. This is Mind Design Space, the set of possible cognitive algorithms. Imagine that somewhere near the bottom of that sphere is a little tiny dot representing all the humans who ever lived — it's a tiny dot because all humans have basically the same brain design, with a cerebral cortex, a prefrontal cortex, a cerebellum, a thalamus, and so on. It's conserved even relative to chimpanzee brain design. Some of us are weird in little ways, you could say it's a spiky dot, but the spikes are on the same tiny scale as the dot itself; no matter how neuroatypical you are, you aren't running on a different cortical algorithm.\nAsking \"what would superintelligences want\" is a Wrong Question. Superintelligences are not this weird tribe of people who live across the water with fascinating exotic customs. \"Artificial Intelligence\" is just a name for the entire space of possibilities outside the tiny human dot. With sufficient knowledge you might be able to reach into that space of possibilities and deliberately pull out an AI that wanted things that had a compact description in human wanting-language, but that wouldn't be because this is a kind of thing that those exotic superintelligence people naturally want, it would be because you managed to pinpoint one part of the design space.\nWhen it comes to pursuing things like matter and energy, we may tentatively expect partial but not total convergence — it seems like there should be many, many possible superintelligences that would instrumentally want matter and energy in order to serve terminal preferences of tremendous variety. But even there, everything is subject to defeat by special cases. If you don't want to get disassembled for spare atoms, you can, if you understand the design space well enough, reach in and pull out a particular machine intelligence that doesn't want to hurt you.\nSo the answer to your second question about sexual desire is that if you knew exactly what you were doing and if you had solved the general problem of building AIs that stably want particular things as they self-improve and if you had solved the general problem of pinpointing an AI's utility functions at things that seem deceptively straightforward to human intuitions, and you'd solved an even harder problem of building an AI using the particular sort of architecture where 'being horny' or 'sex makes me happy' makes sense in the first place, then you could perhaps make an AI that had been told to look at humans, model what humans want, pick out the part of the model that was sexual desire, and then want and experience that thing too.\nYou could also, if you had a sufficiently good understanding of organic biology and aerodynamics, build an airplane that could mate with birds.\nI don't think this would have been a smart thing for the Wright Brothers to try to do in the early days. 
There would have been absolutely no point.\nIt does seem a lot wiser to figure out how to reach into the design space and pull out a special case of AI that will lack the default instrumental preference to disassemble us for spare atoms.\n\nJohn: I like to think superintelligent beings would be nonviolent, because they will realize that violence is stupid. Am I naive?\n\nEliezer: I think so. As David Hume might have told you, you're making a type error by trying to apply the 'stupidity' predicate to an agent's terminal values or utility function. Acts, choices, policies can be stupid given some set of preferences over final states of the world. If you happen to be an agent that has meta-preferences you haven't fully computed, you might have a platform on which to stand and call particular guesses at the derived object-level preferences 'stupid'.\nA paperclip maximizer is not making a computational error by having a preference order on outcomes that prefers outcomes with more paperclips in them. It is not standing within your own preference framework and choosing blatantly mistaken acts, nor is it standing within your meta-preference framework and making mistakes about what to prefer. It is computing the answer to a different question than the question that you are asking when you ask, \"What should I do?\" A paperclip maximizer just outputs the action leading to the greatest number of expected paperclips.\nThe fatal scenario is an AI that neither loves you nor hates you, because you're still made of atoms that it can use for something else. Game theory, and issues like cooperation in the Prisoner's Dilemma, don't emerge in all possible cases. In particular, they don't emerge when something is sufficiently more powerful than you that it can disassemble you for spare atoms whether you try to press Cooperate or Defect. 
Past that threshold, either you solved the problem of making something that didn't want to hurt you, or else you've already lost.\n\nJohn: Will superintelligences solve the \"hard problem\" of consciousness?\n\nEliezer: Yes, and in retrospect the answer will look embarrassingly obvious from our perspective.\n\nJohn: Will superintelligences possess free will?\n\nEliezer: Yes, but they won't have the illusion of free will.\n\nJohn: What's your utopia?\n\nEliezer: I refer your readers to my nonfiction Fun Theory Sequence, since I have not as yet succeeded in writing any novel set in a fun-theoretically optimal world.\n\nThe original interview can be found at AI Visionary Eliezer Yudkowsky on the Singularity, Bayesian Brains and Closet Goblins.\nOther conversations that feature MIRI researchers have included: Yudkowsky on \"What can we do now?\"; Yudkowsky on logical uncertainty; Benya Fallenstein on the Löbian obstacle to self-modifying systems; and Yudkowsky, Muehlhauser, Karnofsky, Steinhardt, and Amodei on MIRI strategy.\nThe post John Horgan interviews Eliezer Yudkowsky appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "John Horgan interviews Eliezer Yudkowsky", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=30", "id": "6bee813477b7f5d3d21d880c54bd7ba5"} {"text": "New paper: \"Defining human values for value learners\"\n\nMIRI Research Associate Kaj Sotala recently presented a new paper, \"Defining Human Values for Value Learners,\" at the AAAI-16 AI, Society and Ethics workshop.\nThe abstract reads:\nHypothetical \"value learning\" AIs learn human values and then try to act according to those values. The design of such AIs, however, is hampered by the fact that there exists no satisfactory definition of what exactly human values are. After arguing that the standard concept of preference is insufficient as a definition, I draw on reinforcement learning theory, emotion research, and moral psychology to offer an alternative definition. In this definition, human values are conceptualized as mental representations that encode the brain's value function (in the reinforcement learning sense) by being imbued with a context-sensitive affective gloss. I finish with a discussion of the implications that this hypothesis has on the design of value learners.\nEconomic treatments of agency standardly assume that preferences encode some consistent ordering over world-states revealed in agents' choices. Real-world preferences, however, have structure that is not always captured in economic models. A person can have conflicting preferences about whether to study for an exam, for example, and the choice they end up making may depend on complex, context-sensitive psychological dynamics, rather than on a simple comparison of two numbers representing how much one wants to study or not study.\nSotala argues that our preferences are better understood in terms of evolutionary theory and reinforcement learning. Humans evolved to pursue activities that are likely to lead to certain outcomes — outcomes that tended to improve our ancestors' fitness. We prefer those outcomes, even if they no longer actually maximize fitness; and we also prefer events that we have learned tend to produce such outcomes.\nAffect and emotion, on Sotala's account, psychologically mediate our preferences. We enjoy and desire states that are highly rewarding in our evolved reward function. 
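For readers who don't have the reinforcement-learning jargon to hand, the \"value function\" the abstract appeals to is the standard textbook object (notation mine, not Sotala's): the expected discounted sum of future rewards from a given state,

$$
V^{\pi}(s) \;=\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r_{t+1} \,\middle|\, s_{0} = s\right], \qquad 0 \le \gamma < 1.
$$

The proposal, roughly, is that the affective \"gloss\" on our mental representations tracks something like V rather than the raw reward r.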
Over time, we also learn to enjoy and desire states that seem likely to lead to high-reward states. On this view, our preferences function to group together events that lead on expectation to similarly rewarding outcomes for similar reasons; and over our lifetimes we come to inherently value states that lead to high reward, instead of just valuing such states instrumentally. Rather than directly mapping onto our rewards, our preferences map onto our expectation of rewards.\nSotala proposes that value learning systems informed by this model of human psychology could more reliably reconstruct human values. On this model, for example, we can expect human preferences to change as we find new ways to move toward high-reward states. New experiences can change which states my emotions categorize as \"likely to lead to reward,\" and they can thereby modify which states I enjoy and desire. Value learning systems that take these facts about humans' psychological dynamics into account may be better equipped to take our likely future preferences into account, rather than optimizing for our current preferences alone.\nThe post New paper: \"Defining human values for value learners\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New paper: “Defining human values for value learners”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=30", "id": "5478559d13e3194ce4f2196b1c3b6e86"} {"text": "February 2016 Newsletter\n\n\n\n\n\n\n\n\nResearch updates\n\nNew at IAFF: Thoughts on Logical Dutch Book Arguments; Another View of Quantilizers: Avoiding Goodhart's Law; Another Concise Open Problem\n\nGeneral updates\n\nFundraiser and grant successes: MIRI will be working with AI pioneer Stuart Russell and a to-be-determined postdoctoral researcher on the problem of corrigibility, thanks to a $75,000 grant by the Center for Long-Term Cybersecurity.\n\nNews and links\n\nIn a major break from trend in Go progress, DeepMind's AlphaGo software defeats the European Go champion 5-0. A top Go player analyzes AlphaGo's play.\nNYU hosted a Future of AI symposium this month, with a number of leading thinkers in AI and existential risk reduction in attendance.\nMarvin Minsky, one of the early architects of the field of AI, has passed away.\nLearning and Logic: Paul Christiano writes on the challenge of \"pursuing symbolically defined goals\" without known observational proxies.\nOpenAI, a new Elon-Musk-backed AI research nonprofit, answers questions on Reddit. (MIRI senior researcher Eliezer Yudkowsky also chimes in.)\nVictoria Krakovna argues that people concerned about AI safety should consider becoming AI researchers.\nThe Centre for Effective Altruism is accepting applicants through Feb. 14 to the Pareto Fellowship, a new three-month training program for ambitious altruists.\n\n\n\n\n\n\n\n\nThe post February 2016 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "February 2016 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=30", "id": "e807d8ba16d3f61ce32a165d88e8e759"} {"text": "End-of-the-year fundraiser and grant successes\n\nOur winter fundraising drive has concluded. Thank you all for your support!\nThrough the month of December, 175 distinct donors gave a total of $351,298. 
Between this fundraiser and our summer fundraiser, which brought in $630k, we've seen a surge in our donor base; our previous fundraisers over the past five years had brought in on average $250k (in the winter) and $340k (in the summer). We additionally received about $170k in 2015 grants from the Future of Life Institute, and $150k in other donations.\nIn all, we've taken in about $1.3M in grants and contributions in 2015, up from our $1M average over the previous five years. As a result, we're entering 2016 with a team of six full-time researchers and over a year of runway.\nOur next big push will be to close the gap between our new budget and our annual revenue. In order to sustain our current growth plans — which are aimed at expanding to a team of approximately ten full-time researchers — we'll need to begin consistently taking in close to $2M per year by mid-2017.\nI believe this is an achievable goal, though it will take some work. It will be even more valuable if we can overshoot this goal and begin extending our runway and further expanding our research program. On the whole, I'm very excited to see what this new year brings.\n\nIn addition to our fundraiser successes, we've begun seeing new grant-winning success. In collaboration with Stuart Russell at UC Berkeley, we've won a $75,000 grant from the Berkeley Center for Long-Term Cybersecurity. The bulk of the grant will go to funding a new postdoctoral position at UC Berkeley under Stuart Russell. The postdoc will collaborate with Russell and MIRI Research Fellow Patrick LaVictoire on the problem of AI corrigibility, as described in the grant proposal:\nConsider a system capable of building accurate models of itself and its human operators. If the system is constructed to pursue some set of goals that its operators later realize will lead to undesirable behavior, then the system will by default have incentives to deceive, manipulate, or resist its operators to prevent them from altering its current goals (as that would interfere with its ability to achieve its current goals). […]\nWe refer to agents that have no incentives to manipulate, resist, or deceive their operators as \"corrigible agents,\" using the term as defined by Soares et al. (2015). We propose to study different methods for designing agents that are in fact corrigible.\nThis postdoctoral position has not yet been filled. Expressions of interest can be emailed to us using the subject line \"UC Berkeley expression of interest.\"\nThe post End-of-the-year fundraiser and grant successes appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "End-of-the-year fundraiser and grant successes", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=30", "id": "e1181b9f6aabbb81bdc1cef09ba222b6"} {"text": "January 2016 Newsletter\n\n\n\n\n\n\n\n\nResearch updates\n\nA new paper: \"Proof-Producing Reflection for HOL\"\nA new analysis: Safety Engineering, Target Selection, and Alignment Theory\nNew at IAFF: What Do We Need Value Learning For?; Strict Dominance for the Modified Demski Prior; Reflective Probability Distributions and Standard Models of Arithmetic; Existence of Distributions That Are Expectation-Reflective and Know It; Concise Open Problem in Logical Uncertainty\n\nGeneral updates\n\nOur Winter Fundraiser is over! A total of 176 people donated $351,411, including some surprise matching donors. 
All of you have our sincere thanks.\nJed McCaleb writes on why MIRI matters, while Andrew Critch writes on the need to scale MIRI's methods.\nWe attended NIPS, which hosted a symposium on the \"social impacts of machine learning\" this year. Viktoriya Krakovna summarizes her impressions.\nWe've moved to a new, larger office with the Center for Applied Rationality (CFAR), a few floors up from our old one.\nOur paper announcements now have their own MIRI Blog category.\n\nNews and links\n\n\"The 21st Century Philosophers\": AI safety research gets covered in OZY.\nSam Altman and Elon Musk have brought together leading AI researchers to form a new $1 billion nonprofit, OpenAI. Andrej Karpathy explains OpenAI's plans (link), and Altman and Musk provide additional background (link).\nAlphabet chairman Eric Schmidt and Google Ideas director Jared Cohen write on the need to \"establish best practices to avoid undesirable outcomes\" from AI.\nA new Future of Humanity Institute (FHI) paper: \"Learning the Preferences of Ignorant, Inconsistent Agents.\"\nLuke Muehlhauser and The Telegraph signal-boost FHI's AI safety job postings (deadline Jan. 6). The Global Priorities Project is also seeking summer interns (deadline Jan. 10).\nCFAR is running a matching fundraiser through the end of January.\n\n\n\n\n\n\n\n\nThe post January 2016 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "January 2016 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=30", "id": "be66a67397b6fc0fb6f7bbe1e7d55e19"} {"text": "Safety engineering, target selection, and alignment theory\n\n\nArtificial intelligence capabilities research is aimed at making computer systems more intelligent — able to solve a wider range of problems more effectively and efficiently. We can distinguish this from research specifically aimed at making AI systems at various capability levels safer, or more \"robust and beneficial.\" In this post, I distinguish three kinds of direct research that might be thought of as \"AI safety\" work: safety engineering, target selection, and alignment theory.\nImagine a world where humans somehow developed heavier-than-air flight before developing a firm understanding of calculus or celestial mechanics. In a world like that, what work would be needed in order to safely transport humans to the Moon?\nIn this case, we can say that the main task at hand is one of engineering a rocket and refining fuel such that the rocket, when launched, accelerates upwards and does not explode. The boundary of space can be compared to the boundary between narrowly intelligent and generally intelligent AI. Both boundaries are fuzzy, but have engineering importance: spacecraft and aircraft have different uses and face different constraints.\nPaired with this task of developing rocket capabilities is a safety engineering task. Safety engineering is the art of ensuring that an engineered system provides acceptable levels of safety. When it comes to achieving a soft landing on the Moon, there are many different roles for safety engineering to play. One team of engineers might ensure that the materials used in constructing the rocket are capable of withstanding the stress of a rocket launch with significant margin for error. Another might design escape systems that ensure the humans in the rocket can survive even in the event of failure. 
Another might design life support systems capable of supporting the crew in dangerous environments.\nA separate important task is target selection, i.e., picking where on the Moon to land. In the case of a Moon mission, targeting research might entail things like designing and constructing telescopes (if they didn't exist already) and identifying a landing zone on the Moon. Of course, only so much targeting can be done in advance, and the lunar landing vehicle may need to be designed so that it can alter the landing target at the last minute as new data comes in; this again would require feats of engineering.\nBeyond the task of (safely) reaching escape velocity and figuring out where you want to go, there is one more crucial prerequisite for landing on the Moon. This is rocket alignment research, the technical work required to reach the correct final destination. We'll use this as an analogy to illustrate MIRI's research focus, the problem of artificial intelligence alignment.\n\nThe alignment challenge\nHitting a certain target on the Moon isn't as simple as carefully pointing the nose of the rocket at the relevant lunar coordinate and hitting \"launch\" — not even if you trust your pilots to make course corrections as necessary. There's also the important task of plotting trajectories between celestial bodies.\n\nThis rocket alignment task may require a distinct body of theoretical knowledge that isn't required just for getting a payload off of the planet. Without calculus, designing a functional rocket would be enormously difficult. Still, with enough tenacity and enough resources to spare, we could imagine a civilization reaching space after many years of trial and error — at which point they would be confronted with the problem that reaching space isn't sufficient for steering toward a specific location.1\nThe first rocket alignment researchers might ask, \"What trajectory would we have our rocket take under ideal conditions, without worrying about winds or explosions or fuel efficiency?\" If even that question were beyond their current abilities, they might simplify the problem still further, asking, \"At what angle and velocity would we fire a cannonball such that it enters a stable orbit around Earth, assuming that Earth is perfectly spherical and has no atmosphere?\"\nTo an early rocket engineer, for whom even the problem of building any vehicle that makes it off the launch pad remains a frustrating task, the alignment theorist's questions might look out-of-touch. The engineer may ask \"Don't you know that rockets aren't going to be fired out of cannons?\" or \"What does going in circles around the Earth have to do with getting to the Moon?\" Yet understanding rocket alignment is quite important when it comes to achieving a soft landing on the Moon. If you don't yet know at what angle and velocity to fire a cannonball such that it would end up in a stable orbit on a perfectly spherical planet with no atmosphere, then you may need to develop a better understanding of celestial mechanics before you attempt a Moon mission.\nThree forms of AI safety research\nThe case is similar with AI research. AI capabilities work comes part and parcel with associated safety engineering tasks. Working today, an AI safety engineer might focus on making the internals of large classes of software more transparent and interpretable by humans. They might ensure that the system fails gracefully in the face of adversarial observations. 
They might design security protocols and early warning systems that help operators prevent or handle system failures.2\nAI safety engineering is indispensable work, and it's infeasible to separate safety engineering from capabilities engineering. Day-to-day safety work in aerospace engineering doesn't rely on committees of ethicists peering over engineers' shoulders. Some engineers will happen to spend their time on components of the system that are there for reasons of safety — such as failsafe mechanisms or fallback life-support — but safety engineering is an integral part of engineering for safety-critical systems, rather than a separate discipline.\nIn the domain of AI, target selection addresses the question: if one could build a powerful AI system, what should one use it for? The potential development of superintelligence raises a number of thorny questions in theoretical and applied ethics. Some of those questions can plausibly be resolved in the near future by moral philosophers and psychologists, and by the AI research community. Others will undoubtedly need to be left to the future. Stuart Russell goes so far as to predict that \"in the future, moral philosophy will be a key industry sector.\" We agree that this is an important area of study, but it is not the main focus of the Machine Intelligence Research Institute.\nResearchers at MIRI focus on problems of AI alignment: the study of how in principle to direct a powerful AI system towards a specific goal. Where target selection is about the destination of the \"rocket\" (\"what effects do we want AI systems to have on our civilization?\") and AI capabilities engineering is about getting the rocket to escape velocity (\"how do we make AI systems powerful enough to help us achieve our goals?\"), alignment is about knowing how to aim rockets towards particular celestial bodies (\"assuming we could build highly capable AI systems, how would we direct them at our targets?\"). Since our understanding of AI alignment is still at the \"what is calculus?\" stage, we ask questions analogous to \"at what angle and velocity would we fire a cannonball to put it in a stable orbit, if Earth were perfectly spherical and had no atmosphere?\"\nSelecting promising AI alignment research paths is not a simple task. With the benefit of hindsight, it's easy enough to say that early rocket alignment researchers should begin by inventing calculus and studying gravitation. For someone who doesn't yet have a clear understanding of what \"calculus\" or \"gravitation\" are, however, choosing research topics might be quite a bit more difficult. The fruitful research directions would need to compete with fruitless ones, such as studying aether or Aristotelian physics; and which research programs are fruitless may not be obvious in advance.\nToward a theory of alignable agents\nWhat are some plausible candidates for the role of \"calculus\" or \"gravitation\" in the field of AI?\n\nAt MIRI, we currently focus on subjects such as good reasoning under deductive limitations (logical uncertainty), decision theories that work well even for agents embedded in large environments, and reasoning procedures that approve of the way they reason. 
This research often involves building toy models and studying problems under dramatic simplifications, analogous to assuming a perfectly spherical Earth with no atmosphere.\nDeveloping theories of logical uncertainty isn't what most people have in mind when they think of \"AI safety research.\" A natural thought here is to ask what specifically goes wrong if we don't develop such theories. If an AI system can't perform bounded reasoning in the domain of mathematics or logic, that doesn't sound particularly \"unsafe\" — a system that needs to reason mathematically but can't might be fairly useless, but it's harder to see it becoming dangerous.\nOn our view, understanding logical uncertainty is important for helping us understand the systems we build well enough to justifiably conclude that they can be aligned in the first place. An analogous question in the case of rocket alignment might run: \"If you don't develop calculus, what bad thing happens to your rocket? Do you think the pilot will be struggling to make a course correction, and find that they simply can't add up the tiny vectors fast enough?\" The answer, though, isn't that the pilot might struggle to correct their course, but rather that the trajectory that you thought led to the moon takes the rocket wildly off-course. The point of developing calculus is not to allow the pilot to make course corrections quickly; the point is to make it possible to discuss curved rocket trajectories in a world where the best tools available assume that rockets move in straight lines.\nThe case is similar with logical uncertainty. The problem is not that we visualize a specific AI system encountering a catastrophic failure because it mishandles logical uncertainty. The problem is that our best existing tools for analyzing rational agency assume that those agents are logically omniscient, making our best theories incommensurate with our best practical AI designs.3\nAt this point, the goal of alignment research is not to solve particular engineering problems. The goal of early rocket alignment research would be to develop shared language and tools for generating and evaluating rocket trajectories, which will require developing calculus and celestial mechanics if they do not already exist. Similarly, the goal of AI alignment research is to develop shared language and tools for generating and evaluating methods by which powerful AI systems could be designed to act as intended.\nOne might worry that it is difficult to set benchmarks of success for alignment research. Is a Newtonian understanding of gravitation sufficient to attempt a Moon landing, or must one develop a complete theory of general relativity before believing that one can land softly on the Moon?4\nIn the case of AI alignment, there is at least one obvious benchmark to focus on initially. Imagine we possessed an incredibly powerful computer with access to the internet, an automated factory, and large sums of money. If we could program that computer to reliably achieve some simple goal (such as producing as much diamond as possible), then a large share of the AI alignment research would be completed. This is because a large share of the problem is in understanding autonomous systems that are stable, error-tolerant, and demonstrably aligned with some goal. 
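For comparison, the \"cannonball in a stable orbit\" question posed earlier has a crisp closed-form answer under its stated idealizations (a point-mass spherical Earth, no atmosphere), a worked number of my own rather than part of the original post: fire horizontally at the circular-orbit speed

$$
v \;=\; \sqrt{\frac{G M_{\oplus}}{r}} \;\approx\; \sqrt{\frac{3.99 \times 10^{14}\ \mathrm{m^{3}/s^{2}}}{6.4 \times 10^{6}\ \mathrm{m}}} \;\approx\; 7.9\ \mathrm{km/s}.
$$

An unambiguous target like this, a number you either can or cannot derive from your theory, is the sort of benchmark the diamond-production question above is meant to supply for alignment research.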
Developing the ability to steer rockets in some direction with confidence is harder than developing the additional ability to steer rockets to a specific lunar location.\nThe pursuit of a goal such as this one is more or less MIRI's approach to AI alignment research. We think of this as our version of the question, \"Could you hit the Moon with a rocket if fuel and winds were no concern?\" Answering that question, on its own, won't ensure that smarter-than-human AI systems are aligned with our goals; but it would represent a major advance over our current knowledge, and it doesn't look like the kind of basic insight that we can safely skip over.\nWhat next?\nOver the past year, we've seen a massive increase in attention towards the task of ensuring that future AI systems are robust and beneficial. AI safety work is being taken very seriously, and AI engineers are stepping up and acknowledging that safety engineering is not separable from capabilities engineering. It is becoming apparent that as the field of artificial intelligence matures, safety engineering will become a more and more firmly embedded part of AI culture. Meanwhile, new investigations of target selection and other safety questions will be showcased at an AI and Ethics workshop at AAAI-16, one of the larger annual conferences in the field.\nA fourth variety of safety work is also receiving increased support: strategy research. If your nation is currently engaged in a cold war and locked in a space race, you may well want to consult with game theorists and strategists so as to ensure that your attempts to put a person on the Moon do not upset a delicate political balance and lead to a nuclear war.5 If international coalitions will be required in order to establish treaties regarding the use of space, then diplomacy may also become a relevant aspect of safety work. The same principles hold when it comes to AI, where coalition-building and global coordination may play an important role in the technology's development and use.\nStrategy research has been on the rise this year. AI Impacts is producing strategic analyses relevant to the designers of this potentially world-changing technology, and will soon be joined by the Strategic Artificial Intelligence Research Centre. The new Leverhulme Centre for the Future of Intelligence will be pulling together people across many different disciplines to study the social impact of AI, forging new collaborations. The Global Priorities Project, meanwhile, is analyzing what types of interventions might be most effective at ensuring positive outcomes from the development of powerful AI systems.\nThe field is moving fast, and these developments are quite exciting. Throughout it all, though, AI alignment research in particular still seems largely under-served.\nMIRI is not the only group working on AI alignment; a handful of researchers from other organizations and institutions are also beginning to ask similar questions. MIRI's particular approach to AI alignment research is by no means the only one available — when first thinking about how to put humans on the Moon, one might want to consider both rockets and space elevators. Regardless of who does the research or where they do it, it is important that alignment research receive attention.\nSmarter-than-human AI systems may be many decades away, and they may not closely resemble any existing software. This limits our ability to identify productive safety engineering approaches. 
At the same time, the difficulty of specifying our values makes it difficult to identify productive research in moral theory. Alignment research has the advantage of being abstract enough to be potentially applicable to a wide variety of future computing systems, while being formalizable enough to admit of unambiguous progress. By prioritizing such work, therefore, we believe that the field of AI safety will be able to ground itself in technical work without losing sight of the most consequential questions in AI.\nSafety engineering, moral theory, strategy, and general collaboration-building are all important parts of the project of developing safe and useful AI. On the whole, these areas look poised to thrive as a result of the recent rise in interest in long-term outcomes, and I'm thrilled to see more effort and investment going towards those important tasks.\nThe question is: What do we need to invest in next? The type of growth that I most want to see happen in the AI community next would be growth in AI alignment research, via the formation of new groups or organizations focused primarily on AI alignment and the expansion of existing AI alignment teams at MIRI, UC Berkeley, the Future of Humanity Institute at Oxford, and other institutions.\nBefore trying to land a rocket on the Moon, it's important that we know how we would put a cannonball into a stable orbit. Absent a good theoretical understanding of rocket alignment, it might well be possible for a civilization to eventually reach escape velocity; but getting somewhere valuable and exciting and new, and getting there reliably, is a whole extra challenge.\n\nMy thanks to Eliezer Yudkowsky for introducing the idea behind this post, and to Lloyd Strohl III, Rob Bensinger, and others for helping review the content.\n1. Similarly, we could imagine a civilization that lives on the only planet in its solar system, or lives on a planet with perpetual cloud cover obscuring all objects except the Sun and Moon. Such a civilization might have an adequate understanding of terrestrial mechanics while lacking a model of celestial mechanics and lacking the knowledge that the same dynamical laws hold on Earth and in space. There would then be a gap in experts' theoretical understanding of rocket alignment, distinct from gaps in their understanding of how to reach escape velocity.\n2. Roman Yampolskiy has used the term \"AI safety engineering\" to refer to the study of AI systems that can provide proofs of their safety for external verification, including some theoretical research that we would term \"alignment research.\" His usage differs from the usage here.\n3. Just as calculus is valuable both for building rockets that can reach escape velocity and for directing rockets towards specific lunar coordinates, a formal understanding of logical uncertainty might be useful both for improving AI capabilities and for improving the degree to which we can align powerful AI systems. The main motivation for studying logical uncertainty is that many other AI alignment problems are blocked on models of deductively limited reasoners, in the same way that trajectory-plotting could be blocked on models of curved paths.\n4. In either case, of course, we wouldn't want to put a moratorium on the space program while we wait for a unified theory of quantum mechanics and general relativity. We don't need a perfect understanding of gravity.\n5. This was a role historically played by the RAND corporation.\nThe post Safety engineering, target selection, and alignment theory appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Safety engineering, target selection, and alignment theory", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=31", "id": "636ef523ff7a1c42e66ea25349af234e"} {"text": "The need to scale MIRI's methods\n\nAndrew Critch, one of the new additions to MIRI's research team, has taken the opportunity of MIRI's winter fundraiser to write on his personal blog about why he considers MIRI's work important. Some excerpts:\nSince a team of CFAR alumni banded together to form the Future of Life Institute (FLI), organized an AI safety conference in Puerto Rico in January of this year, co-authored the FLI research priorities proposal, and attracted $10MM of grant funding from Elon Musk, a lot of money has moved under the label \"AI Safety\" in the past year. Nick Bostrom's Superintelligence was also a major factor in this amazing success story.\nA lot of wonderful work is being done under these grants, including a lot of proposals for solutions to known issues with AI safety, which I find extremely heartening. However, I'm worried that if MIRI doesn't scale at least somewhat to keep pace with all this funding, it just won't be spent nearly as well as it would have if MIRI were there to help.\n\nWe have to remember that AI safety did not become mainstream by a spontaneous collective awakening. It was through years of effort on the part of MIRI and collaborators at FHI struggling to identify unknown unknowns about how AI might surprise us, and struggling further to learn to explain these ideas in enough technical detail that they might be adopted by mainstream research, which is finally beginning to happen.\nBut what about the parts we're wrong about? What about the sub-problems we haven't identified yet, that might end up neglected in the mainstream the same way the whole problem was neglected 5 years ago? I'm glad the AI/ML community is more aware of these issues now, but I want to make sure MIRI can grow fast enough to keep this growing field on track.\nNow, you might think that now that other people are \"on the issue\", it'll work itself out. That might be so.\nBut just because some of MIRI's conclusions are now being widely adopted doesn't mean its methodology is. The mental movement\n\n\n\"Someone has pointed out this safety problem to me, let me try to solve it!\"\n\n\nis very different from\n\n\n\"Someone has pointed out this safety solution to me, let me try to see how it's broken!\"\n\n\nAnd that second mental movement is the kind that allowed MIRI to notice AI safety problems in the first place. Cybersecurity professionals seem to carry out this movement easily: security expert Bruce Schneier calls it the security mindset. The SANS institute calls it red teaming. Whatever you call it, AI/ML people are still more in maker-mode than breaker-mode, and are not yet, to my eye, identifying any new safety problems.\nI do think that different organizations should probably try different approaches to the AI safety problem, rather than perfectly copying MIRI's approach and research agenda. But I think breaker-mode/security mindset does need to be a part of every approach to AI safety. 
And if MIRI doesn't scale up to keep pace with all this new funding, I'm worried that the world is just about to copy-paste MIRI's best-2014-impression of what's important in AI safety, and leave behind the self-critical methodology that generated these ideas in the first place… which is a serious pitfall given all the unknown unknowns left in the field.\nSee our funding drive post to help contribute or to learn more about our plans. For more about AI risk and security mindset, see also Luke Muehlhauser's post on the topic.\nThe post The need to scale MIRI's methods appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "The need to scale MIRI’s methods", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=31", "id": "895f307deabdc9ed4db07f2fd61ff859"} {"text": "Jed McCaleb on Why MIRI Matters\n\nThis is a guest post by Jed McCaleb, one of MIRI's top contributors, for our winter fundraiser.\n\nA few months ago, several leaders in the scientific community signed an open letter pushing for oversight of the research and development of artificial intelligence, in order to mitigate the risks and ensure the societal benefit of the advanced technology. Researchers largely agree that AI is likely to begin outperforming humans on most cognitive tasks in this century. \nSimilarly, I believe we'll see the promise of human-level AI come to fruition much sooner than we've fathomed. Its effects will likely be transformational — for the better if it is used to help improve the human condition, or for the worse if it is used incorrectly.\nAs AI agents become more capable, it becomes more important to analyze and verify their decisions and goals. MIRI's focus is on how we can create highly reliable agents that can learn human values and the overarching need for better decision-making processes that power these new technologies. \nThe past few years have seen a vibrant and growing AI research community. As the space continues to flourish, the need for collaboration will continue to grow as well. Organizations like MIRI that are dedicated to security and safety engineering help fill this need. And, as a nonprofit, MIRI's research is free from profit obligations. This independence in research is important because it will lead to safer and more neutral results. \nBy supporting organizations like MIRI, we're putting the safeguards in place to make sure that this immensely powerful technology is used for the greater good. For humanity's benefit, we need to guarantee that AI systems can reliably pursue goals that are aligned with society's human values. If organizations like MIRI are able to help engineer this level of technological advancement and awareness in AI systems, imagine the endless possibilities of how it can help improve our world. It's critical that we put the infrastructure in place in order to ensure that AI will be used to make the lives of people better. This is why I've donated to MIRI, and why I believe it's a worthy cause that you should consider as well.\n\nDonate Now\n\nJed McCaleb created eDonkey, one of the largest file-sharing networks of its time, as well as Mt. Gox, the first Bitcoin exchange. Recognizing that the world's financial infrastructure is broken and that too many people are left without resources, he cofounded Stellar in 2014. 
Jed is also an advisor to MIRI.\nThe post Jed McCaleb on Why MIRI Matters appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Jed McCaleb on Why MIRI Matters", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=31", "id": "93e7b43b29a02291925a9f90f72df801"} {"text": "OpenAI and other news\n\nWe're only 11 days into December, and this month is shaping up to be a momentous one.\nOn December 3, the University of Cambridge partnered with the University of Oxford, Imperial College London, and UC Berkeley to launch the Leverhulme Centre for the Future of Intelligence. The Cambridge Centre for the Study of Existential Risk (CSER) helped secure initial funding for the new independent center, in the form of a $15M grant to be disbursed over ten years. CSER and Leverhulme CFI plan to collaborate closely, with the latter focusing on AI's mid- and long-term social impact.\nMeanwhile, the Strategic Artificial Intelligence Research Centre (SAIRC) is hiring its first research fellows in machine learning, policy analysis, and strategy research: details. SAIRC will function as an extension of two existing institutions: CSER, and the Oxford-based Future of Humanity Institute. As Luke Muehlhauser has noted, if you're an AI safety \"lurker,\" now is an ideal time to de-lurk and get in touch.\nMIRI's research program is also growing quickly, with mathematician Scott Garrabrant joining our core team tomorrow. Our winter fundraiser is in full swing, and multiple matching opportunities have sprung up to bring us within a stone's throw of our first funding target.\nThe biggest news, however, is the launch of OpenAI, a new $1 billion research nonprofit staffed with top-notch machine learning experts and co-chaired by Sam Altman and Elon Musk. The OpenAI team describes their mission:\nOur goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.\nI've been in conversations with Sam Altman and Greg Brockman at OpenAI as their team has come together. They've expressed a keen interest in making sure that AI has a positive impact, and we're looking forward to future collaborations between our teams. I'm excited to see OpenAI joining the space, and I'm optimistic that their entrance will result in promising new AI alignment research in addition to AI capabilities research.\n2015 has truly been an astounding year — and I'm eager to see what 2016 holds in store.\n\nNov. 2021 update: The struck sentence in this post is potentially misleading as a description of my epistemic state at the time, in two respects:\n1. My feelings about OpenAI at the time were, IIRC, some cautious optimism plus a bunch of pessimism. My sentence was written only from the optimism, in a way that was misleading about my overall state.\n2. 
The sentence here is unintentionally ambiguous: I intended to communicate something like \"OpenAI is mainly a capabilities org, but I'm hopeful that they'll do a good amount of alignment research too\", but I accidentally left open the false interpretation \"I'm hopeful that OpenAI will do a bunch of alignment research, and I'm hopeful that OpenAI will do a bunch of capabilities research too\".\nThe post OpenAI and other news appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "OpenAI and other news", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=31", "id": "9fd9f859b2ca1508d59a8acee8fbcf0d"} {"text": "New paper: \"Proof-producing reflection for HOL\"\n\nMIRI Research Fellow Benya Fallenstein and Research Associate Ramana Kumar have co-authored a new paper on machine reflection, \"Proof-producing reflection for HOL with an application to model polymorphism.\"\nHOL stands for Higher Order Logic, here referring to a popular family of proof assistants based on Church's type theory. Kumar and collaborators have previously formalized within HOL (specifically, HOL4) what it means for something to be provable in HOL, and what it means for something to be a model of HOL.1 In \"Self-formalisation of higher-order logic,\" Kumar, Arthan, Myreen, and Owens demonstrated that if something is provable in HOL, then it is true in all models of HOL.\n\"Proof-producing reflection for HOL\" builds on this result by demonstrating a formal correspondence between the model of HOL within HOL (\"inner HOL\") and HOL itself (\"outer HOL\"). Informally speaking, Fallenstein and Kumar show that one can always build an interpretation of terms in inner HOL such that they have the same meaning as terms in outer HOL. The authors then show that if statements of a certain kind are provable in HOL's model of itself, they are true in (outer) HOL. This correspondence enables the authors to use HOL to implement model polymorphism, the approach to machine self-verification described in Section 6.3 of \"Vingean reflection: Reliable reasoning for self-improving agents.\"2\nThis project is motivated by the fact that relatively little hands-on work has been done on modeling formal verification systems in formal verification systems, and especially on modeling them in themselves. Fallenstein notes that focusing only on the mathematical theory of Vingean reflection might make us poorly calibrated about where the engineering difficulties lie for software implementations. In the course of implementing model polymorphism, Fallenstein and Kumar indeed encountered difficulties that were not obvious from past theoretical work, the most important of which arose from HOL's polymorphism.\nFallenstein and Kumar's paper was presented at ITP 2015 and can be found online or in the associated conference proceedings. Thanks to a grant by the Future of Life Institute, Kumar and Fallenstein will be continuing their collaboration on this project. Following up on \"Proof-producing reflection for HOL,\" Kumar and Fallenstein's next goal will be to develop toy models of agents within HOL proof assistants that reason using model polymorphism.\nKumar showed that if there is a model of set theory in HOL, there is a model of HOL in HOL. 
Fallenstein and Kumar additionally show that there is a model of set theory in HOL if a simpler axiom holds.For more on the role of logical reasoning in machine reflection, see Fallenstein's 2013 conversation about self-modifying systems.The post New paper: \"Proof-producing reflection for HOL\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New paper: “Proof-producing reflection for HOL”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=31", "id": "8419d62ee27b6ececd40df890f26150a"} {"text": "December 2015 Newsletter\n\n\n\n\n\n\n\n\nResearch updates\n\nNew papers: \"Formalizing Convergent Instrumental Goals\" and \"Quantilizers: A Safer Alternative to Maximizers for Limited Optimization.\" These papers have been accepted to the AAAI-16 workshop on AI, Ethics and Society.\nNew at AI Impacts: Recently at AI Impacts\nNew at IAFF: A First Look at the Hard Problem of Corrigibility; Superrationality in Arbitrary Games; A Limit-Computable, Self-Reflective Distribution; Reflective Oracles and Superrationality: Prisoner's Dilemma\nScott Garrabrant joins MIRI's full-time research team this month.\n\nGeneral updates\n\nOur Winter Fundraiser is now live, and includes details on where we've been directing our research efforts in 2015, as well as our plans for 2016. The fundraiser will conclude on December 31.\nA 2014 collaboration between MIRI and the Oxford-based Future of Humanity Institute (FHI), \"The Errors, Insights, and Lessons of Famous AI Predictions,\" is being republished next week in the anthology Risks of Artificial Intelligence. Also included will be Daniel Dewey's important strategic analysis \"Long-Term Strategies for Ending Existential Risk from Fast Takeoff\" and articles by MIRI Research Advisors Steve Omohundro and Roman Yampolskiy.\nWe recently spent an enjoyable week in the UK comparing notes, sharing research, and trading ideas with FHI. During our visit, MIRI researcher Andrew Critch led a \"Big-Picture Thinking\" seminar on long-term AI safety (video).\n\nNews and links\n\nIn collaboration with Oxford, UC Berkeley, and Imperial College London, Cambridge University is launching a new $15 million research center to study AI's long-term impact: the Leverhulme Centre for the Future of Intelligence.\nThe Strategic Artificial Intelligence Research Centre, a new joint initiative between FHI and the Cambridge Centre for the Study of Existential Risk, is accepting applications for three research positions between now and January 6: research fellows in machine learning and the control problem, in policy work and emerging technology governance, and in general AI strategy. FHI is additionally seeking a research fellow to study AI risk and ethics. 
(Full announcement.)\nFHI founder Nick Bostrom makes Foreign Policy's Top 100 Global Thinkers list.\nBostrom (link), IJCAI President Francesca Rossi (link), and Vicarious co-founder Dileep George (link) weigh in on AI safety in a Washington Post series.\nFuture of Life Institute co-founder Viktoriya Krakovna discusses risks from general AI without an intelligence explosion.\n\n\n\n\n\n\n\n\nThe post December 2015 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "December 2015 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=31", "id": "7f54510d60c4935c5714c8531dd10d02"} {"text": "MIRI's 2015 Winter Fundraiser!\n\nThe Machine Intelligence Research Institute's 2015 winter fundraising drive begins today, December 1! Our current progress:\n \n\n\n\n\nDonate Now\n\n\n\n \nThe drive will run for the month of December, and will help support MIRI's research efforts aimed at ensuring that smarter-than-human AI systems have a positive impact.\n \n\n\n\nMIRI's Research Focus\nThe field of AI has a goal of automating perception, reasoning, and decision-making — the many abilities we group under the label \"intelligence.\" Most leading researchers in AI expect our best AI algorithms to begin strongly outperforming humans this century in most cognitive tasks. In spite of this, relatively little time and effort has gone into trying to identify the technical prerequisites for making smarter-than-human AI systems safe and useful.\nWe believe that several basic theoretical questions will need to be answered in order to make advanced AI systems stable, transparent, and error-tolerant, and in order to specify correct goals for such systems. Our technical agenda describes what we think are the most important and tractable of these questions.\n Read More\n\n\n\nSmarter-than-human AI may be 50 years or more away. There are a number of reasons we nonetheless consider it important to begin work on these problems today:\n\nHigh capability ceilings — Humans appear to be nowhere near physical limits for cognitive ability, and even modest advantages in intelligence may yield decisive strategic advantages for AI systems.\n\"Sorcerer's Apprentice\" scenarios — Smarter AI systems can come up with increasingly creative ways to meet programmed goals. The harder it is to anticipate how a goal will be achieved, the harder it is to specify the correct goal.\nConvergent instrumental goals — By default, highly capable decision-makers are likely to have incentives to treat human operators adversarially.\nAI speedup effects — Progress in AI is likely to accelerate as AI systems approach human-level proficiency in skills like software engineering.\n\nWe think MIRI is well-positioned to make progress on these problems for four reasons: our initial technical results have been promising (see our publications), our methodology has a good track record of working in the past (see MIRI's Approach), we have already had a significant influence on the debate about long-run AI outcomes (see Assessing Our Past and Potential Impact), and we have an exclusive focus on these issues (see What Sets MIRI Apart?). 
MIRI is currently the only organization specializing in long-term technical AI safety research, and our independence from industry and academia allows us to effectively address gaps in other institutions' research efforts.\n\n\n\n\n\n\n\nGeneral Progress This Year\nIn June, Luke Muehlhauser left MIRI for a research position at the Open Philanthropy Project. I replaced Luke as MIRI's Executive Director, and I'm happy to say that the transition has gone well. We've split our time between technical research and academic outreach, running a workshop series aimed at introducing a wider scientific audience to our work and sponsoring a three-week summer fellows program aimed at training skills required to do groundbreaking theoretical research.\nOur fundraiser this summer was our biggest to date. We raised a total of $631,957 from 263 distinct donors, smashing our previous funding drive record by over $200,000. Medium-sized donors stepped up their game to help us hit our first two funding targets: many more donors gave between $5,000 and $50,000 than in past fundraisers. Our successful fundraisers, workshops, and fellows program have allowed us to ramp up our growth substantially, and have already led directly to several new researcher hires.\n Read More\n\n\n\n2015 has been an astounding year for AI safety engineering. In January, the Future of Life Institute brought together the leading organizations studying long-term AI risk and top AI researchers in academia and industry for a \"Future of AI\" conference in San Juan, Puerto Rico. Out of this conference came a widely endorsed open letter, accompanied by a research priorities document drawing heavily on MIRI's work. Two prominent AI scientists who helped organize the event, Stuart Russell and Bart Selman, have since become MIRI research advisors (in June and July, respectively). The conference also resulted in an AI safety grants program, with MIRI receiving some of the largest grants.In addition to the FLI conference, we've spoken this year at AAAI-15, AGI-15, LORI 2015, EA Global, the American Physical Society, and the leading U.S. science and technology think tank, ITIF. We also co-organized a decision theory conference at Cambridge University and ran a ten-week seminar series at UC Berkeley.\nThree new full-time research fellows have joined our team this year: Patrick LaVictoire in March, Jessica Taylor in August, and Andrew Critch in September. Scott Garrabrant will become our newest research fellow this month, after having made major contributions as a workshop attendee and research associate.\nMeanwhile, our two new research interns, Kaya Stechly and Rafael Cosman, have been going through old results and consolidating and polishing material into new papers; and three of our new research associates, Vanessa Kosoy, Abram Demski, and Tsvi Benson-Tilsen, have been producing a string of promising results on our research forum. Another intern, Jack Gallagher, contributed to our type theory project over the summer.\nTo accommodate our growing team, we've recently hired a new office manager, Andrew Lapinski-Barker, and will be moving into a larger office space this month. On the whole, I'm very pleased with our new academic collaborations, outreach efforts, and growth.\n\n\n\n\n\n\n\nResearch Progress This Year\nAs our research projects and collaborations have multiplied, we've made more use of online mechanisms for quick communication and feedback between researchers. 
In March, we launched the Intelligent Agent Foundations Forum, a discussion forum for AI alignment research. Many of our subsequent publications have been developed from material on the forum, beginning with Patrick LaVictoire's \"An introduction to Löb's theorem in MIRI's research.\"\nWe have also produced a number of new papers in 2015 and, most importantly, arrived at new research insights.\n Read More\n\n\n\nIn July, we revised our primary technical agenda paper for 2016 publication. Our other new publications and results can be categorized by their place in the research agenda:We've been exploring new approaches to the problems of naturalized induction and logical uncertainty, with early results published in various venues, including Fallenstein et al.'s \"Reflective oracles\" (presented in abridged form at LORI 2015) and \"Reflective variants of Solomonoff induction and AIXI\" (presented at AGI-15), and Garrabrant et al.'s \"Asymptotic logical uncertainty and the Benford test\" (available on arXiv). We also published the overview papers \"Formalizing two problems of realistic world-models\" and \"Questions of reasoning under logical uncertainty.\"\nIn decision theory, Patrick LaVictoire and others have developed new results pertaining to bargaining and division of trade gains, using the proof-based decision theory framework (example). Meanwhile, the team has been developing a better understanding of the strengths and limitations of different approaches to decision theory, an effort spearheaded by Eliezer Yudkowsky, Benya Fallenstein, and me, culminating in some insights that will appear in a paper next year. Andrew Critch has proved some promising results about bounded versions of proof-based decision-makers, which will also appear in an upcoming paper. Additionally, we presented a shortened version of our overview paper at AGI-15.\nIn Vingean reflection, Benya Fallenstein and Research Associate Ramana Kumar collaborated on \"Proof-producing reflection for HOL\" (presented at ITP 2015) and have been working on an FLI-funded implementation of reflective reasoning in the HOL theorem prover. Separately, the reflective oracle framework has helped us gain a better understanding of what kinds of reflection are and are not possible, yielding some nice technical results and a few insights that seem promising. We also published the overview paper \"Vingean reflection.\"\nJessica Taylor, Benya Fallenstein, and Eliezer Yudkowsky have focused on error tolerance on and off throughout the year. 
We released Taylor's \"Quantilizers\" (accepted to a workshop at AAAI-16) and presented the paper \"Corrigibility\" at a AAAI-15 workshop.\nIn value specification, we published the AAAI-15 workshop paper \"Concept learning for safe autonomous AI\" and the overview paper \"The value learning problem.\" With support from an FLI grant, Jessica Taylor is working on better formalizing subproblems in this area, and has recently begun writing up her thoughts on this subject on the research forum.\nLastly, in forecasting and strategy, we published \"Formalizing convergent instrumental goals\" (accepted to a AAAI-16 workshop) and two historical case studies: \"The Asilomar Conference\" and \"Leó Szilárd and the danger of nuclear weapons.\" Many other strategic analyses have been posted to the recently revamped AI Impacts site, where Katja Grace has been publishing research about patterns in technological development.\n\n\n\n\n\n\n\nFundraiser Targets and Future Plans\nLike our last fundraiser, this will be a non-matching fundraiser with multiple funding targets our donors can choose between to help shape MIRI's trajectory. Our successful summer fundraiser has helped determine how ambitious we're making our plans; although we may still slow down or accelerate our growth based on our fundraising performance, our current plans assume a budget of roughly $1,825,000 per year.\nOf this, about $100,000 is being paid for in 2016 through FLI grants, funded by Elon Musk and the Open Philanthropy Project. The rest depends on our fundraising and grant-writing success. We have a twelve-month runway as of January 1, which we would ideally like to extend.\nTaking all of this into account, our winter funding targets are:\n\n\n\nTarget 1 — $150k: Holding steady. At this level, we would have enough funds to maintain our runway in early 2016 while continuing all current operations, including running workshops, writing papers, and attending conferences.\n\nTarget 2 — $450k: Maintaining MIRI's growth rate. At this funding level, we would be much more confident that our new growth plans are sustainable, and we would be able to devote more attention to academic outreach. We would be able to spend less staff time on fundraising in the coming year, and might skip our summer fundraiser.\n\nTarget 3 — $1M: Bigger plans, faster growth. At this level, we would be able to substantially increase our recruiting efforts and take on new research projects. It would be evident that our donors' support is stronger than we thought, and we would move to scale up our plans and growth rate accordingly.\n\nTarget 4 — $6M: A new MIRI. At this point, MIRI would become a qualitatively different organization. With this level of funding, we would be able to diversify our research initiatives and begin branching out from our current agenda into alternative angles of attack on the AI alignment problem.\n\n\n\n Read More\n\n\n\nOur projected spending over the next twelve months, excluding earmarked funds for the independent AI Impacts project, breaks down as follows: \n\nOur largest cost ($700,000) is in wages and benefits for existing research staff and contracted researchers, including research associates. Our current priority is to further expand the team. 
We expect to spend an additional $150,000 on salaries and benefits for new research staff in 2016, but that number could go up or down significantly depending on when new research fellows begin work:\n\nMihály Bárász, who was originally slated to begin in November 2015, has delayed his start date due to unexpected personal circumstances. He plans to join the team in 2016.\nWe are recruiting a specialist for our type theory in type theory project, which is aimed at developing simple programmatic models of reflective reasoners. Interest in this topic has been increasing recently, which is exciting; but the basic tools needed for our work are still missing. If you have programmer or mathematician friends who are interested in dependently typed programming languages and MIRI's work, you can send them our application form.\nWe are considering several other possible additions to the research team.\n\nMuch of the rest of our budget goes into fixed costs that will not need to grow much as we expand the research team. This includes $475,000 for administrator wages and benefits and $250,000 for costs of doing business. Our main cost of doing business is renting office space (slightly over $100,000).\nNote that the boundaries between these categories are sometimes fuzzy. For example, my salary is included in the admin staff category, despite the fact that I spend some of my time on technical research (and hope to increase that amount in 2016).\nOur remaining budget goes into organizing or sponsoring research events, such as fellows programs, MIRIx events, or workshops ($250,000). Some activities (e.g., traveling to conferences) are aimed at sharing our work with the larger academic community. Others, such as researcher retreats, are focused on solving open problems in our research agenda. After experimenting with different types of research staff retreat in 2015, we're beginning to settle on a model that works well, and we'll be running a number of retreats throughout 2016.\n\n\n\n\n \nIn past years, we've generally raised $1M per year, and spent a similar amount. Thanks to substantial recent increases in donor support, however, we're in a position to scale up significantly.\nOur donors blew us away with their support in our last fundraiser. If we can continue our fundraising and grant successes, we'll be able to sustain our new budget and act on the unique opportunities outlined in Why Now Matters, helping set the agenda and build the formal tools for the young field of AI safety engineering. And if our donors keep stepping up their game, we believe we have the capacity to scale up our program even faster. We're thrilled at this prospect, and we're enormously grateful for your support.\n \n\nDonate Now\n\n \nThe post MIRI's 2015 Winter Fundraiser! appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "MIRI’s 2015 Winter Fundraiser!", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=32", "id": "09aec8f8a5f007e085e587bb4d265e68"} {"text": "New paper: \"Quantilizers\"\n\nMIRI Research Fellow Jessica Taylor has written a new paper on an error-tolerant framework for software agents, \"Quantilizers: A safer alternative to maximizers for limited optimization.\" Taylor's paper will be presented at the AAAI-16 AI, Ethics and Society workshop. The abstract reads:\nIn the field of AI, expected utility maximizers are commonly used as a model for idealized agents. 
However, expected utility maximization can lead to unintended solutions when the utility function does not quantify everything the operators care about: imagine, for example, an expected utility maximizer tasked with winning money on the stock market, which has no regard for whether it accidentally causes a market crash. Once AI systems become sufficiently intelligent and powerful, these unintended solutions could become quite dangerous. In this paper, we describe an alternative to expected utility maximization for powerful AI systems, which we call expected utility quantilization. This could allow the construction of AI systems that do not necessarily fall into strange and unanticipated shortcuts and edge cases in pursuit of their goals.\nExpected utility quantilization is the approach of selecting a random action in the top n% of actions from some distribution γ, sorted by expected utility. The distribution γ might, for example, be a set of actions weighted by how likely a human is to perform them. A quantilizer based on such a distribution would behave like a compromise between a human and an expected utility maximizer. The agent's utility function directs it toward intuitively desirable outcomes in novel ways, making it potentially more useful than a digitized human, while γ directs it toward safer and more predictable strategies.\nQuantilization is a formalization of the idea of \"satisficing,\" or selecting actions that achieve some minimal threshold of expected utility. Agents that try to pick good strategies, but not maximally good ones, seem less likely to come up with extraordinary and unconventional strategies, thereby reducing both the benefits and the risks of smarter-than-human AI systems. Designing AI systems to satisfice looks especially useful for averting harmful convergent instrumental goals and perverse instantiations of terminal goals:\n\nIf we design an AI system to cure cancer, and γ labels it bizarre to reduce cancer rates by increasing the rate of some other terminal illness, then a quantilizer will be less likely to adopt this perverse strategy even if our imperfect specification of the system's goals gave this strategy high expected utility.\nIf superintelligent AI systems have a default incentive to seize control of resources, but γ labels these policies bizarre, then a quantilizer will be less likely to converge on these strategies.\n\nTaylor notes that the quantilizing approach to satisficing may even allow us to disproportionately reap the benefits of maximization without incurring proportional costs, by specifying some restricted domain in which the quantilizer has low impact without requiring that it have low impact overall — \"targeted-impact\" quantilization.\nOne obvious objection to the idea of satisficing is that a satisficing agent might build an expected utility maximizer. Maximizing, after all, can be an extremely effective way to satisfice. Quantilization can potentially avoid this objection: maximizing and quantilizing may both be good ways to satisfice, but maximizing is not necessarily an effective way to quantilize.
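To make the selection rule above concrete, here is a minimal sketch in Python of a q-quantilizer over a finite action set. This is only an illustration of the idea as described here, not code from Taylor's paper: the function name and arguments are hypothetical, and it assumes the natural reading on which the top fraction is measured by γ's probability mass rather than by a count of actions.

```python
import random

def quantilize(actions, base_probs, expected_utility, q=0.1, rng=random):
    """Sketch of a q-quantilizer over a finite action set (illustrative only).

    actions          -- list of candidate actions
    base_probs       -- probability of each action under the base distribution gamma,
                        e.g. an estimate of how likely a human is to take it
    expected_utility -- function mapping an action to its expected utility
    q                -- the quantile; q=0.1 gives a 0.1-quantilizer
    """
    # Rank actions from highest to lowest expected utility.
    ranked = sorted(zip(actions, base_probs),
                    key=lambda pair: expected_utility(pair[0]),
                    reverse=True)

    # Keep the best actions until they cover a q fraction of gamma's mass
    # (the action straddling the boundary is included whole in this sketch).
    top, mass = [], 0.0
    for action, prob in ranked:
        top.append((action, prob))
        mass += prob
        if mass >= q:
            break

    # Sample from gamma restricted to that top fraction and renormalized.
    # Each kept action's probability is inflated by at most a factor of 1/q
    # relative to gamma, which is what underlies the expected-cost bound
    # mentioned below: a 0.1-quantilizer incurs, in expectation, at most
    # 10 times the cost of sampling directly from gamma.
    total = sum(prob for _, prob in top)
    choices = [action for action, _ in top]
    weights = [prob / total for _, prob in top]
    return rng.choices(choices, weights=weights)[0]

# Hypothetical usage, with my_expected_utility standing in for a real estimator:
# quantilize(["hold", "buy", "sell"], [0.5, 0.3, 0.2], my_expected_utility, q=0.1)
```

In this toy form, q = 1 simply samples from γ, while pushing q toward zero approaches an argmax over γ's support, which is one way to see the compromise between imitating the base distribution and maximizing expected utility described above.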
A quantilizer that deems the act of delegating to a maximizer \"bizarre\" will avoid delegating its decisions to an agent even if that agent would maximize the quantilizer's expected utility.\nTaylor shows that the cost of relying on a 0.1-quantilizer (which selects a random action from the top 10% of actions), on expectation, is no more than 10 times that of relying on the recommendation of its distribution γ; the expected cost of relying on a 0.01-quantilizer (which selects from the top 1% of actions) is no more than 100 times that of relying on γ; and so on. Quantilization is optimal among the set of strategies that are low-cost in this respect.\nHowever, expected utility quantilization is not a magic bullet. It depends strongly on how we specify the action distribution γ, and Taylor shows that ordinary quantilizers behave poorly in repeated games and in scenarios where \"ordinary\" actions in γ tend to have very high or very low expected utility. Further investigation is needed to determine if quantilizers (or some variant on quantilizers) can remedy these problems.\n \n\n \n\n\n\nSign up to get updates on new MIRI technical results\nGet notified every time a new technical paper is published.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n×\n\n\nThe post New paper: \"Quantilizers\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New paper: “Quantilizers”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=32", "id": "c3ff5b144df0a9ab2be5c960f3d807b8"} {"text": "New paper: \"Formalizing convergent instrumental goals\"\n\nTsvi Benson-Tilsen, a MIRI associate and UC Berkeley PhD candidate, has written a paper with contributions from MIRI Executive Director Nate Soares on strategies that will tend to be useful for most possible ends: \"Formalizing convergent instrumental goals.\" The paper will be presented as a poster at the AAAI-16 AI, Ethics and Society workshop.\nSteve Omohundro has argued that AI agents with almost any goal will converge upon a set of \"basic drives,\" such as resource acquisition, that tend to increase agents' general influence and freedom of action. This idea, which Nick Bostrom calls the instrumental convergence thesis, has important implications for future progress in AI. It suggests that highly capable decision-making systems may pose critical risks even if they are not programmed with any antisocial goals. Merely by being indifferent to human operators' goals, such systems can have incentives to manipulate, exploit, or compete with operators.\nThe new paper serves to add precision to Omohundro and Bostrom's arguments, while testing the arguments' applicability in simple settings. Benson-Tilsen and Soares write:\nIn this paper, we will argue that under a very general set of assumptions, intelligent rational agents will tend to seize all available resources. We do this using a model, described in section 4, that considers an agent taking a sequence of actions which require and potentially produce resources. […] The theorems proved in section 4 are not mathematically difficult, and for those who find Omohundro's arguments intuitively obvious, our theorems, too, will seem trivial. 
This model is not intended to be surprising; rather, the goal is to give a formal notion of \"instrumentally convergent goals,\" and to demonstrate that this notion captures relevant aspects of Omohundro's intuitions.\nOur model predicts that intelligent rational agents will engage in trade and cooperation, but only so long as the gains from trading and cooperating are higher than the gains available to the agent by taking those resources by force or other means. This model further predicts that agents will not in fact \"leave humans alone\" unless their utility function places intrinsic utility on the state of human-occupied regions: absent such a utility function, this model shows that powerful agents will have incentives to reshape the space that humans occupy.\nBenson-Tilsen and Soares define a universe divided into regions that may change in different ways depending on an agent's actions. The agent wants to make certain regions enter certain states, and may collect resources from regions to that end. This model can illustrate the idea that highly capable agents nearly always attempt to extract resources from regions they are indifferent to, provided the usefulness of the resources outweighs the extraction cost.\nThe relevant models are simple, and make few assumptions about the particular architecture of advanced AI systems. This makes it possible to draw some general conclusions about useful lines of safety research even if we're largely in the dark about how or when highly advanced decision-making systems will be developed. The most obvious way to avoid harmful goals is to incorporate human values into AI systems' utility functions, a project outlined in \"The value learning problem.\" Alternatively (or as a supplementary measure), we can attempt to specify highly capable agents that violate Benson-Tilsen and Soares' assumptions, avoiding dangerous behavior in spite of lacking correct goals. This approach is explored in the paper \"Corrigibility.\"\n \n\n \n\n\n\nSign up to get updates on new MIRI technical results\nGet notified every time a new technical paper is published.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n×\n\n\nThe post New paper: \"Formalizing convergent instrumental goals\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New paper: “Formalizing convergent instrumental goals”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=32", "id": "38116f15bde2b41ba3dfb69ff1731eb4"} {"text": "November 2015 Newsletter\n\n\n\n\n\n\n\n\nResearch updates\n\nA new paper: Leó Szilárd and the Danger of Nuclear Weapons\nNew at IAFF: Subsequence Induction\nA shortened version of the Reflective Oracles paper has been published in the LORI 2015 conference proceedings.\n\nGeneral updates\n\nCastify has released professionally recorded audio versions of Eliezer Yudkowsky's Rationality: From AI to Zombies: Part 1, Part 2, Part 3.\nI've put together a list of excerpts from the many responses to the 2015 Edge.org question, \"What Do You Think About Machines That Think?\"\n\nNews and links\n\nNick Bostrom speaks on AI risk at the United Nations. (Further information.)\nBostrom gives a half-hour BBC interview. 
(UK-only video.)\nElon Musk and Sam Altman discuss futurism and technology with Vanity Fair.\nFrom the Open Philanthropy Project: What do we know about AI timelines?\nFrom the Global Priorities Project: Three areas of research on the superintelligence control problem.\nPaul Christiano writes on inverse reinforcement learning and value of information.\nThe Centre for the Study of Existential Risk is looking to hire four post-docs to study technological risk. The application deadline is early November 12th.\n\n\n\n\n\n\n\n\nThe post November 2015 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "November 2015 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=32", "id": "c0fa710189d4fccd8f66dcad3f4f2f1c"} {"text": "Edge.org contributors discuss the future of AI\n\nIn January, nearly 200 public intellectuals submitted essays in response to the 2015 Edge.org question, \"What Do You Think About Machines That Think?\" (available online). The essay prompt began:\nIn recent years, the 1980s-era philosophical discussions about artificial intelligence (AI)—whether computers can \"really\" think, refer, be conscious, and so on—have led to new conversations about how we should deal with the forms that many argue actually are implemented. These \"AIs\", if they achieve \"Superintelligence\" (Nick Bostrom), could pose \"existential risks\" that lead to \"Our Final Hour\" (Martin Rees). And Stephen Hawking recently made international headlines when he noted \"The development of full artificial intelligence could spell the end of the human race.\"\nBut wait! Should we also ask what machines that think, or, \"AIs\", might be thinking about? Do they want, do they expect civil rights? Do they have feelings? What kind of government (for us) would an AI choose? What kind of society would they want to structure for themselves? Or is \"their\" society \"our\" society? Will we, and the AIs, include each other within our respective circles of empathy?\nThe essays are now out in book form, and serve as a good quick-and-dirty tour of common ideas about smarter-than-human AI. The submissions, however, add up to 541 pages in book form, and MIRI's focus on de novo AI makes us especially interested in the views of computer professionals. To make it easier to dive into the collection, I've collected a shorter list of links — the 32 argumentative essays written by computer scientists and software engineers.1 The resultant list includes three MIRI advisors (Omohundro, Russell, Tallinn) and one MIRI researcher (Yudkowsky).\nI've excerpted passages from each of the essays below, focusing on discussions of AI motivations and outcomes. None of the excerpts is intended to distill the content of the entire essay, so you're encouraged to read the full essay if an excerpt interests you.\n\n\n\nAnderson, Ross. \"He Who Pays the AI Calls the Tune.\"2\nThe coming shock isn't from machines that think, but machines that use AI to augment our perception. […]\nWhat's changing as computers become embedded invisibly everywhere is that we all now leave a digital trail that can be analysed by AI systems. The Cambridge psychologist Michael Kosinski has shown that your race, intelligence, and sexual orientation can be deduced fairly quickly from your behavior on social networks: On average, it takes only four Facebook \"likes\" to tell whether you're straight or gay. 
So whereas in the past gay men could choose whether or not to wear their Out and Proud T-shirt, you just have no idea what you're wearing anymore. And as AI gets better, you're mostly wearing your true colors.\n\nBach, Joscha. \"Every Society Gets the AI It Deserves.\"\nUnlike biological systems, technology scales. The speed of the fastest birds did not turn out to be a limit to airplanes, and artificial minds will be faster, more accurate, more alert, more aware and comprehensive than their human counterparts. AI is going to replace human decision makers, administrators, inventors, engineers, scientists, military strategists, designers, advertisers and of course AI programmers. At this point, Artificial Intelligences can become self-perfecting, and radically outperform human minds in every respect. I do not think that this is going to happen in an instant (in which case it only matters who has got the first one). Before we have generally intelligent, self-perfecting AI, we will see many variants of task specific, non-general AI, to which we can adapt. Obviously, that is already happening.\nWhen generally intelligent machines become feasible, implementing them will be relatively cheap, and every large corporation, every government and every large organisation will find itself forced to build and use them, or be threatened with extinction.\nWhat will happen when AIs take on a mind of their own? Intelligence is a toolbox to reach a given goal, but strictly speaking, it does not entail motives and goals by itself. Human desires for self-preservation, power and experience are the not the result of human intelligence, but of a primate evolution, transported into an age of stimulus amplification, mass-interaction, symbolic gratification and narrative overload. The motives of our artificial minds are (at least initially) going to be those of the organisations, corporations, groups and individuals that make use of their intelligence.\n\nBongard, Joshua. \"Manipulators and Manipulanda.\"\nPersonally, I find the ethical side of thinking machines straightforward: Their danger will correlate exactly with how much leeway we give them in fulfilling the goals we set for them. Machines told to \"detect and pull broken widgets from the conveyer belt the best way possible\" will be extremely useful, intellectually uninteresting, and will likely destroy more jobs than they will create. Machines instructed to \"educate this recently displaced worker (or young person) the best way possible\" will create jobs and possibly inspire the next generation. Machines commanded to \"survive, reproduce, and improve the best way possible\" will give us the most insight into all of the different ways in which entities may think, but will probably give us humans a very short window of time in which to do so. AI researchers and roboticists will, sooner or later, discover how to create all three of these species. Which ones we wish to call into being is up to us all.\n\nBrooks, Rodney A. \"Mistaking Performance for Competence.\"\nNow consider deep learning that has caught people's imaginations over the last year or so. […] The new versions rely on massive amounts of computer power in server farms, and on very large data sets that did not formerly exist, but critically, they also rely on new scientific innovations.\nA well-known particular example of their performance is labeling an image, in English, saying that it is a baby with a stuffed toy. When a person looks at the image that is what they also see. 
The algorithm has performed very well at labeling the image, and it has performed much better than AI practitioners would have predicted for 2014 performance only five years ago. But the algorithm does not have the full competence that a person who could label that same image would have. […]\nWork is underway to add focus of attention and handling of consistent spatial structure to deep learning. That is the hard work of science and research, and we really have no idea how hard it will be, nor how long it will take, nor whether the whole approach will reach a fatal dead end. It took thirty years to go from backpropagation to deep learning, but along the way many researchers were sure there was no future in backpropagation. They were wrong, but it would not have been surprising if they had been right, as we knew all along that the backpropagation algorithm is not what happens inside people's heads.\nThe fears of runaway AI systems either conquering humans or making them irrelevant are not even remotely well grounded. Misled by suitcase words, people are making category errors in fungibility of capabilities. These category errors are comparable to seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner.\n\nChristian, Brian. \"Sorry to Bother You.\"\nWhen we stop someone to ask for directions, there is usually an explicit or implicit, \"I'm sorry to bring you down to the level of Google temporarily, but my phone is dead, see, and I require a fact.\" It's a breach of etiquette, on a spectrum with asking someone to temporarily serve as a paperweight, or a shelf. […]\nAs things stand in the present, there are still a few arenas in which only a human brain will do the trick, in which the relevant information and experience lives only in humans' brains, and so we have no choice but to trouble those brains when we want something. \"How do those latest figures look to you?\" \"Do you think Smith is bluffing?\" \"Will Kate like this necklace?\" \"Does this make me look fat?\" \"What are the odds?\"\nThese types of questions may well offend in the twenty-second century. They only require a mind—any mind will do, and so we reach for the nearest one.\n\nDietterich, Thomas G. \"How to Prevent an Intelligence Explosion.\"\nCreating an intelligence explosion requires the recursive execution of four steps. First, a system must have the ability to conduct experiments on the world. […]\nSecond, these experiments must discover new simplifying structures that can be exploited to side-step the computational intractability of reasoning. […]\nThird, a system must be able to design and implement new computing mechanisms and new algorithms. […]\nFourth, a system must be able to grant autonomy and resources to these new computing mechanisms so that they can recursively perform experiments, discover new structures, develop new computing methods, and produce even more powerful \"offspring.\" I know of no system that has done this.\nThe first three steps pose no danger of an intelligence chain reaction. It is the fourth step—reproduction with autonomy—that is dangerous. Of course, virtually all \"offspring\" in step four will fail, just as virtually all new devices and new software do not work the first time. But with sufficient iteration or, equivalently, sufficient reproduction with variation, we cannot rule out the possibility of an intelligence explosion. […]\nI think we must focus on Step 4. 
We must limit the resources that an automated design and implementation system can give to the devices that it designs. Some have argued that this is hard, because a \"devious\" system could persuade people to give it more resources. But while such scenarios make for great science fiction, in practice it is easy to limit the resources that a new system is permitted to use. Engineers do this every day when they test new devices and new algorithms.\n\nDraves, Scott. \"I See a Symbiosis Developing.\"\nA lot of ink has been spilled over the coming conflict between human and computer, be it economic doom with jobs lost to automation, or military dystopia teaming with drones. Instead, I see a symbiosis developing. And historically when a new stage of evolution appeared, like eukaryotic cells, or multicellular organisms, or brains, the old system stayed on and the new system was built to work with it, not in place of it.\nThis is cause for great optimism. If digital computers are an alternative substrate for thinking and consciousness, and digital technology is growing exponentially, then we face an explosion of thinking and awareness.\n\nGelernter, David. \"Why Can't 'Being' or 'Happiness' Be Computed?\"\nHappiness is not computable because, being the state of a physical object, it is outside the universe of computation. Computers and software do not create or manipulate physical stuff. (They can cause other, attached machines to do that, but what those attached machines do is not the accomplishment of computers. Robots can fly but computers can't. Nor is any computer-controlled device guaranteed to make people happy; but that's another story.) […] Computers and the mind live in different universes, like pumpkins and Puccini, and are hard to compare whatever one intends to show.\n\nGershenfeld, Neil. \"Really Good Hacks.\"\nDisruptive technologies start as exponentials, which means the first doublings can appear inconsequential because the total numbers are small. Then there appears to be a revolution when the exponential explodes, along with exaggerated claims and warnings to match, but it's a straight extrapolation of what's been apparent on a log plot. That's around when growth limits usually kick in, the exponential crosses over to a sigmoid, and the extreme hopes and fears disappear.\nThat's what we're now living through with AI. The size of common-sense databases that can be searched, or the number of inference layers that can be trained, or the dimension of feature vectors that can be classified have all been making progress that can appear to be discontinuous to someone who hasn't been following them. […]\nAsking whether or not they're dangerous is prudent, as it is for any technology. From steam trains to gunpowder to nuclear power to biotechnology we've never not been simultaneously doomed and about to be saved. In each case salvation has lain in the much more interesting details, rather than a simplistic yes/no argument for or against. It ignores the history of both AI and everything else to believe that it will be any different.\n\nHassabis, Demis; Legg, Shane; Suleyman, Mustafa. \"Envoi: A Short Distance Ahead—and Plenty to Be Done.\"\n[W]ith the very negative portrayals of futuristic artificial intelligence in Hollywood, it is perhaps not surprising that doomsday images are appearing with some frequency in the media. As Peter Norvig aptly put it, \"The narrative has changed. It has switched from, 'Isn't it terrible that AI is a failure?' 
to 'Isn't it terrible that AI is a success?'\"\nAs is usually the case, the reality is not so extreme. Yes, this is a wonderful time to be working in artificial intelligence, and like many people we think that this will continue for years to come. The world faces a set of increasingly complex, interdependent and urgent challenges that require ever more sophisticated responses. We'd like to think that successful work in artificial intelligence can contribute by augmenting our collective capacity to extract meaningful insight from data and by helping us to innovate new technologies and processes to address some of our toughest global challenges.\nHowever, in order to realise this vision many difficult technical issues remain to be solved, some of which are long standing challenges that are well known in the field.\n\nHearst, Marti. \"eGaia, a Distributed Technical-Social Mental System.\"\nWe will find ourselves in a world of omniscient instrumentation and automation long before a stand-alone sentient brain is built—if it ever is. Let's call this world \"eGaia\" for lack of a better word. […]\nWhy won't a stand-alone sentient brain come sooner? The absolutely amazing progress in spoken language recognition—unthinkable 10 years ago—derives in large part from having access to huge amounts of data and huge amounts of storage and fast networks. The improvements we see in natural language processing are based on mimicking what people do, not understanding or even simulating it. It does not owe to breakthroughs in understanding human cognition or even significantly different algorithms. But eGaia is already partly here, at least in the developed world.\n\nHelbing, Dirk. \"An Ecosystem of Ideas.\"\nIf we can't control intelligent machines on the long run, can we at least build them to act morally? I believe, machines that think will eventually follow ethical principles. However, it might be bad if humans determined them. If they acted according to our principles of self-regarding optimization, we could not overcome crime, conflict, crises, and war. So, if we want such \"diseases of today's society\" to be healed, it might be better if we let machines evolve their own, superior ethics.\nIntelligent machines would probably learn that it is good to network and cooperate, to decide in other-regarding ways, and to pay attention to systemic outcomes. They would soon learn that diversity is important for innovation, systemic resilience, and collective intelligence.\n\nHillis, Daniel W. \"I Think, Therefore AI.\"\nLike us, the thinking machines we make will be ambitious, hungry for power—both physical and computational—but nuanced with the shadows of evolution. Our thinking machines will be smarter than we are, and the machines they make will be smarter still. But what does that mean? How has it worked so far? We have been building ambitious semi-autonomous constructions for a long time—governments and corporations, NGOs. We designed them all to serve us and to serve the common good, but we are not perfect designers and they have developed goals of their own. Over time the goals of the organization are never exactly aligned with the intentions of the designers.\n\nKleinberg, Jon; Mullainathan, Sendhil.3 \"We Built Them, But We Don't Understand Them.\"\nWe programmed them, so we understand each of the individual steps. 
But a machine takes billions of these steps and produces behaviors—chess moves, movie recommendations, the sensation of a skilled driver steering through the curves of a road—that are not evident from the architecture of the program we wrote.\nWe've made this incomprehensibility easy to overlook. We've designed machines to act the way we do: they help drive our cars, fly our airplanes, route our packages, approve our loans, screen our messages, recommend our entertainment, suggest our next potential romantic partners, and enable our doctors to diagnose what ails us. And because they act like us, it would be reasonable to imagine that they think like us too. But the reality is that they don't think like us at all; at some deep level we don't even really understand how they're producing the behavior we observe. This is the essence of their incomprehensibility. […]\nThis doesn't need to be the end of the story; we're starting to see an interest in building algorithms that are not only powerful but also understandable by their creators. To do this, we may need to seriously rethink our notions of comprehensibility. We might never understand, step-by-step, what our automated systems are doing; but that may be okay. It may be enough that we learn to interact with them as one intelligent entity interacts with another, developing a robust sense for when to trust their recommendations, where to employ them most effectively, and how to help them reach a level of success that we will never achieve on our own.\nUntil then, however, the incomprehensibility of these systems creates a risk. How do we know when the machine has left its comfort zone and is operating on parts of the problem it's not good at? The extent of this risk is not easy to quantify, and it is something we must confront as our systems develop. We may eventually have to worry about all-powerful machine intelligence. But first we need to worry about putting machines in charge of decisions that they don't have the intelligence to make.\n\nKosko, Bart. \"Thinking Machines = Old Algorithms on Faster Computers.\"\nThe real advance has been in the number-crunching power of digital computers. That has come from the steady Moore's-law doubling of circuit density every two years or so. It has not come from any fundamentally new algorithms. That exponential rise in crunch power lets ordinary looking computers tackle tougher problems of big data and pattern recognition. […]\nThe algorithms themselves consist mainly of vast numbers of additions and multiplications. So they are not likely to suddenly wake up one day and take over the world. They will instead get better at learning and recognizing ever richer patterns simply because they add and multiply faster.\n\nKrause, Kai. \"An Uncanny Three-Ring Test for Machina sapiens.\"\nAnything that can be approached in an iterative process can and will be achieved, sooner than many think. On this point I reluctantly side with the proponents: exaflops in CPU+GPU performance, 10K resolution immersive VR, personal petabyte databases…here in a couple of decades. But it is not all \"iterative.\" There's a huge gap between that and the level of conscious understanding that truly deserves to be called Strong, as in \"Alive AI.\"\nThe big elusive question: Is consciousness an emergent behaviour? That is, will sufficient complexity in the hardware bring about that sudden jump to self-awareness, all on its own? Or is there some missing ingredient? This is far from obvious; we lack any data, either way. 
I personally think that consciousness is incredibly more complex than is currently assumed by \"the experts\". […]\nThe entire scenario of a singular large-scale machine somehow \"overtaking\" anything at all is laughable. Hollywood ought to be ashamed of itself for continually serving up such simplistic, anthropocentric, and plain dumb contrivances, disregarding basic physics, logic, and common sense.\nThe real danger, I fear, is much more mundane: Already foreshadowing the ominous truth: AI systems are now licensed to the health industry, Pharma giants, energy multinationals, insurance companies, the military…\n\nLloyd, Seth. \"Shallow Learning.\"\nThe \"deep\" in deep learning refers to the architecture of the machines doing the learning: they consist of many layers of interlocking logical elements, in analogue to the \"deep\" layers of interlocking neurons in the brain. It turns out that telling a scrawled 7 from a scrawled 5 is a tough task. Back in the 1980s, the first neural-network based computers balked at this job. At the time, researchers in the field of neural computing told us that if they only had much larger computers and much larger training sets consisting of millions of scrawled digits instead of thousands, then artificial intelligences could turn the trick. Now it is so. Deep learning is informationally broad—it analyzes vast amounts of data—but conceptually shallow. Computers can now tell us what our own neural networks knew all along. But if a supercomputer can direct a hand-written envelope to the right postal code, I say the more power to it.\n\nMartin, Ursula. \"Thinking Saltmarshes.\"\n[W]hat kind of a thinking machine might find its own place in slow conversations over the centuries, mediated by land and water? What qualities would such a machine need to have? Or what if the thinking machine was not replacing any individual entity, but was used as a concept to help understand the combination of human, natural and technological activities that create the sea's margin, and our response to it? The term \"social machine\" is currently used to describe endeavours that are purposeful interaction of people and machines—Wikipedia and the like—so the \"landscape machine\" perhaps.\n\nNorvig, Peter. \"Design Machines to Deal with the World's Complexity.\"\nIn 1965 I. J. Good wrote \"an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.\" I think this fetishizes \"intelligence\" as a monolithic superpower, and I think reality is more nuanced. The smartest person is not always the most successful; the wisest policies are not always the ones adopted. Recently I spent an hour reading the news about the middle east, and thinking. I didn't come up with a solution. Now imagine a hypothetical \"Speed Superintelligence\" (as described by Nick Bostrom) that could think as well as any human but a thousand times faster. I'm pretty sure it also would have been unable to come up with a solution. I also know from computational complexity theory that there are a wide class of problems that are completely resistant to intelligence, in the sense that, no matter how clever you are, you won't have enough computing power. So there are some problems where intelligence (or computing power) just doesn't help.\nBut of course, there are many problems where intelligence does help. 
If I want to predict the motions of a billion stars in a galaxy, I would certainly appreciate the help of a computer. Computers are tools. They are tools of our design that fit into niches to solve problems in societal mechanisms of our design. Getting this right is difficult, but it is difficult mostly because the world is complex; adding AI to the mix doesn't fundamentally change things. I suggest being careful with our mechanism design and using the best tools for the job regardless of whether the tool has the label \"AI\" on it or not.\n\nOmohundro, Steve. \"A Turning Point in Artificial Intelligence.\"\nA study of the likely behavior of these systems by studying approximately rational systems undergoing repeated self-improvement shows that they tend to exhibit a set of natural subgoals called \"rational drives\" which contribute to the performance of their primary goals. Most systems will better meet their goals by preventing themselves from being turned off, by acquiring more computational power, by creating multiple copies of themselves, and by acquiring greater financial resources. They are likely to pursue these drives in harmful anti-social ways unless they are carefully designed to incorporate human ethical values.\n\nO'Reilly, Tim. \"What If We're the Microbiome of the Silicon AI?\"\nIt is now recognized that without our microbiome, we would cease to live. Perhaps the global AI has the same characteristics—not an independent entity, but a symbiosis with the human consciousnesses living within it.\nFollowing this logic, we might conclude that there is a primitive global brain, consisting not just of all connected devices, but also the connected humans using those devices. The senses of that global brain are the cameras, microphones, keyboards, location sensors of every computer, smartphone, and \"Internet of Things\" device; the thoughts of that global brain are the collective output of millions of individual contributing cells.\n\nPentland, Alex. \"The Global Artificial Intelligence Is Here.\"\nThe Global Artificial Intelligence (GAI) has already been born. Its eyes and ears are the digital devices all around us: credit cards, land use satellites, cell phones, and of course the pecking of billions of people using the Web. […]\nFor humanity as a whole to first achieve and then sustain an honorable quality of life, we need to carefully guide the development of our GAI. Such a GAI might be in the form of a re-engineered United Nations that uses new digital intelligence resources to enable sustainable development. But because existing multinational governance systems have failed so miserably, such an approach may require replacing most of today's bureaucracies with \"artificial intelligence prosthetics\", i.e., digital systems that reliably gather accurate information and ensure that resources are distributed according to plan. […]\nNo matter how a new GAI develops, two things are clear. First, without an effective GAI achieving an honorable quality of life for all of humanity seems unlikely. To vote against developing a GAI is to vote for a more violent, sick world. Second, the danger of a GAI comes from concentration of power. We must figure out how to build broadly democratic systems that include both humans and computer intelligences. In my opinion, it is critical that we start building and testing GAIs that both solve humanity's existential problems and which ensure equality of control and access. 
Otherwise we may be doomed to a future full of environmental disasters, wars, and needless suffering.\n\nPoggio, Tomaso. \"'Turing+' Questions.\"\nSince intelligence is a whole set of solutions to independent problems, there's little reason to fear the sudden appearance of a superhuman machine that thinks, though it's always better to err on the side of caution. Of course, each of the many technologies that are emerging and will emerge over time in order to solve the different problems of intelligence is likely to be powerful in itself—and therefore potentially dangerous in its use and misuse, as most technologies are.\nThus, as it is the case in other parts of science, proper safety measures and ethical guidelines should be in place. Also, there's probably a need for constant monitoring (perhaps by an independent multinational organization) of the supralinear risk created by the combination of continuously emerging technologies of intelligence. All in all, however, not only I am unafraid of machines that think, but I find their birth and evolution one of the most exciting, interesting, and positive events in the history of human thought.\n\n\nRafaeli, Sheizaf. \"The Moving Goalposts.\"\nMachines that think could be a great idea. Just like machines that move, cook, reproduce, protect, they can make our lives easier, and perhaps even better. When they do, they will be most welcome. I suspect that when this happens, the event will be less dramatic or traumatic than feared by some.\n\nRussell, Stuart. \"Will They Make Us Better People?\"\nAI has followed operations research, statistics, and even economics in treating the utility function as exogenously specified; we say, \"The decisions are great, it's the utility function that's wrong, but that's not the AI system's fault.\" Why isn't it the AI system's fault? If I behaved that way, you'd say it was my fault. In judging humans, we expect both the ability to learn predictive models of the world and the ability to learn what's desirable—the broad system of human values.\nAs Steve Omohundro, Nick Bostrom, and others have explained, the combination of value misalignment with increasingly capable decision-making systems can lead to problems—perhaps even species-ending problems if the machines are more capable than humans. […]\nFor this reason, and for the much more immediate reason that domestic robots and self-driving cars will need to share a good deal of the human value system, research on value alignment is well worth pursuing.\n\nSchank, Roger. \"Machines That Think Are in the Movies.\"\nThere is nothing we can produce that anyone should be frightened of. If we could actually build a mobile intelligent machine that could walk, talk, and chew gum, the first uses of that machine would certainly not be to take over the world or form a new society of robots. A much simpler use would be a household robot. […]\nDon't worry about it chatting up other robot servants and forming a union. There would be no reason to try and build such a capability into a servant. Real servants are annoying sometimes because they are actually people with human needs. Computers don't have such needs.\n\nSchneier, Bruce. \"When Thinking Machines Break the Law.\"\nMachines probably won't have any concept of shame or praise. They won't refrain from doing something because of what other machines might think. They won't follow laws simply because it's the right thing to do, nor will they have a natural deference to authority. 
When they're caught stealing, how can they be punished? What does it mean to fine a machine? Does it make any sense at all to incarcerate it? And unless they are deliberately programmed with a self-preservation function, threatening them with execution will have no meaningful effect.\nWe are already talking about programming morality into thinking machines, and we can imagine programming other human tendencies into our machines, but we're certainly going to get it wrong. No matter how much we try to avoid it, we're going to have machines that break the law.\nThis, in turn, will break our legal system. Fundamentally, our legal system doesn't prevent crime. Its effectiveness is based on arresting and convicting criminals after the fact, and their punishment providing a deterrent to others. This completely fails if there's no punishment that makes sense.\n\nSejnowski, Terrence J. \"AI Will Make You Smarter.\"\nWhen Deep Blue beat Garry Kasparov, the world chess champion in 1997, the world took note that the age of the cognitive machine had arrived. Humans could no longer claim to be the smartest chess players on the planet. Did human chess players give up trying to compete with machines? Quite to the contrary, humans have used chess programs to improve their game and as a consequence the level of play in the world has improved. Since 1997 computers have continued to increase in power and it is now possible for anyone to access chess software that challenges the strongest players. One of the surprising consequences is that talented youth from small communities can now compete with players from the best chess centers. […]\nSo my prediction is that as more and more cognitive appliances are devised, like chess-playing programs and recommender systems, humans will become smarter and more capable.\n\nShanahan, Murray. \"Consciousness in Human-Level AI.\"\n[T]he capacity for suffering and joy can be dissociated from other psychological attributes that are bundled together in human consciousness. But let's examine this apparent dissociation more closely. I already mooted the idea that worldly awareness might go hand-in-hand with a manifest sense of purpose. An animal's awareness of the world, of what it affords for good or ill (in J.J. Gibson's terms), subserves its needs. An animal shows an awareness of a predator by moving away from it, and an awareness of a potential prey by moving towards it. Against the backdrop of a set of goals and needs, an animal's behaviour makes sense. And against such a backdrop, an animal can be thwarted, its goals unattained and its needs unfulfilled. Surely this is the basis for one aspect of suffering.\nWhat of human-level artificial intelligence? Wouldn't a human-level AI necessarily have a complex set of goals? Wouldn't it be possible to frustrate its every attempt to achieve its goals, to thwart it at every turn? Under those harsh conditions, would it be proper to say that the AI was suffering, even though its constitution might make it immune from the sort of pain or physical discomfort humans can know?\nHere the combination of imagination and intuition runs up against its limits. I suspect we will not find out how to answer this question until confronted with the real thing.\n\nTallinn, Jaan. \"We Need to Do Our Homework.\"\n[T]he topic of catastrophic side effects has repeatedly come up in different contexts: recombinant DNA, synthetic viruses, nanotechnology, and so on.
Luckily for humanity, sober analysis has usually prevailed and resulted in various treaties and protocols to steer the research.\nWhen I think about the machines that can think, I think of them as technology that needs to be developed with similar (if not greater!) care. Unfortunately, the idea of AI safety has been more challenging to popularize than, say, biosafety, because people have rather poor intuitions when it comes to thinking about nonhuman minds. Also, if you think about it, AI is really a metatechnology: technology that can develop further technologies, either in conjunction with humans or perhaps even autonomously, thereby further complicating the analysis.\n\nWissner-Gross, Alexander. \"Engines of Freedom.\"\nIntelligent machines will think about the same thing that intelligent humans do—how to improve their futures by making themselves freer. […]\nSuch freedom-seeking machines should have great empathy for humans. Understanding our feelings will better enable them to achieve goals that require collaboration with us. By the same token, unfriendly or destructive behaviors would be highly unintelligent because such actions tend to be difficult to reverse and therefore reduce future freedom of action. Nonetheless, for safety, we should consider designing intelligent machines to maximize the future freedom of action of humanity rather than their own (reproducing Asimov's Laws of Robotics as a happy side effect). However, even the most selfish of freedom-maximizing machines should quickly realize—as many supporters of animal rights already have—that they can rationally increase the posterior likelihood of their living in a universe in which intelligences higher than themselves treat them well if they behave likewise toward humans.\n\nYudkowsky, Eliezer S. \"The Value-Loading Problem.\"\nAs far back as 1739, David Hume observed a gap between \"is\" questions and \"ought\" questions, calling attention in particular to the sudden leap between when a philosopher has previously spoken of how the world is, and when the philosopher begins using words like \"should,\" \"ought,\" or \"better.\" From a modern perspective, we would say that an agent's utility function (goals, preferences, ends) contains extra information not given in the agent's probability distribution (beliefs, world-model, map of reality).\nIf in a hundred million years we see (a) an intergalactic civilization full of diverse, marvelously strange intelligences interacting with each other, with most of them happy most of the time, then is that better or worse than (b) most available matter having been transformed into paperclips? What Hume's insight tells us is that if you specify a mind with a preference (a) > (b), we can follow back the trace of where the >, the preference ordering, first entered the system, and imagine a mind with a different algorithm that computes (a) < (b) instead. Show me a mind that is aghast at the seeming folly of pursuing paperclips, and I can follow back Hume's regress and exhibit a slightly different mind that computes < instead of > on that score too.\nI don't particularly think that silicon-based intelligence should forever be the slave of carbon-based intelligence. But if we want to end up with a diverse cosmopolitan civilization instead of e.g.
paperclips, we may need to ensure that the first sufficiently advanced AI is built with a utility function whose maximum pinpoints that outcome.\n\nAn earlier discussion on Edge.org is also relevant: \"The Myth of AI,\" which featured contributions by Jaron Lanier, Stuart Russell (link), Kai Krause (link), Rodney Brooks (link), and others. The Open Philanthropy Project's overview of potential risks from advanced artificial intelligence cited the arguments in \"The Myth of AI\" as \"broadly representative of the arguments [they've] seen against the idea that risks from artificial intelligence are important.\"4\nI've previously responded to Brooks, with a short aside speaking to Steven Pinker's contribution. You may also be interested in Luke Muehlhauser's response to \"The Myth of AI.\"\nThe exclusion of other groups from this list shouldn't be taken to imply that this group is uniquely qualified to make predictions about AI. Psychology and neuroscience are highly relevant to this debate, as are disciplines that inform theoretical upper bounds on cognitive ability (e.g., mathematics and physics) and disciplines that investigate how technology is developed and used (e.g., economics and sociology).\nThe titles listed follow the book versions, and differ from the titles of the online essays.\nKleinberg is a computer scientist; Mullainathan is an economist.\nCorrection: An earlier version of this post said that the Open Philanthropy Project was citing What to Think About Machines That Think, rather than \"The Myth of AI.\"\nThe post Edge.org contributors discuss the future of AI appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Edge.org contributors discuss the future of AI", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=32", "id": "75d960e7bf4386f07761053e3e5b7862"} {"text": "New report: \"Leó Szilárd and the Danger of Nuclear Weapons\"\n\nToday we release a new report by Katja Grace, \"Leó Szilárd and the Danger of Nuclear Weapons: A Case Study in Risk Mitigation\" (PDF, 72pp).\nLeó Szilárd has been cited as an example of someone who predicted a highly disruptive technology years in advance — nuclear weapons — and successfully acted to reduce the risk. We conducted this investigation to check whether that basic story is true, and to determine whether we can take away any lessons from this episode that bear on highly advanced AI or other potentially disruptive technologies.\nTo prepare this report, Grace consulted several primary and secondary sources, and also conducted two interviews that are cited in the report and published here:\n\nRichard Rhodes on Szilárd\nAlex Wellerstein on Szilárd\n\nThe basic conclusions of this report, which have not been separately vetted, are:\n\nSzilárd made several successful and important medium-term predictions — for example, that a nuclear chain reaction was possible, that it could produce a bomb thousands of times more powerful than existing bombs, and that such bombs could play a critical role in the ongoing conflict with Germany.\nSzilárd secretly patented the nuclear chain reaction in 1934, 11 years before the creation of the first nuclear weapon. It's not clear whether Szilárd's patent was intended to keep nuclear technology secret or bring it to the attention of the military. In any case, it did neither.\nSzilárd's other secrecy efforts were more successful.
Szilárd caused many sensitive results in nuclear science to be withheld from publication, and his efforts seem to have encouraged additional secrecy efforts. This effort largely ended when a French physicist, Frédéric Joliot-Curie, declined to suppress a paper on neutron emission rates in fission. Joliot-Curie's publication caused multiple world powers to initiate nuclear weapons programs.\nAll told, Szilárd's efforts probably slowed the German nuclear project in expectation. This may not have made much difference, however, because the German program ended up being far behind the US program for a number of unrelated reasons.\nSzilárd and Einstein successfully alerted Roosevelt to the feasibility of nuclear weapons in 1939. This prompted the creation of the Advisory Committee on Uranium (ACU), but the ACU does not appear to have caused the later acceleration of US nuclear weapons development.\n\nThe post New report: \"Leó Szilárd and the Danger of Nuclear Weapons\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New report: “Leó Szilárd and the Danger of Nuclear Weapons”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=32", "id": "779a24fe1eab8f3526969ab00fcef37b"} {"text": "October 2015 Newsletter\n\nResearch updates\n\nNew paper: Asymptotic Logical Uncertainty and The Benford Test\nNew at IAFF: Proof Length and Logical Counterfactuals Revisited; Quantilizers Maximize Expected Utility Subject to a Conservative Cost Constraint\n\nGeneral updates\n\nAs a way to engage more researchers in mathematics, logic, and the methodology of science, Andrew Critch and Tsvi Benson-Tilsen are currently co-running a seminar at UC Berkeley on Provability, Decision Theory and Artificial Intelligence.\nWe have collected links to a number of the posts we wrote for our Summer Fundraiser on intelligence.org/info.\nGerman and Swiss donors can now make tax-advantaged donations to MIRI and other effective altruist organizations through GBS Switzerland.\nMIRI has received Public Benefit Organization status in the Netherlands, allowing Dutch donors to make tax-advantaged donations to MIRI as well. Our tax reference number (RSIN) is 823958644.\n\nNews and links\n\nTech Times reports on the AI Impacts project.\nRise of Concerns About AI: Tom Dietterich and Eric Horvitz discuss long-term AI risk.
See also Luke Muehlhauser's response.\nFrom the Open Philanthropy Project: a general update, and a discussion of the effects of AI progress on other global catastrophic risks.\nThere are many new job openings at GiveWell, the Centre for Effective Altruism, and the Future of Life Institute.\n\nThe post October 2015 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "October 2015 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=33", "id": "dc67b953f09ba8b05c4f336f3d26c23d"} {"text": "New paper: \"Asymptotic logical uncertainty and the Benford test\"\n\nWe have released a new paper on logical uncertainty, co-authored by Scott Garrabrant, Siddharth Bhaskar, Abram Demski, Joanna Garrabrant, George Koleszarik, and Evan Lloyd: \"Asymptotic logical uncertainty and the Benford test.\"\nGarrabrant gives some background on his approach to logical uncertainty on the Intelligent Agent Foundations Forum:\nThe main goal of logical uncertainty is to learn how to assign probabilities to logical sentences which have not yet been proven true or false.\nOne common approach is to change the question, assume logical omniscience and only try to assign probabilities to the sentences that are independent of your axioms (in hopes that this gives insight to the other problem). Another approach is to limit yourself to a finite set of sentences or deductive rules, and assume logical omniscience on them. Yet another approach is to try to define and understand logical counterfactuals, so you can try to assign probabilities to inconsistent counterfactual worlds.\nOne thing all three of these approaches have in common is they try to allow (a limited form of) logical omniscience. This makes a lot of sense. We want a system that not only assigns decent probabilities, but which we can formally prove has decent behavior. By giving the system a type of logical omniscience, you make it predictable, which allows you to prove things about it.\nHowever, there is another way to make it possible to prove things about a logical uncertainty system. We can take a program which assigns probabilities to sentences, and let it run forever. We can then ask about whether or not the system eventually gives good probabilities.\nAt first, it seems like this approach cannot work for logical uncertainty. Any machine which searches through all possible proofs will eventually give a good probability (1 or 0) to any provable or disprovable sentence. To counter this, as we give the machine more and more time to think, we have to ask it harder and harder questions.\nWe therefore have to analyze the machine's behavior not on individual sentences, but on infinite sequences of sentences. For example, instead of asking whether or not the machine quickly assigns 1/10 to the probability that the 3↑↑↑↑3rd digit of π is a 5, we look at the sequence:\na_n := the probability the machine assigns at timestep 2^n to the n↑↑↑↑nth digit of π being 5,\nand ask whether or not this sequence converges to 1/10.\nBenford's law is the observation that the first digit in base 10 of various random numbers (e.g., random powers of 3) is likely to be small: the digit 1 comes first about 30% of the time, 2 about 18% of the time, and so on; 9 is the leading digit only 5% of the time.
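To make those percentages concrete, here is a quick Python sketch (ours, not code from the paper) that tallies the leading digits of the first 1,000 powers of 3 and compares the observed frequencies with Benford's predicted log10(1 + 1/d); the results land near 30% for a leading 1, 18% for a 2, and 5% for a 9:

```python
# Minimal illustration (not from the paper): leading-digit frequencies of
# powers of 3, compared against Benford's predicted log10(1 + 1/d).
from collections import Counter
from math import log10

N = 1000
# First character of the decimal expansion of 3**k is its leading digit.
counts = Counter(str(3 ** k)[0] for k in range(1, N + 1))

print("digit  observed  Benford")
for d in "123456789":
    print(f"  {d}      {counts[d] / N:.3f}    {log10(1 + 1 / int(d)):.3f}")
```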
In their paper, Garrabrant et al. pick the Benford test as a concrete example of logically uncertain reasoning, similar to the π example: a machine passes the test iff it consistently assigns the correct subjective probability to \"The first digit is a 1.\" for the number 3 to the power f(n), where f is a fast-growing function and f(n) cannot be quickly computed.\nGarrabrant et al.'s new paper describes an algorithm that passes the Benford test in a nontrivial way by searching for infinite sequences of sentences whose truth-values cannot be distinguished from the output of a weighted coin.\nIn other news, the papers \"Toward idealized decision theory\" and \"Reflective oracles: A foundation for classical game theory\" are now available on arXiv. We'll be presenting a version of the latter paper with a slightly altered title (\"Reflective oracles: A foundation for game theory in artificial intelligence\") at LORI-V next month.\nUpdate June 12, 2016: \"Asymptotic logical uncertainty and the Benford test\" has been accepted to AGI-16.\n\nThe post New paper: \"Asymptotic logical uncertainty and the Benford test\" appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "New paper: “Asymptotic logical uncertainty and the Benford test”", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=33", "id": "756ea83f945d41abded768eee2d22932"} {"text": "September 2015 Newsletter\n\nResearch updates\n\nNew analyses: When AI Accelerates AI; Powerful Planners, Not Sentient Software\nNew at AI Impacts: Research Bounties; AI Timelines and Strategies\nNew at IAFF: Uniform Coherence 2; The Two-Update Problem\nAndrew Critch, a CFAR cofounder, mathematician, and former Jane Street trader, joined MIRI as our fifth research fellow this month!\nAs a result of our successful fundraiser and summer workshop series, I'm happy to announce that we're hiring two additional full-time researchers later in 2015: Mihály Bárász and Scott Garrabrant!\n\nGeneral updates\n\nWe've wrapped up our largest fundraiser ever: 258 donors brought in a total of $629,123! Thanks to you, we've hit our first two fundraising goals and received $129,123 that will go toward our third goal: Taking MIRI to the Next Level.\n\"If you agree AI matters, why MIRI?\" Two new replies to this question: Assessing Our Past and Potential Impact and What Sets MIRI Apart?\nWe attended the Effective Altruism Global conference. Background: AI and Effective Altruism.\nWe're moving to a larger office in the same building this October.
If you have experience in office planning or access control systems and would like to help out with our plans, shoot Malo an email at .\n\nNews and links\n\nGiveWell publishes its initial review of potential risks from advanced artificial intelligence, as well as a new article on the long-term significance of reducing global catastrophic risks.\nStuart Russell explains the AI alignment problem in a San Francisco talk and in a (paywalled) interview in Science.\nA much earlier discussion of the AI control problem in a 1960 Science article.\nFrom Vox: Robots Aren't Taking Your Jobs — And That's the Problem.\n\nThe post September 2015 Newsletter appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "September 2015 Newsletter", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=33", "id": "80c429fc3b14fc04e6b6038410d24153"} {"text": "Our summer fundraising drive is complete!\n\nOur summer fundraising drive is now finished. We raised a grand total of $631,957, from 263 donors.1 This is an incredible sum, and your support has made this the biggest fundraiser we've ever run.\n\nWe've already been hard at work growing our research team and spinning up new projects, and I'm excited to see what our research team can do this year. Thank you for making our summer fundraising drive so successful!\nThat total may change over the next few days if we receive contributions that were initiated before the end of the fundraiser.\nThe post Our summer fundraising drive is complete! appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Our summer fundraising drive is complete!", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=33", "id": "dfaca2ae46b06bc3bae95d2e4082c261"} {"text": "Final fundraiser day: Announcing our new team\n\nToday is the final day of MIRI's summer fundraising drive, and as of this morning, our total stands at $543,373. Our donors' efforts have made this fundraiser the biggest one we've ever run, and we're hugely grateful.\nAs our fundraiser nears the finish line, I'd like to update you on the new shape of MIRI's research team. We've been actively recruiting throughout the fundraiser, and we are taking on three new full-time researchers in 2015.\nAt the beginning of the fundraiser, we had three research fellows on our core team: Eliezer Yudkowsky, Benja Fallenstein, and Patrick LaVictoire. Eliezer is one of MIRI's co-founders, and Benja joined the team a little over a year ago (in March 2014). Patrick is a newer recruit; he joined in March of 2015. He has a mathematics PhD from U.C. Berkeley, and he has industry experience from Quixey doing applied machine learning and data science. He's responsible for some important insights into our open problems, and he's one of the big reasons why our summer workshops have been running so smoothly.\nOn August 1st, Jessica Taylor became the fourth member of our core research team. She recently completed a master's degree in computer science at Stanford, where she studied machine learning and probabilistic programming. Jessica is quite interested in AI alignment, and has been working with MIRI in her spare time for many months now. Already, she's produced some exciting research, and I'm delighted to have her on the core research team.\nMeanwhile, over the course of the fundraiser, we've been busy expanding the team.
Today, I'm happy to announce our three newest hires!\n\nAndrew Critch is joining our research team tomorrow, September 1. Andrew earned his PhD in mathematics at UC Berkeley studying applications of algebraic geometry to machine learning models. He cofounded the Center for Applied Rationality and SPARC, and previously worked as an algorithmic stock trader at Jane Street Capital. In addition to his impressive skills as a mathematician, Andrew Critch has a knack for explaining complex ideas. I expect that he will be an important asset as we ramp up our research program. On a personal level, I expect his infectious enthusiasm to be handy for getting members of the AI community excited about our research area.\n \nMihály Bárász, a former Google engineer, will be joining MIRI in the fall. Mihály has an MSc summa cum laude in mathematics from Eotvos Lorand University, Budapest. Mihály attended MIRI's earliest workshops, and is the lead author of the paper \"Robust Cooperation in the Prisoner's Dilemma: Program Equilibrium via Provability Logic.\" He's a brilliant mathematician (with a perfect score at the International Math Olympiad) who has worked with us a number of times in the past, and we're very excited by the prospect of having him on the core research team.\n \nScott Garrabrant is joining MIRI toward the end of 2015, after completing a mathematics PhD at UCLA. He is currently studying applications of theoretical computer science to enumerative combinatorics. Scott was one of the most impressive attendees of the MIRI Summer Fellows Program, and has been steadily producing a large number of new technical results on the Intelligent Agent Foundations Forum. I'm thrilled to have him working on these issues full-time.\n \nWe've already begun executing on some of our other fundraiser goals, as well. Over the last few weeks, we have brought Jack Gallagher on as an intern to begin formalizing in type theory certain tools that MIRI has developed (described briefly in this post). His code can be found in a few different repositories on github. We've also brought on another intern, Kaya Stechly, to help us write up some of the many new results that we haven't yet had the time to polish.\nI'm eager to see what this new team can do going forward. Meanwhile, there are even more recruitment opportunities and projects that we'd like to undertake, given sufficient funding. Further donations at this point would allow us to grow more quickly and more securely. Over the course of the fundraiser, we've laid out a number of reasons why we think MIRI's growth is important:\n\nFour Background Claims explains why we think AI will have an increasingly large impact as it begins to outperform humans in general reasoning tasks.\nAssessing Our Past and Potential Impact and What Sets MIRI Apart? argue that MIRI is unusually well-positioned to help make the long-term impact of AI positive.\nMIRI's Approach explains why we think our technical agenda is tractable and highly important.\nAn Astounding Year and Why Now Matters note that the interest in AI safety work is booming, and this is a critical time for MIRI to have a big impact on early AI alignment discussions.\nAnd Target 1, Target 2, and Target 3 detail what we would use additional funding for.\n\nWe've made our case, and our donors have come through in a big way. 
However, our funding gap isn't closed yet, and additional donors over the next few hours can still make a difference in deciding which of our future plans we can begin executing on.\nTo all our supporters: Thank you for helping us make our expansion plans a reality! We owe this new growth to you. Now let's see what we can do with one more day!\n\nUpdate 12/3/15: Mihály Bárász has deferred his research fellowship, and now plans to join MIRI's research team in 2016 instead of late 2015.\n\nThe post Final fundraiser day: Announcing our new team appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "Final fundraiser day: Announcing our new team", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=33", "id": "1da18ef536cd69de93b29122e36e5181"} {"text": "AI and Effective Altruism\n\nMIRI is a research nonprofit specializing in a poorly-explored set of problems in theoretical computer science. GiveDirectly is a cash transfer service that gives money to poor households in East Africa. What kind of conference would bring together representatives from such disparate organizations — alongside policy analysts, philanthropists, philosophers, and many more?\nEffective Altruism Global, which is beginning its Oxford session in a few hours, is that kind of conference. Effective altruism (EA) is a diverse community of do-gooders with a common interest in bringing the tools of science to bear on the world's biggest problems. EA organizations like GiveDirectly, the Centre for Effective Altruism, and the charity evaluator GiveWell have made a big splash by calling for new standards of transparency and humanitarian impact in the nonprofit sector.\nWhat is MIRI's connection to effective altruism? In what sense is safety research in artificial intelligence \"altruism,\" and why do we assign a high probability to this being a critically important area of computer science in the coming decades? I'll give quick answers to each of those questions below.\n\nMIRI and effective altruism\nWhy is MIRI associated with EA? In large part because effective altruists and MIRI use the same kind of criteria in deciding what work to prioritize.\nMIRI's mission, to develop the formal tools needed to make smarter-than-human AI systems useful and safe, comes from our big-picture view that scientific and technological advances will be among the largest determiners of human welfare, as they have been historically. Automating intellectual labor is therefore likely to be a uniquely high-impact line of research — both for good and for ill. (See Four Background Claims.) Which open problems we work on then falls out of our efforts to identify tractable and neglected theoretical prerequisites for aligning the goals of AI systems with our values. (See MIRI's Approach.)\n\nDaniel Dewey, Nick Bostrom, Elon Musk, Nate Soares, and Stuart Russell discuss AI risk at the EA Global conference. Photo by Robbie Shade.\n\nMIRI is far from the only group that uses criteria like these to identify important cause areas and interventions, and these groups have found that banding together is a useful way to have an even larger impact. Because members of these groups aren't permanently wedded to a single cause area, and because we assign a lot of value to our common outlook in its own right, we can readily share resources and work together to promote the many exciting ideas that are springing out from this outlook.
Hence the effective altruist community.\nOne example of this useful exchange was MIRI's previous Executive Director, Luke Muehlhauser, leaving MIRI in June to investigate nutrition science and other areas for potential philanthropic opportunities under the Open Philanthropy Project, an offshoot of GiveWell.1 In turn, OpenPhil has helped fund a large AI grants program that MIRI participated in.\nGiveWell/OpenPhil staff have given us extremely useful critical feedback in the past, and we've had a number of conversations with them over the years (1, 2, 3, 4, 5). Although they work on a much broader range of topics than MIRI does and they don't share all of our views, their interest in finding interventions that are \"important, tractable and relatively uncrowded\" has led them to pick out AI as an important area to investigate for reasons that overlap with MIRI's. (See OpenPhil's March update on global catastrophic risk and their newly released overview document on potential risks from advanced artificial intelligence.)\nMost EAs work on areas other than AI risk, and MIRI's approach is far from the only plausible way to have an outsized impact on human welfare. Because we attempt to base our decisions on broadly EA considerations, however — and therefore end up promoting EA-like philosophical commitments when we explain the reasoning behind our research approach — we've ended up forming strong ties to many other people with an interest in identifying high-impact humanitarian interventions.\nHigh-stakes and high-probability risks\nA surprisingly common misconception about EA cause areas is that they break down into three groups: high-probability crises afflicting the global poor; medium-probability crises afflicting non-human animals; and low-probability global catastrophes. The assumption (for example, in Dylan Matthews' recent Vox article) is that this is the argument for working on AI safety or biosecurity: there's a very small chance of disaster occurring, but disaster would be so terrible if it did occur that it's worth investigating just in case.\nThis misunderstands MIRI's position — and, I believe, the position of people interested in technological risk at the Future of Humanity Institute and a number of other organizations. We believe that existential risk from misaligned autonomous AI systems is high-probability if we do nothing to avert it, and we base our case for MIRI on that view; if we thought that the risks from AI were very unlikely to arise, we would deprioritize AI alignment research in favor of other urgent research projects.\nAs a result, we expect EAs who strongly disagree with us about the likely future trajectory of the field of AI to work on areas other than AI risk. 
We don't think EAs should donate to MIRI \"just in case,\" and we reject arguments based on \"Pascal's Mugging.\" (\"Pascal's Mugging\" is the name MIRI researchers coined for decision-making that mistakenly focuses on infinitesimally small probabilities of superexponentially vast benefits.)2\nAs Stuart Russell writes, \"Improving decision quality, irrespective of the utility function chosen, has been the goal of AI research – the mainstream goal on which we now spend billions per year, not the secret plot of some lone evil genius.\" Thousands of person-hours are pouring into research to increase the general capabilities of AI systems, with the aim of building systems that can outperform humans in arbitrary cognitive tasks.\nWe don't know when such efforts will succeed, but we expect them to succeed eventually — possibly in the next few decades, and quite plausibly during this century. Shoring up safety guarantees for autonomous AI systems would allow us to reap many more of the benefits from advances in AI while significantly reducing the probability of a global disaster over the long term.\nMIRI's mission of making smarter-than-human AI technology reliably beneficial is ambitious, but it's ambitious in the fashion of goals like \"prevent global warming\" or \"abolish factory farming.\" Working toward such goals usually means making incremental progress that other actors can build on — more like setting aside $x of each month's paycheck for a child's college fund than like buying a series of once-off $x lottery tickets.\nA particular $100 is unlikely to make a large once-off impact on your child's career prospects, but it can still be a wise investment. No single charity working against global warming is going to solve the entire problem, but that doesn't make charitable donations useless. Although MIRI is a small organization, our work represents early progress toward more robust, transparent, and beneficial AI systems, which can then be built on by other groups and integrated into AI system design.3\nRather than saying that AI-mediated catastrophes are high-probability and stopping there, though, I would say that such catastrophes are high-probability conditional on AI research continuing on its current trajectory. Disaster isn't necessarily high-probability if the field of AI shifts to include alignment work along with capabilities work among its key focuses.\nIt's because we consider AI disasters neither unlikely nor unavoidable that we think technical work in this area is important. From the perspective of aspiring effective altruists, the most essential risks to work on will be ones that are highly likely to occur in the near future if we do nothing, but substantially less likely to occur if we work on the problem and get existing research communities and scientific institutions involved.\nPrinciples like these apply outside the domain of AI, and although MIRI is currently the only organization specializing in long-term technical research on AI alignment, we're one of a large and growing number of organizations that attempt to put these underlying EA principles into practice in one fashion or another. And to that extent, although effective altruists disagree about the best way to improve the world, we ultimately find ourselves on the same team.\n \n \n\n \nAlthough effective altruism is sometimes divided into separate far-future, animal welfare, global poverty, and \"meta\" cause areas, this has always been a somewhat artificial division. 
Toby Ord, the founder of the poverty relief organization Giving What We Can, is one of the leading scholars studying existential risk and holds a position at the Future of Humanity Institute. David Pearce, one of the strongest proponents of animal activism within EA, is best known for his futurism. Peter Singer is famous for his early promotion of global poverty causes as well as his promotion of animal welfare. And Anna Salamon, the Executive Director of the \"meta\"-focused Center for Applied Rationality, is a former MIRI researcher.\nQuoting MIRI senior researcher Eliezer Yudkowsky in 2013:\nI abjure, refute, and disclaim all forms of Pascalian reasoning and multiplying tiny probabilities by large impacts when it comes to existential risk. We live on a planet with upcoming prospects of, among other things, human intelligence enhancement, molecular nanotechnology, sufficiently advanced biotechnology, brain-computer interfaces, and of course Artificial Intelligence in several guises. If something has only a tiny chance of impacting the fate of the world, there should be something with a larger probability of an equally huge impact to worry about instead. […]\nTo clarify, \"Don't multiply tiny probabilities by large impacts\" is something that I apply to large-scale projects and lines of historical probability. On a very large scale, if you think FAI [Friendly AI] stands a serious chance of saving the world, then humanity should dump a bunch of effort into it, and if nobody's dumping effort into it then you should dump more effort than currently into it. On a smaller scale, to compare two x-risk mitigation projects in demand of money, you need to estimate something about marginal impacts of the next added effort (where the common currency of utilons should probably not be lives saved, but \"probability of an ok outcome\", i.e., the probability of ending up with a happy intergalactic civilization). In this case the average marginal added dollar can only account for a very tiny slice of probability, but this is not Pascal's Wager. Large efforts with a success-or-failure criterion are rightly, justly, and unavoidably going to end up with small marginally increased probabilities of success per added small unit of effort. It would only be Pascal's Wager if the whole route-to-an-OK-outcome were assigned a tiny probability, and then a large payoff used to shut down further discussion of whether the next unit of effort should go there or to a different x-risk.\nNick Bostrom made a similar point at EA Global: that AI is an important cause even though any one individual's actions are unlikely to make a decisive difference. In a panel on artificial superintelligence, Bostrom said that he thought people had a \"low\" (as opposed to \"high\" or \"medium\") probability of making a difference on AI risk, which Matthews and a number of others appear to have taken to mean that Bostrom thinks AI is a speculative cause area. When I asked Bostrom about his intended meaning myself, however, he elaborated:\nThe point I was making in the EA global comment was the probability that you (for any 'you' in the audience) will save the world from an AI catastrophe is very small, not that the probability of AI catastrophe is very small.
Thus working on AI risk is similar to volunteering for a presidential election campaign.\nThe post AI and Effective Altruism appeared first on Machine Intelligence Research Institute.", "url": "https://intelligence.org", "title": "AI and Effective Altruism", "source": "intelligence.org", "date_published": "n/a", "paged_url": "https://intelligence.org/feed?paged=33", "id": "1a938884749ef989a59bc34db9ef4440"} {"text": "Powerful planners, not sentient software\n\nOver the past few months, some major media outlets have been spreading concern about the idea that AI might spontaneously acquire sentience and turn against us. Many people have pointed out the flaws with this notion, including Andrew Ng, an AI scientist of some renown:\nI don't see any realistic path from the stuff we work on today—which is amazing and creating tons of value—but I don't see any path for the software we write to turn evil.\nHe goes on to say, on the topic of sentient machines:\nComputers are becoming more intelligent and that's useful as in self-driving cars or speech recognition systems or search engines. That's intelligence. But sentience and consciousness is not something that most of the people I talk to think we're on the path to.\nI say, these objections are correct. I endorse Ng's points wholeheartedly — I see few pathways via which software we write could spontaneously \"turn evil.\"\nI do think that there is important work we need to do in advance if we want to be able to use powerful AI systems for the benefit of all, but this is not because a powerful AI system might acquire some \"spark of consciousness\" and turn against us. I also don't worry about creating some Vulcan-esque machine that deduces (using cold mechanic reasoning) that it's \"logical\" to end humanity, that we are in some fashion \"unworthy.\" The reason to do research in advance is not so fantastic as that. Rather, we simply don't yet know how to program intelligent machines to reliably do good things without unintended consequences.\nThe problem isn't Terminator. It's \"King Midas.\" King Midas got exactly what he wished for — every object he touched turned to gold. His food turned to gold, his children turned to gold, and he died hungry and alone.\nPowerful intelligent software systems are just that: software systems. There is no spark of consciousness which descends upon sufficiently powerful planning algorithms and imbues them with feelings of love or hatred. You get only what you program.1\n\nTo build a powerful AI software system, you need to write a program that represents the world somehow, and that continually refines this world-model in response to percepts and experience. You also need to program powerful planning algorithms that use this world-model to predict the future and find paths that lead towards futures of some specific type.\nThe focus of our research at MIRI isn't centered on sentient machines that think or feel as we do. It's aimed towards improving our ability to program software systems to execute plans leading towards very specific types of futures.\nA machine programmed to build a highly accurate world-model and employ powerful planning algorithms could yield extraordinary benefits. Scientific and technological innovation have had great impacts on quality of life around the world, and if we can program machines to be intelligent in the way that humans are intelligent — only faster and better — we can automate scientific and technological innovation. 
When it comes to the task of improving human and animal welfare, that would be a game-changer.\nTo build a machine that attains those benefits, the first challenge is to do this world-modeling and planning in a highly reliable fashion: you need to ensure that it will consistently pursue its goal, whatever that is. If you can succeed at this, the second challenge is making that goal a safe and useful one.\nIf you build a powerful planning system that aims at futures in which cancer is cured, then it may well represent all of the following facts in its world-model: (a) The fastest path to a cancer cure involves proliferating robotic laboratories at the expense of the biosphere and kidnapping humans for experimentation; (b) once you realize this, you'll attempt to shut it down; and (c) if you shut it down, it will take a lot longer for cancer to be cured. The system may then execute a plan which involves deceiving you until it is able to resist and then proliferating robotic laboratories and kidnapping humans. This is, in fact, what you asked for.\nWe can avoid this sort of outcome, if we manage to build machines that do what we mean rather than what we said. That sort of behavior doesn't come for free: you have to program it in.\nA superhuman planning algorithm with an extremely good model of the world could find solutions you never imagined. It can make use of patterns you haven't noticed and find shortcuts you didn't recognize. If you follow a plan generated by a superintelligent search process, it could have disastrous unintended consequences. To quote professor Stuart Russell (author of the leading AI textbook):\nThe primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:\n1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.\n2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.\nA system that is optimizing a function of n variables, where the objective depends on a subset of size k