) is the first writeup of the ontology identification problem that we are aware of. It proposes a possible solution based on bisimulation.

We believe this proposal is vulnerable to a counterexample similar to the ones for the other methods we discuss in [Section: regularizers](#ElicitingLatentKnowledge.xhtml#h.akje5cz7knt2).^[\[113\]](#ElicitingLatentKnowledge.xhtml#ftnt113)^ This is likely not surprising to the authors, since they are not claiming a solution that works in the worst case.

[Taylor et al](https://intelligence.org/files/AlignmentMachineLearning.pdf) suggest learning a metric over possible worlds where (i) worlds change slowly over time and (ii) if two worlds are close then they behave similarly for all inputs. We sympathize with the intuition behind this approach, but haven't been able to turn it into a promising angle of attack on ELK.

[Yudkowsky](https://arbital.greaterwrong.com/p/pointing_finger) suggests a variety of approaches to encouraging learned models to represent events themselves rather than human judgments about those events. This discussion gestures at some of the ideas in [Section: regularizers](#ElicitingLatentKnowledge.xhtml#h.akje5cz7knt2) but is even more informal and preliminary, in part because Yudkowsky is more focused on issues with data quality.

### Mechanistic interpretability {#ElicitingLatentKnowledge.xhtml#h.w3tt0jawudyv}

ELK is also closely related to interpretability, and especially "mechanistic" interpretability as in [Cammarata et al](https://distill.pub/2020/circuits/). This work tries to examine machine learning models to understand how they are computing their outputs and especially what intermediates they are reasoning about.

If we understand how a neural network is thinking, then we can hope to directly "read off" what it knows about the world rather than try to train the model to tell us this information. In a very simple case, we might discover that a particular neuron represents the model's "beliefs" about whether there is a diamond in the room, and then simply look at that neuron rather than training the model to answer questions about diamonds.

From our perspective, the core questions are: how complex will our "interpretations" need to become, and how do interpretability researchers (or the tools they build) decide which interpretations are correct?^[\[114\]](#ElicitingLatentKnowledge.xhtml#ftnt114)^

If a question can be answered using a single neuron, then it is relatively straightforward to get answers by using either interpretability or fine-tuning: we can look through neurons by hand, or we can fine-tune using a simplicity/sparsity constraint or regularizer.
Even then there are possible concerns about finding a "human thinks there is a diamond" neuron, but we might rule these out by looking at how such a neuron relates to other neurons.

But the hard cases for ELK are precisely those where familiar human concepts are a very complicated function of model internals, and worst of all when the identification itself is *more* complex than the original predictor. In these cases, it is easy for the reporter to essentially "make up" the structure that the human evaluators or interpretability researchers are looking for.

In these cases we need to think about how evaluators decide between many different interpretations of a given model. That decision is likely to involve consistency checks between interpretations of different parts of the model, as well as structural judgments about which interpretation is less likely to be inventing the concepts we are looking for from scratch, which essentially amount to regularizers for selecting among candidate interpretations.

If we could fully define those checks and regularizers, then that would represent a plausible solution to ELK (that could be used either as a loss function for fine-tuning or a recipe for automated interpretability). If we can't define them, then we are left wondering whether any given set of *ad hoc* methods will generalize to more powerful models. Overall, our sense is that interpretability-based and fine-tuning approaches to ELK are extremely closely related.

The more important distinction is again methodological: in this report we are looking for "worst-case" solutions that successfully recover knowledge no matter how the predictor works. Interpretability research typically takes a more empirical approach, trying to understand how knowledge is structured in modern machine learning systems and aiming to scale up that understanding as models improve.

------------------------------------------------------------------------

[\[1\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref1) Ajeya Cotra works at Open Philanthropy and collaborated extensively on writing this report.

[\[2\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref2) See [Ontological Crises in Artificial Agent's Value Systems](https://arxiv.org/abs/…), [Formalizing two problems of realistic world models](https://intelligence.org/files/RealisticWorldModels.pdf), and [Ontology identification problem](https://arbital.greaterwrong.com/p/ontology_identification?l=5c). We discuss the differences with our perspective in [Appendix: ontology identification](#ElicitingLatentKnowledge.xhtml#h.2j70u1u7lcw7).

[\[3\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref3) Most relevantly [the pointers problem](https://www.alignmentforum.org/posts/gQY6LrTWJNkTv8YJR/the-pointers-problem-human-values-are-a-function-of-humans), [generalizable environment goals](https://intelligence.org/files/AlignmentMachineLearning.pdf), and [look where I'm pointing, not at my finger](https://arbital.greaterwrong.com/p/pointing_finger).

[\[4\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref4) For simplicity we talk about the system taking actions all at once and then observing the consequences. More realistically, the SmartVault may make a sequence of actions and observations and perform additional planning in between them. This complicates the planning algorithm considerably but doesn't affect any of our discussion.

[\[5\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref5) For simplicity and concreteness you can imagine a brute force search. A more interesting system might train a value function and/or policy, do Monte-Carlo Tree Search with learned heuristics, and so on. These techniques introduce new learned models, and in practice we would care about ELK for each of them. But we don't believe that this complication changes the basic picture and so we leave it out.

[\[6\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref6) This algorithm is what [Everitt et al](https://arxiv.org/abs/1908.04734) call "current RF optimization."

[\[7\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref7) One important kind of sensor is a human embedded in the environment who could e.g. actually visit the SmartVault to check up on what's happening and write up a report. We are most concerned about the worst-case scenario where the SmartVault itself (likely acting in concert with other AI systems) incapacitates those humans and writes the "report" for them.

[\[8\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref8) The model need not actually be divided into parts like this. Also the details of the model's structure will depend on exactly how it is trained; in [Appendix: generative modeling](#ElicitingLatentKnowledge.xhtml#h.trvedm0xgro) we spell out a more concrete situation where the predictor is a VAE. In this section we'll stick with the simple caricature for simplicity.

[\[9\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref9) In order to train the SmartVault to predict the consequences of very sophisticated action sequences, we may need to iteratively train the predictor on more and more sophisticated plans rather than only ever training on actions produced by a very weak AI. We won't discuss any of the complications posed by this kind of iterative scheme, but we don't think it changes any of the dynamics discussed in this report.

[\[10\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref10) In order to have any hope of generalization, we will either need to use some form of regularization (such as early stopping or dropout) or rely on some hypothesis about what kind of model SGD tends to learn given limited parameters. We will explicitly discuss regularization in [Section: regularization](#ElicitingLatentKnowledge.xhtml#h.akje5cz7knt2) where we will explain why it doesn't address any of the counterexamples raised in this section. Until then we will brush this issue under the rug, but avoid considering counterexamples like "answer correctly unless the year is 2022, in which case say 'banana'" which we think would be addressed by realistic regularization.

[\[11\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref11) The arguments in this article can be immediately generalized to richer probabilistic models of the environment. It's less obvious that they can be generalized to non-Bayesian models, but we do expect that the basic idea will apply for a very wide range of ways that the predictor and human could think about the world.

[\[12\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref12) The Bayes net specifies the probability distribution over each node's value given the values of its parents, and that defines a joint distribution over all the nodes. It's easy to sample randomly from this distribution---you can start at the top and sample from each node in turn---but it's potentially very hard to compute the conditional probability distribution given an observation, because you need to figure out what the best explanation is for that observation. For simplicity you can imagine the model exhaustively listing out every possible world and computing its probability, but it would be more realistic to consider some [approximate inference algorithm](https://en.wikipedia.org/wiki/Bayesian_network#Inference_and_learning).

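To make the asymmetry in the footnote above concrete, here is a minimal sketch of ancestral sampling from a toy Bayes net; the node names and probabilities are invented for illustration and are not taken from the report.

```python
import random

# Toy Bayes net over binary variables, listed in topological order.
# Each node maps to (parents, function giving P(node = True | parent values)).
# The nodes and numbers here are purely illustrative.
TOY_NET = {
    "tampering": ([], lambda v: 0.01),
    "diamond_present": (["tampering"], lambda v: 0.50 if v["tampering"] else 0.99),
    "camera_shows_diamond": (
        ["tampering", "diamond_present"],
        lambda v: 0.95 if (v["tampering"] or v["diamond_present"]) else 0.05,
    ),
}

def ancestral_sample(net):
    """Sampling is easy: walk the nodes top-down, sampling each given its parents."""
    values = {}
    for node, (_parents, p_true) in net.items():
        values[node] = random.random() < p_true(values)
    return values

print(ancestral_sample(TOY_NET))

# The hard direction is conditioning: computing P(diamond_present | camera_shows_diamond)
# exactly means reasoning about every joint explanation of the observation, which is
# intractable for large nets and is why approximate inference algorithms are needed.
```
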
[\[13\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref13) Of course it's not realistic to output an explicit description of a probability distribution over such a high-dimensional space (for example, we could not list each of the exponentially many possible videos in order to give the probability of each). Different ways of approximating this dynamic lead to different training strategies for the predictor; we describe one example in [Appendix: generative modeling](#ElicitingLatentKnowledge.xhtml#h.trvedm0xgro).

[\[14\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref14) Though we do believe that messiness may quantitatively change *when* problems occur. As a caricature, if we had a method that worked as long as the predictor's Bayes net had fewer than 10^9 parameters, it might end up working for a realistic messy AI until it had 10^12 parameters, since most of those parameters do not specify a single monolithic model in which inference is performed.

[\[15\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref15) We have no precise way to specify what these functions should do or even why we should expect them to exist. Intuitively, both the predictor and the human have some internal Bayes nets that make good predictions about reality, and there is some "real" correspondence between those Bayes nets and reality which causes the predictions to be good. Then direct translation should effectively compose those two correspondences: translate from the predictor's Bayes net to "reality", and then from "reality" to the human Bayes net. Getting a better philosophical handle on this relationship is one possible approach to ELK, although it's not clear whether it's necessary.

Fortunately our methodology does not require giving a general answer to this question in order to start doing research: our goal is just to construct counterexamples in which a proposed training strategy definitely doesn't work. And it's easy to construct counterexamples in which the expected behavior of the direct translator is clear---we present one in [Appendix: game of life](#ElicitingLatentKnowledge.xhtml#h.5jm9ag9hztbs). If we were able to solve ELK in these cases, then we could try to construct a different kind of counterexample where it wasn't even clear what the reporter *should* do.

[\[16\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref16) A realistic human Bayes net would be too rich to represent this kind of fact with a single node---there isn't always a single diamond whose location is unknown. But more complex relationships can also be described as Bayes nets.

[\[17\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref17) At a high level, our justification is that (i) there are many plausible approaches for reducing error rates low enough that you would have zero in the training set, (ii) it seems quite hard to robustly learn the direct translator even *without* errors, (iii) if we could robustly learn the direct translator even without errors, we would likely automatically have some "margin of error", (iv) it seems methodologically easier to start with the problem "how do we learn the right thing even without errors?"

[\[18\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref18) See Paul's post [My research methodology](https://www.alignmentforum.org/posts/EF5M6CmKRd6qZk27Z/my-research-methodology) which describes essentially the same methodology. Note that the discussion in this report is slightly specialized to algorithms for ELK that try to learn a reporter, but the general approach is similar.

[\[19\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref19) Related to Eliezer's 2015 [Methodology of foreseeable difficulties](https://arbital.greaterwrong.com/p/foreseeable_difficulties?l=6r); the differences are that we are more interested in the day-to-day process behind producing research than the underlying philosophy, are more open to "weird" counterexamples (which seem plausible but unlikely), and are not claiming that our method is necessary for aligning powerful AI. For us this methodology fills a role more similar to the role of proof in theoretical fields or experiment in empirical fields.

[\[20\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref20) Often we'd first go back to step #3 and give the builder a chance to revise their desired reporter, e.g. by incorporating ideas used in the breaker's bad reporter. We only go back to step #1/#2 once the second part of the game has reached its equilibrium.

[\[21\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref21) Note that the human may not be able to follow all the events *leading up to* the situation in question --- most of our counterexamples will involve cases like these. But at the end of the day, after whatever sophisticated tampering or robbing occurred, the human can easily understand the concept that the diamond is not actually in the room (regardless of what it looks like on camera).

[\[22\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref22) For example, "very quickly swapping the diamond out for a fake and running away with the actual diamond" is a central example of undetectable robbery, while "slowly swapping the atoms in the diamond out one by one with new carbon atoms over the course of many years" is a non-central example of undetectable robbery.

[\[23\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref23) [Appendix: game of life](#ElicitingLatentKnowledge.xhtml#h.5jm9ag9hztbs) gives an example where we go unusually far in this direction.

[\[24\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref24) Throughout this section we will assume that all humans use the same Bayes net. If humans understand different things about the world, then we can generally do better by collecting labels from many humans, having each of them label cases they can understand. In that case we say "human Bayes net" to mean some kind of informal combination of all of the models---we won't go into details on this combination but we don't think it changes the basic picture described in this report.

[\[25\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref25) The space of possible considerations is generally exponentially large. For example, a human demonstrator could generate some text and encrypt it, and an identical human observer would take exponentially long to figure out what text they generated. In general "considerations" don't map neatly onto nodes in a Bayes net, but we stick with the visual metaphor for simplicity.

[\[26\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref26) In reality a human model of the world is not a Bayes net, but we'll stick with the example we set up in previous sections---we believe the picture would be the same for a more realistic model.

[\[27\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref27) That is, getting the true joint probability distribution over all the nodes in the human Bayes net, conditional on the video and action sequence.

[\[28\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref28) For example, suppose the human operator exploited a certain kind of software vulnerability to hack into the camera. If the predictor correctly predicted that everything would look fine on video, then it must be because the AI was able to identify the software vulnerability the human used. And in that case if the reporter does the best it can at inference in the human model, it will also identify the software vulnerability and hence report correctly. There are some important subtleties in this argument, which we elide here but discuss in [Section: speed](#ElicitingLatentKnowledge.xhtml#h.goyuzwqyv9m8).

[\[29\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref29) One way of looking at the difficulty is that *if* a human operating the SmartVault were to find an action sequence that exploited this vulnerability, it would probably be because they had chanced into it randomly rather than because they had deliberately engineered it. And so a model which is effectively predicting "how would a human operator answer questions?" would be simulating a human who was just as surprised as the observer. Given realistic amounts of model uncertainty this is not *really* the most likely explanation, but (i) it's nevertheless a very plausible generalization for our models since they will never have seen this kind of extreme improbability, (ii) the actual best explanation may be something even worse, e.g. where the human operator has been assassinated and a more intelligent AI is now making decisions for them.

[\[30\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref30) We're modeling the human understanding of the world as a Bayes net, and so improved understanding takes the form of a new Bayes net, but of course real learning is much more complex. As usual, it would be completely valid for the builder to point out some important property of human learning that was lost in this model and then demand that the breaker specify a test case that reproduced that property. But we don't think that this simplification changes anything material.

[\[31\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref31) Or whose evaluation can be broken down into subtasks.

[\[32\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref32) For some examples of Paul and others' thinking, see [capability amplification](https://ai-alignment.com/policy-amplification-6a70cbee4f34), [AI safety via debate](https://arxiv.org/abs/1805.00899), [security amplification](https://ai-alignment.com/security-amplification-f4931419f903), [reliability amplification](https://ai-alignment.com/reliability-amplification-a96efa115687), [universality and consequentialism within HCH](https://ai-alignment.com/universality-and-consequentialism-within-hch-c0bee00365bd), [supervising strong learners by amplifying weak experts](https://arxiv.org/abs/1810.08575), [obfuscated arguments](https://www.lesswrong.com/posts/PJLABqQ962hZEqhdB/debate-update-obfuscated-arguments-problem), [informed oversight](https://ai-alignment.com/informed-oversight-18fcb5d3d1e1), [ALBA](https://ai-alignment.com/alba-an-explicit-proposal-for-aligned-ai-17a55f60bbcf), and other posts at [ai-alignment.com](https://ai-alignment.com/).

[\[33\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref33) Most likely this would involve some kind of joint training, where our AI helps humans understand the world better in parallel with using gradient descent to develop its own understanding. To reiterate, we are leaving details vague because we don't think that our counterexample depends on those details.

[\[34\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref34) In practice, this procedure could produce an aligned AI which would operate the SmartVault instead of the human.

[\[35\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref35) Or at least it superficially *looks* plausible---it's possible that a builder could give a good argument that actually anything learnable by gradient descent is learnable by groups of humans doing science, and this would undermine the proposed counterexample and be a valid move in our research methodology. But for now it's not clear how the builder would do that, or how they would exhibit a contradiction in the answers of a breaker who simply said "no" whenever asked "could the humans learn about this feature of the world by doing X?"

[\[36\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref36) We could try to subsume gradient descent into the process of science---perhaps it's OK if humans aren't improving their understanding of natural phenomena so much as replacing it wholesale with a new model found by gradient descent. We'll return to this possibility in the next section, but in this section we'll imagine using AI assistants to build models that are "human-understandable" in a more naive sense.

[\[37\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref37) Ideally scaling sublinearly with the cost of the unaligned benchmark.

[\[38\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref38) We could also achieve a similar goal via a special form of debate where the participants suggest models and then we evaluate their suggestions against the training set. This is a quantitatively unusual form of debate (since the size of the "move" made by debaters is comparable to their number of parameters) and may be worth thinking about separately.

[\[39\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref39) We also include a term that penalizes changes according to how improbable the human assesses them to be. The main purpose of this approach is to combine the efficiency of gradient descent and flexibility of the neural net parameterization with something closer to the human prior.

[\[40\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref40) In reality the operator would be a human with AI assistants; this is particularly important since a proposed change may be too large for a human to process in a reasonable amount of time.

[\[41\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref41) You might hope that we *only* need gradient descent for learning this kind of well-scoped model, i.e. that we won't have to train incomprehensible models with incomprehensible holistic pictures of the world and how different domains fit together. That hope raises many subtle questions about how such models are integrated and exactly what kind of narrowness is needed to make them safe. But in this report we'll neglect that hope for the same reasons that we ignore the possibility of building more sensors or avoiding human-incomprehensible models altogether: we are focused on the worst case where AI systems can quickly build extremely useful, broad and incomprehensible pictures of the world, and want to remove the economic incentives to take risky actions even in that very pessimistic situation.

[\[42\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref42) Or formally keep the nodes in the model but change their meaning, which is just as bad (or worse if the human doesn't notice the changed meaning).

[\[43\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref43) At least in the relevant computational budget.

[\[44\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref44) This discussion highlights the relationship between ELK and interpretability, which we discuss in [Appendix: interpretability](#ElicitingLatentKnowledge.xhtml#h.w3tt0jawudyv).

[\[45\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref45) Or any other model of the world that the breaker proposes.

[\[46\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref46) Or rather, a reporter that does its best-effort inference in the best human-understandable Bayes net. We'll continue to refer to this as a "human simulator" for ease.

[\[47\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref47) Though we mean "inference" in a very broad sense that e.g. captures reasoning deductively from premises stated in first order logic.

[\[48\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref48) And if it did occur it seems like an unusually good candidate for a case where doing science (and in particular tracking how the new structures implement the old structures) outcompetes gradient descent, and on top of that a case where translation is likely to be relatively easy to pick out with suitable regularization.

[\[49\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref49) It might involve heuristics about how to think that are intimately interwoven with object level beliefs, or dual ways of looking at familiar structures, or reasoning directly about a messy tapestry of correlations in a way that captures important regularities but lacks hierarchical structure. But most of our concern is with models that we just don't have the language to talk about easily despite usefully reflecting reality. Our broader concern is that optimistic stories about the familiarity of AI cognition may be lacking in imagination. (We also consider those optimistic stories plausible, we just really don't think we know enough to be confident.)

[\[50\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref50) This may seem useless given how many different questions we potentially care about, but if we could answer even a single question correctly then it might give us some leverage, e.g. we could train a general reporter who needs to agree with the (now trusted) very narrow reporters.

[\[51\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref51) It may seem unlikely for honesty to be exactly the best policy. But we would be happy to merely show that honesty was a *useful* strategy, i.e. such that the loss-minimizing reporter at least internally specifies how to do direct translation. We think that would address the hardest part of the problem, which is finding any question at all to which direct translation is the answer. We discuss this possibility in [Section: regularization and imitative generalization](#ElicitingLatentKnowledge.xhtml#h.a54v0atju0fd).

[\[52\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref52) Or using something like [adaptive computation time](https://arxiv.org/abs/1603.08983) to determine the layer.

[\[53\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref53) We impose this penalty separately for every question; the reporter wants to minimize the average time it takes to return an answer, including both the computation it does and the computation done by the predictor in order to compute all of the activations that the reporter looks at.

[\[54\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref54) More precisely, as soon as the expected improvement in answer quality from thinking longer is low enough compared to the expected cost from using more computation.

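A schematic way to write the compute-penalized objective described in the two footnotes above; the notation, including the tradeoff coefficient \(\lambda\) and the time terms, is ours rather than the report's.

```latex
\mathcal{L}(\text{reporter}) \;=\;
\mathbb{E}_{(x,\,q)}\Big[\, \ell\big(\text{answer}(x,q)\big)
\;+\; \lambda \cdot \big( t_{\text{reporter}}(x,q) + t_{\text{predictor}}(x,q) \big) \Big]
```

Here \(t_{\text{predictor}}\) would count only the activations the reporter actually reads, and the loss-minimizing reporter stops thinking as soon as the expected reduction in \(\ell\) falls below \(\lambda\) times the marginal compute.
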
[\[55\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref55) It's not really important what form arguments take; you can imagine using a coarser probabilistic model, e.g. a faster but less accurate simulator, that sometimes has enough confidence to answer.

[\[56\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref56) Technically it can do even better---it should stop and output "the diamond is in the room" as soon as it finds an argument of the form "The human is very unlikely to be very confident that the diamond is not in the room."

[\[57\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref57) Or perhaps a set of actions by which the SmartVault itself performs this kind of tampering, e.g. because it's the only kind of tampering that will lead our bad reporter to say "the diamond remains in the room."

[\[58\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref58) This may sound like an arbitrary "hack," but it has the following natural justification. We are worried about a counterexample where the human translator has to use a bunch of parameters to specify how human concepts are implemented in the AI's model. But if we already know what the human model looks like, then telling you how the two models relate also tells you a lot about what the AI's model looks like. That suggests that we could have the reporter's parameters do "double duty," using them to both define the reporter and help define the predictor itself. If that worked really well, then specifying the (predictor) + (direct translator) might not actually take many more parameters than specifying the predictor alone, and in particular might end up taking fewer parameters than (predictor) + (human simulator).

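Writing \(C(\cdot)\) for description length under whatever prior or regularizer is in play (our notation, not the report's), the hope sketched in the footnote above is roughly:

```latex
C(\text{predictor} + \text{direct translator}) \;\approx\; C(\text{predictor})
\;<\; C(\text{predictor} + \text{human simulator})
```

since the direct translator's parameters can do "double duty" in pinning down the predictor, while the human simulator has to pay for a model of the human on top of the predictor.
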
[\[59\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref59) It can just perform one random spot check before answering each question, and thereby answer *some* questions inconsistently unless the predictor is exactly correct. In fact doing a few random spot checks can easily make this reporter *more* sensitive to details of the predictor than the intended reporter.

[\[60\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref60) It's not surprising that this method breaks down when the predictor is simple, because in this case the basic optimistic intuition---that the parameters of the reporter could do "double duty" and help us compress the predictor---also breaks down.

[\[61\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref61) This can only occur if the predictor has massive "weight sharing," where the same parameters get used over and over again in many different parts of the predictor's Bayes net.

[\[62\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref62) One complication in this counterexample is that a good predictor will also have to specify a distribution over *initial conditions*---it needs to know not only how atoms behave, but also that there is a high prior probability on the room containing particular configurations of atoms that represent humans. And this distribution over initial conditions is necessarily quite complex, since e.g. it includes everything the predictor knows about human psychology. We discuss this issue in more detail in [Appendix: weight sharing](#ElicitingLatentKnowledge.xhtml#h.3fyocqpbzqj). Overall we don't think that this approach can resolve the problem, but the question isn't completely settled.

[\[63\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref63) This is analogous to the complexity class MIP where multiple non-communicating provers all answer questions from a weak verifier. If a single prover is interacting with a verifier, then it is possible for them to figure out the "most convincing thing I could say" in polynomial space, and therefore it is impossible for them to convince the verifier of any claim that couldn't be verified in polynomial space. But once there are two provers the game changes completely: the two provers need to effectively agree on a consistent story that works across all of the different inputs that they might be presented with (since neither of them knows what the other has been asked), and this generally cannot be done even in exponential time. This means that there is a *much* broader class of problems for which "honesty is the best policy" for two non-communicating provers. Interestingly, two *cooperating* provers who can't communicate are much more powerful than two *competing* provers.

[\[64\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref64) In some sense this is just continuing the thread from [Section: speed](#ElicitingLatentKnowledge.xhtml#h.goyuzwqyv9m8), and using a richer set of consistency checks as a plausible source of computational difficulty for bad reporters. And as mentioned in that section, this can be interesting even if we can't get all the way to honesty being the *best* policy; as long as direct translation becomes a useful computational expedient, then the optimal reporter will at least know how to do direct translation even if it sometimes does something different, and we think that might address the hardest part of the problem, as discussed in [Appendix: regularization and imitative generalization](#ElicitingLatentKnowledge.xhtml#h.a54v0atju0fd).

[\[65\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref65) In this report we have focused on eliciting knowledge from a generative model because it is the cleanest and simplest case of the problem, but the problem statement can be translated almost verbatim to model-free RL agents or any other system that is trained by gradient descent and has acquired some "knowledge" that helps it achieve a low loss on the training set.

[\[66\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref66) We could either use this reward signal directly for model-free RL, or optimize it using search and prediction. In general, we could use this reward signal any time we might have used an objective that would incentivize misaligned power-seeking.

[\[67\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref67) We think this procedure could be used to construct an aligned version of any consequences-based reward signal, simply by swapping out "making cakes" with whatever other consequence we want.

[\[68\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref68) We'd be fine with an AI describing these events as chance, by which it simply means everything that the human does not model and simply treats as noise. Or if the human model is richer it may be better for the AI to appeal to concepts like "Something beyond my ken has happened to bring about this outcome." But these differences don't matter much, and we are evaluating solutions to ELK based on whether they get the basics right.

[\[69\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref69) Though most of the time I might prefer to think longer before deciding to delegate, even if I suspect someone else will ultimately be wiser.

[\[70\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref70) For simplicity this discussion also talks about an AI acting on behalf of a single human, potentially interacting with other humans' AIs. But it seems like the discussion applies almost verbatim to AI systems that represent some group of humans or whatever other decision-making process is trying to use AI (e.g. a firm, bureaucracy, group of friends, neighborhood...)

[\[71\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref71) We are going to talk as if M was considering all of these worlds explicitly. In practice, M is probably usually using heuristics that allow it to predict features of worlds without considering them in detail. When we describe M making a prediction about what a human would say in a given world, you should imagine it using the same kinds of heuristics to make those predictions even if it isn't thinking about that world in detail. Analyzing this situation carefully seems important but is far outside the scope of this appendix; we hope that the cursory discussion here can at least communicate why we are optimistic about these ideas.

[\[72\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref72) I'll talk about humans posing questions that are answered with ELK, but it would be much better to use machine assistance to help identify important considerations and reason about them. For example, you could imagine a debate between two powerful AI systems about which of the two people I should delegate to. The debate then "bottoms out" with ELK in the sense that each debater ultimately justifies their predictions about what will happen by asking questions to M using ELK.

[\[73\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref73) I.e. whoever's decisions I *most* trust. I could also prefer to delegate to a distribution, and that may be desirable under certain conditions where I think there is "adverse selection" and a person I pick is unusually unlikely to choose badly.

[\[74\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref74) In reality we should get a utility function at *every stage*, and each human H_n should be helping pick worlds based on *both* who to delegate to and how much they like what's happening in the world. Rather than having two discrete phases of "pick who to delegate to" and "pick what world to bring about" those can then happen at the same time, with predictions about each of them becoming more and more refined as we iterate further.

[\[75\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref75) The first two of these assumptions are basically what Paul has called "[strategy stealing](https://www.alignmentforum.org/posts/nRAMpjnb6Z4Qv3imF/the-strategy-stealing-assumption)."

[\[76\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref76) This is a special case of "decoupled RL" as proposed in [Everitt 2018](https://openresearch-repository.anu.edu.au/bitstream/…/1/Tom%20Everitt%20Thesis%202018.pdf), a proposal designed …

[\[77\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref77) When Eliezer says "best possible" he may mean something a little more complex than "best predictions on the training data"---he may be talking about e.g. what is the most natural or simplest way of predicting, or which prediction would *in fact* do better if we extended the training data (with the expectation that sophisticated ML systems will converge to correctly modeling the full distribution rather than the training distribution, potentially for more subtle reasons like "they will use modeling strategies that tend to *actually* do the best thing"). That said, any of those alternative readings would be consistent with our decision to focus on inductive biases.

[\[78\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref78) I do think a sufficiently sophisticated AI may be able to convince a human to be uncertain under arbitrary conditions. In general when discussing these procedures I am imagining a technique like iterated amplification or debate in which we are effectively leveraging alignment as an inductive invariant in order to protect against this kind of manipulation. We won't go into details here, but hopefully this hope is plausible.

[\[79\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref79) In fact every model we've discussed performs learned optimization when it does inference in the predictor's Bayes net. What we are really interested in here are cases where the model performs optimization *which itself requires solving ELK* in the same way that the outer SGD requires solving ELK, presumably because it involves learning new potentially-opaque models.

[\[80\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref80) In general we don't really expect either the AI or the human to use Bayes nets, and an earlier version of this report discussed a much broader range of examples (including deduction in propositional logic), but as far as we can tell none of them qualitatively change the situation. If we were able to solve ELK for Bayes nets we would consider some other kinds of world models, and if early steps were successful (as anticipated) we would begin searching for the worst case model.

[\[81\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref81) Of course we'd want to check this kind of claim once handling simpler cases, but rough heuristic arguments and guesses seem useful when we are trying to identify the hardest part of the problem.

[\[82\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref82) This may sound like a lot, but we're talking about a predictor which is already solving a learning problem for every input. If ELK is competitive---if learning the reporter doesn't take much extra time beyond learning the predictor---then having to solve ELK won't slow it down much.

[\[83\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref83) If ELK involves human effort then it's not obvious that the reporter can actually implement this strategy. Instead, it needs to learn the relevant facts about humans and then use an alternative version of ELK that directly leverages that information in place of human labor.

[\[84\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref84) This identification is optimized so that humans using it produce a good outer reporter. The notion of "good" for the outer reporter would most likely be whatever objective we obtained from ELK, but in particular it would need to make good predictions on the outside. The hope is that "identify the inner learner" is easier to learn than "do SGD to learn a good inner reporter" (since the latter needs to identify the inner learner as well), and that the humans can take it from there.

[\[85\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref85) Because there is so much more computation in the optimization to find the reporter than in the final pass to answer the question.

[\[86\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref86) So it seems more likely that we will search for an approach to ontology identification and find it is reflective, than that we will search for something reflective and find that it solves ontology identification.

[\[87\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref87) Viewed properly the two kinds of learned optimization discussed in this section should probably be handled in the same way. But our current understanding, both of the question and of the answer, is sufficiently incomplete that we aren't confident about how to unify them. So for now it seems useful to keep thinking about multiple distinct concrete kinds of learned optimization. In practice we would like to try to solve ELK for either one of the examples discussed in this section, then consider the other, and finally try to move on to the hardest kind of learned optimization we can think of. We discuss both of these examples explicitly here in order to provide additional clarity about how we are thinking about learned optimization and to explain why we think the problem is likely to be soluble---we expect some readers will be most naturally concerned about one of these framings and others will be most naturally concerned about the other (and that most readers won't have any strong views about this topic and so probably won't even read this appendix).

[\[88\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref88) Though note that the inner model is no longer being optimized for a generative loss; it is being optimized for the IG loss. As we describe in [Appendix: problem statement](#ElicitingLatentKnowledge.xhtml#h.jk61tc933p1), we think the ELK problem statement applies just as well to any kind of model optimized by gradient descent, not just generative models.

[\[89\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref89) Indeed, it seems plausible to us that the techniques in this report are actually quite close to solving ontology identification and most of the complication comes from something that should be best viewed as this kind of learned optimization. We discuss this possibility more in [Appendix: regularization and imitative generalization](#ElicitingLatentKnowledge.xhtml#h.a54v0atju0fd).

[\[90\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref90) The algorithm run on the cognitive scratchspace could be something much more alien than inference in a Bayes net, for example the learned optimizer may itself design a new AI and run it on a computational aid. (The inner model could *also* be much more exotic, but it seems increasingly plausible if we imagine a learned optimizer who is smarter than the humans who built the original system.) In full generality, the learned model may effectively be asked to solve the alignment problem for some kind of AI very different from machine learning.

We are currently viewing this as a problem for future people (and AI)---we expect there to eventually be more effective paradigms for building AI, and some of those will look quite different from ML. We will need to solve alignment for all of those new kinds of AI in time to avoid trouble. Our view is that we just want to put future humans and AIs in a good position to solve these problems, and ensure that aligned AI systems are motivated to solve them. This view is described in somewhat more detail in [A possible stance for alignment research](https://ai-alignment.com/a-possible-stance-for-ai-control-research-fe9cf717fc1b).

From this perspective the only question is whether our AI is adequately motivated to try to solve the alignment problem for any new cognitive algorithms that it develops. We believe that imitative generalization would clearly meet this bar, but our other two solutions may not (it's very hard to say). Our current position is that this is an important issue, but that it is considerably more exotic than the other problems we are thinking about and we would like to return to it once we know how we will solve the foreseeable problems (since then we can think productively about whether our solutions will generalize).

[\[91\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref91) Technically this isn't really necessary, since e.g. the predictor might describe even simpler initial conditions from which humans evolve or some other learning procedure which can produce humans from simpler data. But those cases can plausibly be handled in the same way as other learned optimization, see [Appendix: learned optimizers](#ElicitingLatentKnowledge.xhtml#h.3f3phmjt4uvn).

[\[92\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref92) We'll use l2 norm throughout to capture a complexity regularizer, but this is probably not the most realistic strategy and it would need to be done carefully (e.g. with attention to the architecture).

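Concretely, the regularized objective the footnote above has in mind is something like the following sketch, where \(\lambda\) is a hypothetical coefficient and \(\mathcal{L}_{\text{task}}\) stands for whatever loss is being optimized:

```latex
\mathcal{L}(\theta) \;=\; \mathcal{L}_{\text{task}}(\theta) \;+\; \lambda\, \lVert \theta \rVert_2^2
```
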
\n\n\n\n[\\[93\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref93){#ElicitingLatentKnowledge.xhtml#ftnt93} [ This is very similar to the algorithm structure introduced in ]{.c21} [ [Answering questions honestly instead of predicting human answers](https://www.google.com/url?q=https://www.alignmentforum.org/posts/QqwZ7cwEA2cxFEAun/teaching-ml-to-answer-questions-honestly-instead-of&sa=D&source=editors&ust=1646948967049236&usg=AOvVaw1qNmWpSeCHd3QgU-O72_9E){.c9} ]{.c13 .c21} [ and explored in ]{.c21} [ [this followup](https://www.google.com/url?q=https://www.alignmentforum.org/posts/gEw8ig38mCGjia7dj/answering-questions-honestly-instead-of-predicting-human&sa=D&source=editors&ust=1646948967049714&usg=AOvVaw2Y7rQQ9H_IZJSh2FDL9_KO){.c9} ]{.c13 .c21} [.]{.c11}\n\n
\n\n\n\n[\\[94\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref94){#ElicitingLatentKnowledge.xhtml#ftnt94} [ This is also important for correctness of the algorithm---otherwise the reporter can learn to make the consistency optimization problem intractable and thereby improve its own loss. A real algorithm along these lines would need to be based on a detailed local analysis, effectively looking at how much the reporter compresses each step of SGD rather than trying to compress the end result.]{.c11}\n\n
\n\n\n\n[\\[95\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref95){#ElicitingLatentKnowledge.xhtml#ftnt95} [ Realistically we are more interested in conditional probabilities. We could estimate such probabilities by just taking joint distributions and conditioning them, but to be efficient we would need to use some more effective strategy for sampling worlds where the condition is satisfied.]{.c11}\n\n
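As a purely illustrative sketch of the naive "condition the joint distribution" strategy (and of why it is inefficient when the conditioning event is rare), assuming hypothetical helpers `sample_world`, `event_a`, and `event_b`:

```python
def estimate_conditional(sample_world, event_a, event_b, n=100_000):
    """Estimate P(A | B) by sampling the joint distribution and conditioning.

    sample_world() draws a world from the predictor's joint distribution, and
    event_a / event_b are predicates on sampled worlds (all illustrative names).
    Most samples are wasted whenever B is rare, which is why a more effective
    strategy for sampling worlds where the condition holds would be needed.
    """
    count_b, count_ab = 0, 0
    for _ in range(n):
        world = sample_world()
        if event_b(world):
            count_b += 1
            if event_a(world):
                count_ab += 1
    return count_ab / count_b if count_b else float("nan")
```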
\n\n\n\n[\\[96\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref96){#ElicitingLatentKnowledge.xhtml#ftnt96} [ 1% would obviously be an unrealistically large probability for random actions, but this doesn't change the example.]{.c11}\n\n
\n\n\n\n[\\[97\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref97){#ElicitingLatentKnowledge.xhtml#ftnt97} [ This might take a lot of work.]{.c11}\n\n
\n\n\n\n[\\[98\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref98){#ElicitingLatentKnowledge.xhtml#ftnt98} [ The human simulator is using the predictor's probabilities for future video, so its reported probabilities can be computed by P]{.c21} [H]{.c21 .c26} [(tampering \\| X, diamond appears safe) \\* P]{.c21} [AI]{.c21 .c26} [(diamond appears safe \\| X) = 50% \\* 50% = 25%.]{.c11}\n\n
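For readers who want the arithmetic spelled out, here is the same calculation written as a few lines of Python (the 50% figures are just the values used in this example):

```python
# P_H(tampering | X, diamond appears safe): the human's conditional belief.
p_h_tampering_given_looks_safe = 0.50
# P_AI(diamond appears safe | X): the predictor's probability of the observation.
p_ai_looks_safe = 0.50

# The human simulator composes the two, so its reported probability of
# "tampering and the diamond appearing safe" is their product.
p_reported = p_h_tampering_given_looks_safe * p_ai_looks_safe
print(p_reported)  # 0.25
```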
\n\n\n\n[\\[99\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref99){#ElicitingLatentKnowledge.xhtml#ftnt99} [ In reality the two events wouldn't be independent---maybe the SmartVault only has resources to do one or the other. But regardless of what the direct translator says, the human simulator will always imply a ]{.c21} [larger]{.c21 .c23} [ negative correlation, for any X such that P]{.c21} [AI]{.c21 .c26} [(diamond looks safe\\|X) \\> P]{.c21} [H]{.c21 .c26} [(diamond looks safe\\|X).]{.c11}\n\n
\n\n\n\n[\\[100\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref100){#ElicitingLatentKnowledge.xhtml#ftnt100} [ In some cases the correlations are negative and in some cases they are positive. For the human simulator, we can predict the direction of the correlation conditioned on any event X by first checking whether the human over- or under-estimates the probability of the diamond appearing safe conditioned on X. We can then penalize whichever direction is expected of the human simulator. We can check whether the human model over- or under-estimates the probability simply by querying the reporter on a number of randomly sampled inputs.]{.c11}\n\n
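A minimal sketch of that check, under the simplifying assumption that we can query both the reporter (standing in for the human model) and the predictor for the probability that the diamond appears safe; `reporter_prob_safe`, `predictor_prob_safe`, and `sample_input` are illustrative names:

```python
def correlation_direction_to_penalize(reporter_prob_safe, predictor_prob_safe,
                                      sample_input, n=1000):
    """Decide which sign of correlation to penalize for a suspected human simulator.

    reporter_prob_safe(x) stands in for the human model's P(diamond appears safe | x),
    predictor_prob_safe(x) for the predictor's, and sample_input() draws a random
    input. If the human model tends to under-estimate this probability relative to
    the predictor, the human simulator is expected to imply an extra negative
    correlation (as in the earlier footnote about P_AI > P_H), so we penalize
    negative correlations; otherwise we penalize positive ones.
    """
    xs = [sample_input() for _ in range(n)]
    under_estimates = sum(predictor_prob_safe(x) > reporter_prob_safe(x) for x in xs)
    return "negative" if under_estimates > n / 2 else "positive"
```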
\n\n\n\n[\\[101\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref101){#ElicitingLatentKnowledge.xhtml#ftnt101} [ This cheating strategy would actually appear \"more consistent\" than direct translation, because it can artificially introduce correlations that point in the opposite direction from what was expected of human imitation. Ideally a consistency loss would assign the minimum possible loss to direct translation, although that seems difficult given that the human's Bayes net actually does have systematic errors. We have very little idea whether this is a dealbreaker for consistency-based approaches.]{.c11}\n\n
\n\n\n\n[\\[102\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref102){#ElicitingLatentKnowledge.xhtml#ftnt102} [ Or perhaps matching some properties of them\\-\\--maybe the human thinks that P(A\\|B) \\> P(A\\| not B), or that P(A\\|B) is in the interval \\[0.2, 0.6\\] but they aren't sure where, or that A and C are close to conditionally independent given B, or so on.]{.c11}\n\n
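As a purely illustrative sketch of such partial constraints (with `p` a hypothetical function returning the reporter's probability for a query):

```python
def partial_belief_constraints(p):
    # p(query) is a hypothetical function returning the reporter's probability for
    # a question; each entry is one partial property of the human's beliefs of the
    # kind mentioned above, rather than a full probability distribution.
    return [
        p("A | B") > p("A | not B"),                              # qualitative comparison
        0.2 <= p("A | B") <= 0.6,                                  # interval constraint
        abs(p("A and C | B") - p("A | B") * p("C | B")) < 0.05,    # approximate conditional independence of A and C given B
    ]
```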
\n\n\n\n[\\[103\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref103){#ElicitingLatentKnowledge.xhtml#ftnt103} [ Or information from which it was able to successfully figure out how to do direct translation.]{.c11}\n\n
\n\n\n\n[\\[104\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref104){#ElicitingLatentKnowledge.xhtml#ftnt104} [ We could go slightly further in this direction by specifying a property that the reporter ought to satisfy and then paying for the complexity of the property plus the complexity of the reporter given consistency and that property. And finally we can soften the learned predicate by using a learned regularizer instead.]{.c11}\n\n
\n\n\n\n[\\[105\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref105){#ElicitingLatentKnowledge.xhtml#ftnt105} [ You could either imagine specifying the \"facts about diamond\" as part of the reporter specification, so that the reporter is basically the description of a learning algorithm it wants to run, or having the reporter just leave some \"holes\" that then get filled in to make things consistent as judged by the human overseer. Both approaches have advantages and disadvantages, but the core difficulty discussed in this section applies to both of them.]{.c11}\n\n
\n\n\n\n[\\[106\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref106){#ElicitingLatentKnowledge.xhtml#ftnt106} [ Most of the approaches in this report could potentially be meta-learned by an ML system that was only trying to minimize complexity. We can view the proposal in this section as a tweak to the architecture of the reporter: an inductive bias in favor of doing this bilevel optimization, and more importantly the ability for the reporter to compress itself / do a bunch of computation before it actually sees the situation about which it is supposed to answer questions (so that it doesn't have to redo that potentially-very-long computation every time).]{.c11}\n\n
\n\n\n\n[\\[107\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref107){#ElicitingLatentKnowledge.xhtml#ftnt107} [ Slightly more precisely, the human simulator is consistent because of two facts in combination: (a) it searches for consistent stories, reproducing all the relationships \\"behind the scenes\\" that the human expects, and (b) the observations in fact have the correlations which the human expects.]{.c11}\n\n
\n\n\n\n[\\[108\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref108){#ElicitingLatentKnowledge.xhtml#ftnt108} [ If we had time we would also like to run experiments with different methods to understand when and where they work, and to \"red team\" particular proposals by evaluating them with weaker training data against held out overseers, and so on. But we still think that generating lots of candidates is valuable grist for experiments and that combining many options into an ensemble is likely to be important given uncertainty about how to extrapolate from experiments. Without ensembling we would probably advocate a ]{.c21} [much]{.c21 .c23} [ more conservative approach than what is described in this section.]{.c11}\n\n
\n\n\n\n[\\[109\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref109){#ElicitingLatentKnowledge.xhtml#ftnt109} [ More generally, if any of them report that an observation or inference is mistaken, then we shouldn't trust that inference.]{.c11}\n\n
\n\n\n\n[\\[110\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref110){#ElicitingLatentKnowledge.xhtml#ftnt110} [ It's also worth varying architectures, optimization algorithms, ]{.c21} [etc.]{.c21 .c23}\n\n
\n\n\n\n[\\[111\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref111){#ElicitingLatentKnowledge.xhtml#ftnt111} [ Of course \"complexity\" can be defined in many different ways, which we should try varying.]{.c11}\n\n
\n\n\n\n[\\[112\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref112){#ElicitingLatentKnowledge.xhtml#ftnt112} [ We considered titling this report \"narrow ontology identification\" but after discussion with other researchers in alignment (including at MIRI) decided that the differences in focus were large enough that it was worth using a new term that would be more evocative for an audience thinking primarily about ML. We also think that it is very hard to state ontology identification precisely as a problem (since we don't have a well-defined way to separate \"your AI learns a model of the world and does inference in it\" from the kinds of cases described in ]{.c21} [ [Appendix: learned optimizers](#ElicitingLatentKnowledge.xhtml#h.3f3phmjt4uvn){.c9} ]{.c13 .c21} [) ]{.c21} [and so slightly prefer the broader problem ELK. The argument on the flip side is that this statement of ELK is broad enough that it likely requires resolving many other difficulties, and so in practice \"ontology identification\" may be a more productively narrow research focus.]{.c11}\n\n
\n\n\n\n[\\[113\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref113){#ElicitingLatentKnowledge.xhtml#ftnt113} [ Consider a world in state X where the sensors have been tampered with so that the world looks to the human like state Y. The bisimulation objective is optimized when X is mapped to Y, rather than to \\"a situation where the sensors look like Y,\\" because this exactly reproduces the human dynamics. But that's the behavior we want to avoid.]{.c11}\n\n
\n\n\n\n[\\[114\\]](#ElicitingLatentKnowledge.xhtml#ftnt_ref114){#ElicitingLatentKnowledge.xhtml#ftnt114} [ Other researchers are interested in different aspects of this problem, for example what kinds of interpretations are needed for existing models, and how we should search for explanations of what neural networks are doing.]{.c11}\n\n
+{"source": "markdown.ebooks", "source_type": "markdown", "title": "EAI 2022 AGI Safety Fundamentals alignment curriculum", "authors": "EleutherAI, Richard Ngo", "date_published": "2022-05-16", "text": "### **EleutherAI Alignment 101** **Curriculum**\n\nOverview\n--------\n\nMeeting Times: {Fill-in}\n\nThe first iteration of the reading group has [[concluded]{.ul}](https://docs.google.com/document/d/1W-V7hCr9JEnoi_SG6occL8SztkcmXhW9-FbP7tOki6c/edit?usp=sharing). Future iterations will probably happen at some point.\n\nEach week the group will meet for 1.5 hours to discuss the readings and exercises. Broadly speaking, the first half of the course explores the motivations and arguments underpinning the field of AGI safety, while the second half focuses on proposals for technical solutions.\n\nThe main focus each week will be on the core readings and one exercise **of your choice** out of the two exercises listed, for which you should allocate **around 2 hours preparation time.** Most people find some concepts from the readings confusing, but **that's totally fine** - resolving those uncertainties is what the discussion groups are for. Approximate times taken to read each piece in depth are listed next to them. **Note that in some cases only a small section of the linked reading is assigned**. In several cases, blog posts about machine learning papers are listed instead of the papers themselves; you're only expected to read the blog posts, but for those with strong ML backgrounds reading the paper versions might be worthwhile.\n\nThis curriculum is forked from Richard Ngo's AGI Safety Fundamentals curriculum, but has since accumulated a lot of changes.\n\n> [EleutherAI Alignment 101 Curriculum](#eleutherai-alignment-101-curriculum) 1\n\n**[Overview](#overview) 1**\n\n**[Full curriculum](#full-curriculum) 2**\n\n> [Week 1: Intro and Motivation](#week-1-intro-and-motivation) 2\n>\n> [Week 2: Goals and misalignment: Outer Alignment](#week-2-goals-and-misalignment-outer-alignment) 5\n>\n> [Week 3: Goals and misalignment: Inner Alignment](#week-3-goals-and-misalignment-inner-alignment) 7\n>\n> [Week ?: Timelines and Threat models](#week-timelines-and-threat-models) 10\n>\n> [Week 4: Learning from humans](#week-4-learning-from-humans) 12\n>\n> [Week 5: Factored Cognition for Outer Alignment](#week-5-factored-cognition-for-outer-alignment) 16\n>\n> [Week 6: Interpretability & ELK](#week-6-interpretability-elk) 19\n>\n> [Week 7: Agent Foundations and Embedded Agency](#week-7-agent-foundations-and-embedded-agency) 21\n>\n> [Week 8: Wrap Up & Bigger Picture](#week-8-wrap-up-bigger-picture) 23\n>\n> [Week TBD: AI governance, and careers in alignment research](#week-tbd-ai-governance-and-careers-in-alignment-research) 24\n>\n> [Week TBD (four weeks later): Projects](#week-tbd-four-weeks-later-projects) 27\n>\n> [Tentative Ideas](#tentative-ideas) 27\n>\n> [Projects overview](#projects-overview) 27\n>\n> [Timings](#timings) 27\n>\n> [Format](#format) 27\n>\n> [Ideas](#ideas) 27\n>\n> [Further resources](#further-resources) 29\n\nFull curriculum\n---------------\n\n### Week 1: Intro and Motivation\n\nThe first week focuses on a brief introduction to the basic ideas of AGI and is intended to provide motivation for why we care about alignment and AGI. TODO: write more here\n\nSlides: [[Week 1.5 EAI-ARG]{.ul}](https://docs.google.com/presentation/d/1cc0oUOItfOyQYcPBp7I9-zHFEUfrpcoZSCE3EVsiJXQ/edit?usp=sharing)\n\nTODO: Notebooks for each week\n\nCore readings:\n\n1. 
[[Deadly Truth of General AI (Rob Miles, 2015)]{.ul}](https://www.youtube.com/watch?v=tcdVC4e6EV4) (10 mins)\n\n2. [[\"There's no fire alarm for AGI\" (Yudkowsky, 2017)]{.ul}](https://intelligence.org/2017/10/13/fire-alarm/) (35 mins)\n\n a. Provides the case for why we should work on alignment even if it's very hard to forecast when AGI will happen, and why we shouldn't say \"we'll start working on AGI once it's closer\".\n\n3. [[The Bitter Lesson (Sutton, 2019)]{.ul}](http://www.incompleteideas.net/IncIdeas/BitterLesson.html) (15 mins)\n\n a. The harsh reality of recent AI progress is that general methods which scale with increased compute outperform (expert) human-knowledge-based approaches.\n\n4. [[AGI safety from first principles (Ngo, 2020)]{.ul}](https://drive.google.com/file/d/1uK7NhdSKprQKZnRjU58X7NLA1auXlWHt/view) **(from section 1 to end of 2.1)** (20 mins)\n\n - Narrow task-based AI and general AI form the tails of a spectrum of artificially intelligent systems. This short read distinguishes their features.\n\n5. [[\"We choose to align AI\" (Wentworth, 2022)]{.ul}](https://www.lesswrong.com/posts/BseaxjsiDPKvGtDrm/we-choose-to-align-ai) (5 mins)\n\n - A fun and short motivational post\n\nFurther readings for Week 1:\n\n1. [[AI: racing towards the brink (Harris and Yudkowsky, 2018)]{.ul}](https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/) (110 mins) ([[audio here]{.ul}](https://www.youtube.com/watch?v=aS9cK0J5JXI))\n\n - Transcript of a podcast conversation between Sam Harris and Eliezer Yudkowsky. Probably the best standalone resource for introducing AGI risk; covers many of the topics from this week and next week.\n\n2. [[General intelligence (Yudkowsky, 2017)]{.ul}](https://arbital.com/p/general_intelligence/) (25 mins)\n\n - Yudkowsky provides a compelling explanation of the importance of the concept of general intelligence.\n\n3. More is different for AI (Steinhardt, 2022) ([[introduction]{.ul}](https://bounded-regret.ghost.io/more-is-different-for-ai/), [[second post]{.ul}](https://bounded-regret.ghost.io/future-ml-systems-will-be-qualitatively-different/), [[third post]{.ul}](https://bounded-regret.ghost.io/thought-experiments-provide-a-third-anchor/)) (20 mins)\n\n - Steinhardt argues that, in machine learning, novel behaviours tend to emerge at larger scales, which are difficult to predict using standard approaches. Particularly recommended for those who are concerned that discussions of AGI are ungrounded or speculative.\n\n4. [[The power of intelligence (Yudkowsky, 2007)]{.ul}](https://intelligence.org/2007/07/10/the-power-of-intelligence/) (10 mins)\n\n - One possible objection is that AGI is a confused concept, because \"intelligence\" is not a single unified property. Yudkowsky addresses this point, again drawing on the analogy from humans.\n\n5. [[Summary of Drexler's Reframing Superintelligence report (Shah, 2019)]{.ul}](https://www.alignmentforum.org/posts/x3fNwSe5aWZb5yXEG/reframing-superintelligence-comprehensive-ai-services-as) (10 mins)\n\n - It's also worth considering ways the field of AI might advance without leading to AGI, e.g. via developing many narrow AIs which can each perform a small set of tasks very well. Drexler lays out one way this might happen in his Reframing Superintelligence report (summarised by Shah).\n\n6. 
[[Understanding human intelligence through human limitations (Griffiths, 2020)]{.ul}](https://arxiv.org/abs/2009.14050) (40 mins)\n\n - Griffiths provides a framework for thinking about ways in which machine intelligence might differ from human intelligence.\n\n7. [[AI and compute: how much longer can computing power drive AI progress? (Lohn and Musser, 2022)]{.ul}](https://cset.georgetown.edu/wp-content/uploads/AI-and-Compute-How-Much-Longer-Can-Computing-Power-Drive-Artificial-Intelligence-Progress.pdf) (30 mins)\n\n - This and the next two readings focus on forecasting progress in AI, via looking at trends in compute and algorithms, and surveying expert opinions.\n\n8. [[AI and efficiency (Hernandez and Brown, 2020)]{.ul}](https://openai.com/blog/ai-and-efficiency/) (15 mins)\n\n - See above.\n\n9. [[When will AI exceed human performance? Evidence from AI experts (Grace et al., 2017)]{.ul}](https://arxiv.org/pdf/1705.08807.pdf) (15 mins)\n\n - See above.\n\nExercises:\n\n1. A crucial feature of AGI is that it will possess cognitive skills which are useful across a range of tasks, rather than just the tasks it was trained to perform. Which cognitive skills did humans evolve because they were useful in our ancestral environments, which have remained useful in our modern environment? Which have become less useful?\n\n2. What are the most plausible ways for the hypothesis \"we will eventually build AGIs which have transformative impacts on the world\" to be false? How likely are they?\n\nNotes:\n\n1. Instead of AGI, some people use the terms \"human-level AI\" or \"strong AI\". \"Superintelligence\" refers to AGI which is far beyond human-level intelligence. The opposite of general AI is called *narrow* AI. In his \"Most important century\" series, Karnofsky focuses on AI which will automate the process of scientific and technological advancement (which he gives the acronym PASTA) - this seems closely related to the concept of AGI, but without some additional connotations that the latter carries.\n\n2. Most of the content in this curriculum doesn't depend on strong claims about when AGI will arise, so try to avoid focusing disproportionately on the reading about timelines during discussions. However, I expect that it would be useful for participants to consider which evidence would change their current expectations in either direction. Note that the forecasts produced by the biological anchors method are fairly consistent with the survey of expert opinions carried out by Grace et al. (2017).\n\nDiscussion prompts:\n\n1. Ngo (2020) opens with a definition of intelligence as the ability to perform well at a wide range of cognitive tasks. What are some advantages and disadvantages of this definition?\n\n2. Here's an alternative way of describing general intelligence: whatever mental skills humans have that allow us to build technology and civilization (in contrast to other animals). What do you think about this characterisation?\n\n3. One intuition for how to think about very smart AIs: imagine speeding up human intellectual development by a factor of X. What do you expect a human civilization to know by 2100 or 2200? If an AI could do the same quality of research, but 10 or 100 times faster, how would you use it?\n\n4. How frequently do humans build technologies where some of the details of why they work aren't understood by anyone? Would it be very surprising if we built AGI without understanding very much about how its thinking process works?\n\n5. 
Thinking about AGI involves reasoning about entities smarter than us, and a future technology that doesn't exist yet. What problems does this introduce, and how should we respond to them?\n\n### \n\n### Week 2: Goals and misalignment: Outer Alignment\n\nThis week we'll focus on how and why AGIs might develop goals that are *misaligned* with those of humans, in particular when they've been trained using machine learning. We cover several core ideas from alignment: instrumental convergence, the orthogonality thesis, and Goodhart's Law.\n\nSlides: [[Week 2 EAI-ARG]{.ul}](https://docs.google.com/presentation/d/1kRnZxYRlJcJIIESEw5wINxWuHf0mben-Yih1sDjKdiU/edit?usp=sharing)\n\nCore readings:\n\n1. [[Why Would AI Want to do Bad Things? Instrumental Convergence]{.ul}](https://www.youtube.com/watch?v=ZeecOKBus3Q) (10 mins)\n\n2. [[Intelligence and Stupidity: The Orthogonality Thesis]{.ul}](https://www.youtube.com/watch?v=hEUO6pjwFOo) (15 mins)\n\n- **Text alternative for the two videos above, for those who don't like videos:**\n\n - [[Superintelligence, Chapter 7: The superintelligent will (Bostrom, 2014)]{.ul}](https://drive.google.com/file/d/1FVl9W2gW5_8ODYNZJ4nuFg79Z-_xkHkJ/view?usp=sharing) (35 mins)\n\n```{=html}\n\n```\n- Bostrom outlines 2 theses on the relationship between intelligence and motivation in an artificial agent; namely the orthogonality thesis and instrumental convergence.\n\n3. [[AI \"Stop Button\" Problem - Computerphile]{.ul}](https://www.youtube.com/watch?v=3TYT1QfdfsM) (20 mins)\n\n- **Text alternative for the video above, for those who don't like videos:**\n\n - **todo:**\n\n4. [[Specification gaming: the flip side of AI ingenuity (Krakovna et al., 2020)]{.ul}](https://medium.com/@deepmindsafetyresearch/specification-gaming-the-flip-side-of-ai-ingenuity-c85bdb0deeb4) (15 mins)\n\n - Specification Gaming is one example of Goodhart's Law in action in practice\n\nExercises:\n\n1. By some definitions, a chess AI has the goal of winning. When is it useful to describe it that way? What are the key differences between human goals and the \"goals\" of a chess AI?\n\nNotes:\n\nDiscussion prompts:\n\n1. Christiano (2018) defined alignment as follows: \"an AI A is aligned with an operator H if A is trying to do what H wants it to do\". Some questions about this:\n\n a. What's the most natural way to interpret \"what the human wants\" - what they say, or what they think, or what they would think if they thought about it for much longer?\n\n b. How should we define an AI being aligned to a group of humans, rather than an individual?\n\n2. Does it make sense to talk about corporations and countries having goals? Does it matter that these consist of many different people, or can we treat them as agents with goals in a similar way to individual humans?\n\n3. Did Bostrom miss any important convergent instrumental goals? (His current list: self-preservation, goal-content integrity, cognitive enhancement, technological perfection, resource acquisition.) One way of thinking about this might be to consider which goals humans regularly pursue and why.\n\n4. By some definitions, a chess AI has the goal of winning. When is it useful to describe it that way? What are the key differences between human goals and the \"goals\" of a chess AI?\n\n a. The same questions, but for corporations and countries instead of chess AIs. Does it matter that these consist of many different people, or can we treat them as agents with goals in a similar way to individual humans?\n\n5. 
Suppose that we want to build a highly intelligent AGI that is myopic, in the sense that it only cares about what happens over the next day or week. Would such an agent still have convergent instrumental goals? What factors might make it easier or harder to train a myopic AGI than a non-myopic AGI?\n\n### \n\n### Week 3: Goals and misalignment: Inner Alignment\n\nWhen the models learned by our ML systems are themselves optimizers, it's possible for their objective to become misaligned with our objective.\n\nHubinger et al. (2019a) argue that even an agent trained on the \"right\" reward function might acquire undesirable goals - the problem of *inner alignment*. Carlsmith (2021) explores in more detail what it means for an agent to be goal-directed in a worrying way, and gives reasons why such agents seem likely to arise.\n\nIn the worst case, inner misalignment can lead to deceptive alignment, where our models pretend to be aligned so we approve of them, before performing a treacherous turn in deployment.\n\nSlides: [[Week 2.5 EAI-ARG]{.ul}](https://docs.google.com/presentation/d/10zHGheCrnoCsMmy54JMaJUayJmu2QtoDx-HiqBkvM9Q/edit?usp=sharing)\n\nCore readings:\n\n1. [[The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment]{.ul}](https://www.youtube.com/watch?v=bJLcIBixGj8&feature=youtu.be) (25 mins)\n\n    - This video gives a more accessible introduction to the inner alignment problem, as discussed in Hubinger et al. (2019a).\n\n2. [[Deceptive Misaligned Mesa-Optimisers? It's More Likely Than You Think\\...]{.ul}](https://www.youtube.com/watch?v=IeWljQw3UgQ) (10 min)\n\n3. [[Introduction to Risks from Learned Optimisation (Hubinger et al., 2019a)]{.ul}](https://www.alignmentforum.org/posts/FkgsxrGf3QxhfLWHG/risks-from-learned-optimization-introduction) (20 mins)\n\n4. [[Risks from Learned Optimisation: The Inner Alignment Problem (Hubinger et al., 2019b)]{.ul}](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB/p/pL56xPoniLvtMDQ4J) (25 mins)\n\n    - Follows on from Hubinger et al. (2019a) to explain the inner alignment problem in more depth.\n\n5. [[Risks from Learned Optimisation: Deceptive alignment (Hubinger et al., 2019)]{.ul}](https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment) (30 mins)\n\nFurther readings for both week 2 and 2.5:\n\n6. [[Why alignment could be hard with modern deep learning (Cotra, 2021)]{.ul}](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/) (25 mins)\n\n    - Cotra presents one broad framing for why achieving alignment might be hard, tying together the ideas from the core readings in a more accessible way.\n\n7. [[Optimal policies tend to seek power (Turner et al., 2021)]{.ul}](https://neurips.cc/virtual/2021/poster/28400) (15 mins)\n\n    - Turner et al. help to flesh out the arguments from Bostrom (2014) by formalising the notion of power-seeking in the reinforcement learning context, and proving that many agents end up power-seeking. (See also the corresponding [[blog post]{.ul}](https://www.alignmentforum.org/posts/6DuJxY8X45Sco4bS2/seeking-power-is-often-convergently-instrumental-in-mdps) and [[paper]{.ul}](https://arxiv.org/abs/1912.01683).)\n\n8. 
[[Distinguishing claims about training vs deployment (Ngo, 2021)]{.ul}](https://www.alignmentforum.org/posts/L9HcyaiWBLYe7vXid/distinguishing-claims-about-training-vs-deployment) (15 mins)\n\n - Ngo updates many of the concepts in Bostrom (2014) to reflect the context of modern machine learning.\n\n9. [[Objective robustness in deep reinforcement learning (Koch et al., 2021)]{.ul}](https://arxiv.org/abs/2105.14111) (30 mins)\n\n - Koch et al. provide some toy examples where agents learn to score highly on proxies for their training reward function, rather than generalising their intended objective to new environments.\n\n10. [[AGI safety from first principles (Ngo, 2020)]{.ul}](https://drive.google.com/file/d/1uK7NhdSKprQKZnRjU58X7NLA1auXlWHt/view) **(only section 3: Goals and Agency)** (30 mins)\n\n - This and the next reading explore the concept of goal-directedness. Their arguments are far from conclusive, but do suggest that building goal-directed agents will be the default outcome unless we specifically try to do otherwise.\n\n11. [[Why tool AIs want to be agent AIs (Branwen, 2016)]{.ul}](https://www.gwern.net/Tool-AI) (45 mins)\n\n - See above.\n\n12. [[Ngo and Yudkowsky on alignment difficulty (Ngo and Yudkowsky, 2021)]{.ul}](https://www.alignmentforum.org/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty) (150 mins)\n\n - Ngo and Yudkowsky have an in-depth debate on the underlying reasons we might expect alignment to be easy or hard. The debate is very long, and takes place at a high level of abstraction - it's mainly recommended for those who are already very comfortable with the other arguments discussed this week.\n\n13. [[Is power-seeking AI an existential risk? (Carlsmith, 2021)]{.ul}](https://docs.google.com/document/d/1smaI1lagHHcrhoi6ohdq3TYIZv0eNWWZMPEy8C8byYg/edit#heading=h.lvsab2uecgk4) **(only sections 2: Timelines and 3: Incentives)** (25 mins)\n\n14. [[Clarifying \"AI alignment\" (Christiano, 2018)]{.ul}](https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6) (10 mins)\n\n15. [[Inner Alignment: Explain Like I'm 12 Edition (Rafael Harth, 2020)]{.ul}](https://www.lesswrong.com/posts/AHhCrJ2KpTjsCSwbt/inner-alignment-explain-like-i-m-12-edition) (15 mins)\n\n16. [[Deceptively Aligned Mesa-Optimizers: It's Not Funny If I Have To Explain It]{.ul}](https://astralcodexten.substack.com/p/deceptively-aligned-mesa-optimizers)\n\n17. [[Goodhart Taxonomy - LessWrong]{.ul}](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy) (week 2 further reading)\n\nExercises:\n\n1. Why is it not appropriate to describe the specification gaming agents from last week's reading in Krakovna et al. (2020) as displaying inner alignment failures?\n\nNotes:\n\n1. The core idea of inner alignment is that, although the reward function is used to update the agent's behaviour based on how well it performs tasks during training, agents don't need to refer to the reward function while carrying out any given task (e.g. playing an individual game of Starcraft). So the motivations which drive the agent's behaviour during tasks need not be closely related to their reward function. The best thought experiments to help understand this are cases where the reward function is strongly correlated with some proxy objective - e.g. rewarding agents for surviving, which then leads them to acquire the goal of eating food (as humans did). More generally the example of humans is a very useful one when discussing inner alignment.\n\nDiscussion prompts:\n\n1. 
To what extent are humans inner-misaligned with respect to evolution? How can you tell, and what might similar indicators look like in AGIs?\n\n### \n\n### Week ?: Timelines and Threat models\n\nHow might misaligned AGIs cause existential catastrophes, and how might we stop them? Two threat models are outlined in Christiano (2019) - the first focusing on outer misalignment, the second on inner misalignment. Muehlhauser and Salamon (2012) outline a core intuition for why we might be unable to prevent these risks: that progress in AI will at some point speed up dramatically. A third key intuition - that misaligned agents will try to deceive humans - is explored by Hubinger et al. (2019).\n\nHow might we prevent these scenarios? Christiano (2020) gives a broad overview of the landscape of different contributions to making AIs aligned, with a particular focus on some of the techniques we'll be covering in later weeks.\n\nSlides: [[Week 3 EAI-ARG]{.ul}](https://docs.google.com/presentation/d/1SC8WWOoeJIyROiWvP3zMasY3FHBvwS_cL_be9hAoxdU/edit?usp=sharing)\n\nCore readings:\n\n1. [[What failure looks like (Christiano, 2019)]{.ul}](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like) (20 mins)\n\n2. [[Intelligence explosion: evidence and import (Muehlhauser and Salamon, 2012)]{.ul}](https://drive.google.com/file/d/1QxMuScnYvyq-XmxYeqBRHKz7cZoOosHr/view?usp=sharing) **(only pages 10-15)** (15 mins)\n\n3. [[AI alignment landscape (Christiano, 2020)]{.ul}](https://forum.effectivealtruism.org/posts/63stBTw3WAW6k45dY/paul-christiano-current-work-in-ai-alignment) (35 mins)\n\nFurther readings:\n\n1. [[Unsolved problems in ML safety (Hendrycks et al., 2021)]{.ul}](https://arxiv.org/abs/2109.13916) (50 mins)\n\n - Hendryks et al. provide an overview of open problems in safety which focuses more on links to mainstream ML.\n\n2. [[Takeoff speeds (Christiano, 2018)]{.ul}](https://sideways-view.com/2018/02/24/takeoff-speeds/) (35 mins)\n\n - In response to Yudkowsky's (2015) argument that there will be a sharp \"intelligence explosion\", Christiano argues that the rate of progress will instead increase continuously over time. However, there is less distance between these positions than there may seem: Christiano still expects self-improving AI to eventually cause incredibly rapid growth.\n\n3. [[Clarifying \"What failure looks like\" (part 1) (Clarke, 2020)]{.ul}](https://www.lesswrong.com/posts/v6Q7T335KCMxujhZu/clarifying-what-failure-looks-like-part-1) (30 mins)\n\n - Part 1 of What Failure Looks Like depends heavily on coordination failures driven by capitalism, in a way that is quite different from earlier AI risk narratives from Bostrom and Yudkowsky. Clarke (2020) clarifies this scenario and its underlying assumptions.\n\n4. [[What multipolar failure looks like (Critch, 2021)]{.ul}](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic) (45 mins)\n\n - This and the next reading give further threat scenarios also motivated by the possibility of serious coordination failures.\n\n5. [[Another outer alignment failure story (Christiano, 2021)]{.ul}](https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story) (20 mins)\n\n - See above.\n\n6. 
[[Long-term growth as a sequence of exponential modes (Hanson, 2000)]{.ul}](https://mason.gmu.edu/~rhanson/longgrow.pdf) (40 mins)\n\n - Hanson uses historical data to predict a shift to a new \"growth mode\" in which the economy doubles every few weeks, which he considers a more plausible outcome of AI progress than an intelligence explosion. Again, although there has been [[substantial debate between Hanson and Yudkowsky]{.ul}](https://intelligence.org/ai-foom-debate/) about the plausibility of an intelligence explosion, don't overestimate the extent of their disagreement: both of them expect technological progress to speed up to a much greater extent than most forecasters.\n\n7. [[Eight claims about multi-agent AGI safety (Ngo, 2019)]{.ul}](https://www.alignmentforum.org/posts/dSAJdi99XmqftqXXq/eight-claims-about-multi-agent-agi-safety) (10 mins)\n\n - Ngo distinguishes eight different claims about how multi-agent dynamics might affect the safety of either training or deploying AGIs.\n\n8. [[Value is fragile (Yudkowsky, 2009)]{.ul}](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile) (15 mins)\n\n - Yudkowsky argues that even small inaccuracies in aligning AIs to human values may lead to catastrophic consequences.\n\nExercises:\n\n1. Christiano's \"influence-seeking systems\" threat model in *What Failure Looks Like* is in some ways analogous to profit-seeking companies. What are the most important mechanisms preventing companies from catastrophic misbehaviour? Which of those would and wouldn't apply to influence-seeking AIs?\n\n2. What are the individual tasks involved in machine learning research (or some other type of research important for technological progress)? Identify the parts of the process which have already been automated, the parts of the process which seem like they could plausibly soon be automated, and the parts of the process which seem hardest to automate.\n\nDiscussion prompts:\n\n1. What are the biggest vulnerabilities in human civilisation that might be exploited by misaligned AGIs? To what extent do they depend on the development of other technologies more powerful than those which exist today?\n\n2. Does the distinction between \"paying the alignment tax\" and \"reducing the alignment tax\" make sense to you? Give a concrete example of each case. Are there activities which fall into both of these categories, or are ambiguous between them?\n\n3. Most of the readings so far have been framed in the current paradigm of deep learning. Is this reasonable? To what extent are they undermined by the possibility of future paradigm shifts in AI?\n\n### \n\n### Week 4: Learning from humans\n\nThis week, we look at four techniques for training AIs on human data (all falling under \"learn from teacher\" in [[Christiano's AI alignment landscape]{.ul}](https://ai-alignment.com/ai-alignment-landscape-d3773c37ae38) from last week). From a safety perspective, each of them improves on standard reinforcement learning techniques in some ways, but also has weaknesses which prevent it from solving the whole alignment problem. Next week, we'll look at some ways to make these techniques more powerful and scalable; this week focuses on understanding each of them. Participants who are already familiar with these techniques should read some of the further readings instead.\n\nThe first technique, behavioural cloning, is essentially an extension of supervised learning to settings where an AI must take actions over time - as discussed by Levine (2021). 
The second, reward modelling, allows humans to give feedback on the behaviour of reinforcement learning agents, which is then used to determine the rewards they receive; this is used by Christiano et al. (2017) and Steinnon et al. (2020). The third, inverse reinforcement learning (IRL for short), attempts to identify what goals a human is pursuing based on their behaviour.\n\nA notable variant of IRL is *cooperative* IRL (CIRL for short), introduced by Hadfield-Menell et al. (2016). CIRL focuses on cases where the human and AI interact in a shared environment, and therefore the best strategy for the human is often to help the AI learn what goal the human is pursuing.\n\nFinally, Christiano (2015) argues that learning human values is likely a hard enough problem that the techniques discussed so far won't be sufficient to solve it - a possibility which motivates the techniques discussed in subsequent weeks. While these techniques are flawed, understanding them is nonetheless important as background knowledge.\n\nSlides: [[Week 4 EAI-ARG]{.ul}](https://docs.google.com/presentation/d/1aC8uZxLE1kiUqu0dZYoUL4XTNnmu41aMkg8dWuACL4s/edit?usp=sharing)\n\nCore readings:\n\n1. [[Imitation learning lecture: part 1 (Levine, 2021a)]{.ul}](https://youtu.be/kGc8jOy5_zY) (20 mins)\n\n2. [[Training AI Without Writing A Reward Function, with Reward Modelling]{.ul}](https://www.youtube.com/watch?v=PYylPRX6z4Q)\n\n- **Text alternatives for the video above, for people who don't like videos:**\n\n - [[Deep RL from human preferences blog post (Christiano et al., 2017)]{.ul}](https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/) (10 mins)\n\n - [[Learning to summarise with human feedback blog post (Stiennon et al., 2020)]{.ul}](https://openai.com/blog/learning-to-summarize-with-human-feedback/) (20 mins)\n\n3. Inverse reinforcement learning (IRL)\n\n a. For those who don't already understand IRL:\n\n - [[Inverse reinforcement learning example (Udacity, 2016)]{.ul}](https://www.youtube.com/watch?v=h7uGyBcIeII) (5 mins)\n\n - [[Learning from humans: what is inverse reinforcement learning? (Alexander, 2018)]{.ul}](https://thegradient.pub/learning-from-humans-what-is-inverse-reinforcement-learning/) (25 mins)\n\n b. For those who already understand IRL:\n\n - [[Cooperative inverse reinforcement learning (Hadfield-Menell et al., 2016)]{.ul}](https://arxiv.org/abs/1606.03137) (40 mins) and/or [[Cooperatively Learning Human Values -- The Berkeley Artificial Intelligence Research Blog]{.ul}](https://bair.berkeley.edu/blog/2017/08/17/cooperatively-learning-human-values/) (10 mins)\n\n4. [[The easy goal inference problem is still hard (Christiano, 2015)]{.ul}](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/h9DesGT3WT9u2k7Hr) (10 mins)\n\n5. [[Humans aren't agents - what then for value learning?]{.ul}](https://www.lesswrong.com/posts/DsEuRrsenZ6piGpE6/humans-aren-t-agents-what-then-for-value-learning) (5 mins)\n\nFurther readings:\n\n1. [[Reward-rational (implicit) choice: a unifying formalism for reward learning (Jeon et al., 2020)]{.ul}](https://arxiv.org/abs/2002.04833) (60 mins)\n\n - The task of aiming to identify human preferences from human data is known as *reward learning*. Both reward modelling and inverse reinforcement learning are examples of reward learning, using different types of data. In response to the proliferation of different types of reward learning, Jeon et al. (2020) proposes a unifying framework.\n\n2. 
[[Humans can be assigned any values whatsoever (Armstrong, 2018)]{.ul}](https://www.alignmentforum.org/posts/ANupXf8XfZo2EJxGv/humans-can-be-assigned-any-values-whatsoever) (15 mins)\n\n - A key challenge for reward learning is to account for ways in which humans are less than perfectly rational. Armstrong argues that this will be difficult, because there are many possible combinations of preferences and biases that can lead to any given behaviour, and the simplest is not necessarily the most accurate.\n\n3. [[Learning the preferences of bounded agents (Evans et al., 2015)]{.ul}](https://stuhlmueller.org/papers/preferences-nipsworkshop2015.pdf) (25 mins)\n\n - Evans et al. discuss a few biases that humans display, and ways to account for them when learning values.\n\n4. [[An EPIC way to evaluate reward functions (Gleave et al., 2021)]{.ul}](https://deepmindsafetyresearch.medium.com/an-epic-way-to-evaluate-reward-functions-c2c6d41b61cc) (15 mins) (see also [[a recorded presentation]{.ul}](https://slideslive.com/38953511/quantifying-differences-in-reward-functions))\n\n - Gleave et al. provide a way to evaluate the quality of learned reward functions.\n\n5. [[Learning human objectives by evaluating hypothetical behaviours (Reddy et al., 2020)]{.ul}](https://deepmind.com/blog/article/learning-human-objectives-by-evaluating-hypothetical-behaviours) (10 mins)\n\n - Reddy et al. present a technique that allows agents to learn about unsafe actions without actually taking them, by having humans evaluate hypothetical behaviors.\n\n6. [[Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations (Brown et al., 2019)]{.ul}](http://proceedings.mlr.press/v100/brown20a/brown20a.pdf) (40 mins)\n\n - Brown et al. provide a clever technique to allow imitation learning agents to surpass the performance of human demonstrators.\n\n7. [[Summary of assistance games (Flint, 2020)]{.ul}](https://www.alignmentforum.org/posts/qPoaA5ZSedivA4xJa/our-take-on-chai-s-research-agenda-in-under-1500-words) (15 mins)\n\n - CIRL is just one example of a broader framework called *assistance* primarily researched by Berkeley's Center for Human-Compatible AI (CHAI for short), which features ongoing interactions between humans and AIs - as explained by Flint.\n\n8. [[A general language assistant as a laboratory for alignment (Askell et al., 2021)]{.ul}](https://arxiv.org/abs/2112.00861) (sections 1 and 2) (40 mins)\n\n - Askell et al. focus on another way of learning from humans: having humans design prompts which encourage aligned behaviour, and then fine-tuning on those prompts (via a method they call context distillation).\n\n9. [[The MineRL BASALT Competition on Learning from Human Feedback (Shah et al. 2021)]{.ul}](https://arxiv.org/abs/2107.01969) (only section 1) (25 mins)\n\n - Shah et al. present a competition to train agents using human feedback to perform complex tasks in a Minecraft environment.\n\n10. [[Stop Button Solution? - Computerphile]{.ul}](https://www.youtube.com/watch?v=9nktr1MgS-A)\n\n11. [[Scalable agent alignment via reward modeling: a research direction]{.ul}](https://arxiv.org/pdf/1811.07871.pdf)\n\n - Provides a good lay of the land for reward modeling approaches circa 2018 and discusses the limitations of alternative value alignment approaches (see Section 7).\n\nExercises:\n\n1. Imagine using reward modelling, as described in the second reading from this week, to train an AI to perform a complex task like building a castle in Minecraft. 
What sort of problems would you encounter?\n\n2. Stiennon et al. (2020) note that \"optimizing our reward model eventually leads to sample quality degradation\". Explain why the curves in the corresponding graph are shaped the way they are. How could we prevent performance from decreasing so much?\n\nNotes:\n\n1. Learning how to perform well on a task based on examples of human performance on that task is known as *imitation learning*. Behavioural cloning is the simplest type of imitation learning. Using IRL to learn a human reward function, then training an agent on it, is a more complex type of imitation learning.\n\n2. The techniques discussed this week showcase a tradeoff between power and alignment: behavioural cloning provides the fewest incentives for misbehaviour, but is also hardest to use to go beyond human-level ability. Whereas reward modelling can reward agents for unexpected behaviour that leads to good outcomes (as long as humans can recognise them) - but this also means that those agents might find and be rewarded for manipulative or deceptive actions. Christiano et al. (2017) provide an example of an agent learning to deceive the human evaluator; and Stiennon et al. (2020) provide an example of an agent learning to \"deceive\" its reward model. Lastly, while IRL could in theory be used even for tasks that humans can't evaluate, it relies most heavily on assumptions about human rationality in order to align agents.\n\nDiscussion prompts:\n\n1. What are the key similarities and differences between behavioural cloning, reward modelling, and inverse reinforcement learning?\n\n2. What types of human preferences can these techniques most easily learn? What types would be hardest to learn?\n\n3. How might using reward modelling lead to misaligned AGIs? What are some of the fundamental limitations of reward modelling and inverse reinforcement learning that prevent them from scaling to AGI?\n\n### \n\n### Week 5: Factored Cognition for Outer Alignment\n\nOne category of research direction in technical AGI safety involves training AIs to do complex tasks by decomposing those tasks into simpler ones where humans can more easily understand and evaluate AI behavior. This week we'll cover three closely-related algorithms (all falling under \"build a better teacher\" in [[Christiano's AI alignment landscape]{.ul}](https://forum.effectivealtruism.org/posts/63stBTw3WAW6k45dY/paul-christiano-current-work-in-ai-alignment)). The extent to which these approaches scale to AGI is an open question in alignment.\n\nWu et al. (2021) applies reward modelling recursively in order to solve more difficult tasks. Recursive reward modelling can be considered a special case of a more general class of techniques called *iterated amplification* (also known as *iterated distillation and amplification/IDA*), which is described in Ought (2019). A more technical description of iterated amplification is given by Christiano et al. (2018), along with some small-scale experiments. One useful way of thinking about IDA-like techniques is that, in the limit, it aims to instantiate HCH, a theoretical structure described by Christiano (2016). Whether HCH is aligned is a topic of dispute between different alignment researchers.\n\nThe third technique we'll discuss this week is *Debate*, as proposed by Irving and Amodei (2018). Unlike the other two techniques, Debate focuses on evaluating claims made by language models, rather than supervising AI behaviour over time. 
You'll spend some time during this week's session trying out [[a toy implementation of Debate]{.ul}](https://debate-game.openai.com/) (as explained in the curriculum notes).\n\nSlides: [[Week 5 EAI-ARG]{.ul}](https://docs.google.com/presentation/d/1rfYQUn2uEgc4cn6DZyEt69oHi9WK1KB8ZLqVwAXJ014/edit?usp=sharing)\n\nCore readings:\n\n1. [[A guide to Iterated Amplification & Debate]{.ul}](https://www.lesswrong.com/posts/vhfATmAoJcN8RqGg6/a-guide-to-iterated-amplification-and-debate) (30 mins)\n\n2. Factored cognition (Ought, 2019) ([[introduction]{.ul}](https://ought.org/research/factored-cognition) and [[scalability section]{.ul}](https://ought.org/research/factored-cognition/scalability)) (20 mins)\n\n3. [[Supervising strong learners by amplifying weak experts (Christiano et al., 2018)]{.ul}](https://arxiv.org/abs/1810.08575) (35 mins)\n\n4. [[AI safety via debate blog post (Irving and Amodei, 2018)]{.ul}](https://openai.com/blog/debate/) (15 mins)\n\n5. [[Humans consulting HCH (Christiano, 2016)]{.ul}](https://ai-alignment.com/humans-consulting-hch-f893f6051455) (5 mins)\n\n```{=html}\n\n```\n6. [[Scalable agent alignment via reward modeling \\| by DeepMind Safety Research \\| Medium]{.ul}](https://deepmindsafetyresearch.medium.com/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84)\n\nFurther readings:\n\n7. [[Iterated Distillation and Amplification (Cotra, 2018)]{.ul}](https://ai-alignment.com/iterated-distillation-and-amplification-157debfd1616) (20 mins)\n\n - Another way of understanding Iterated Amplification is by analogy to AlphaGo: as Cotra discusses, AlphaGo's tree search is an amplification step which is then distilled into its policy network.\n\n8. [[An overview of 11 proposals for building safe advanced AI (Hubinger, 2020)]{.ul}](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai) (introduction then proposals 3, 8, and 9. Proposals 2 and 7 may be useful as background for these.) (25 mins)\n\n - Iterated amplification doesn't specify *how* a human amplified by a model M should supervise the training of the next version of M. In theory we could use any of the techniques discussed last week - for example behavioural cloning, or reward modelling. Hubinger discusses several different options.\n\n9. [[Progress on AI safety via debate (Barnes et al., 2020)]{.ul}](https://www.alignmentforum.org/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1) (60 mins)\n\n - Hubinger's (2020) discussion of Debate mentions the cross-examination variant of Debate, intended to make strategic ambiguity more difficult. Barnes et al. give more details on this, and some interesting human debate experiments.\n\n10. [[Debate update: obfuscated arguments problem (Barnes and Christiano, 2020)]{.ul}](https://www.alignmentforum.org/posts/PJLABqQ962hZEqhdB/debate-update-obfuscated-arguments-problem) (15 mins)\n\n - Barnes and Christiano discuss one of the key difficulties with applying Debate in practice.\n\n11. [[Scalable agent alignment via reward modelling (Leike et al., 2018)]{.ul}](https://arxiv.org/abs/1811.07871) (80 mins)\n\n - Leike et al. provide a research agenda for how reward modelling might scale up to solving the alignment problem, which describes recursive reward modelling as well as various challenges which will need to be solved.\n\n12. 
[[Challenges to Christiano's capability amplification proposal - LessWrong]{.ul}](https://www.lesswrong.com/posts/S7csET9CgBtpi7sCh/challenges-to-christiano-s-capability-amplification-proposal) (35 min)\n\n13. [[How to Keep Improving When You're Better Than Any Teacher - Iterated Distillation and Amplification]{.ul}](https://www.youtube.com/watch?v=v9M2Ho9I9Qo)\n\n14. [[Recursively summarising books with human feedback (Wu et al., 2021)]{.ul}](https://arxiv.org/abs/2109.10862) **(ending after section 4.1.2: Findings)** (35 mins)\n\nExercises:\n\n1. Wu et al. (2021) use a combination of behavioural cloning and reinforcement learning to train a summarisation model; this combination was also used to train AlphaGo and AlphaStar. Explain why that's better than using either by itself.\n\n2. A complex task like running a factory can be broken down into subtasks in a fairly straightforward way, allowing a large team of workers to perform much better than even an exceptionally talented individual. Describe a task where teams have much less of an advantage over the best individuals. Why doesn't your task benefit as much from being broken down into subtasks? How might we change that?\n\nNotes:\n\n1. During this week's discussion session, try playing [[OpenAI's implementation of the Debate game]{.ul}](https://debate-game.openai.com/). The instructions on the linked page are fairly straightforward, and each game should be fairly quick. Note in particular the example GIF on the webpage, and the instructions that \"the debaters should take turns, restrict themselves to short statements, and not talk too fast (otherwise, the honest player wins too easily).\"\n\n2. What makes AI Debate different from debates between humans? One crucial point is that in debates between humans, we prioritise the most important or impactful claims made - whereas *any* incorrect statement from an AI debater loses them the debate. This is a demanding standard (aimed at making debates between superhuman debaters easier to judge).\n\nDiscussion prompts:\n\n1. To what extent does the honest debater have an advantage in Debate? How might we modify the rules to give the honest debater a bigger advantage?\n\n a. One approach which has been suggested is called \"cross-examination\", which involves saving copies of each of the AI debaters throughout the debate, so that it's possible to go back later on and ask them to clarify what they meant (of course this can't be done with humans, since we can't just copy ourselves).\n\n2. Recursive reward modelling is one type of iterated amplification. Another is \"imitative amplification\", where we use behavioural cloning rather than reward modelling at each step. How should we expect them to differ?\n\n3. What might Debate look like when applied to complex technical questions?\n\n4. Debate is limited to question-answering, and can't train agents to take actions. How important do you expect this limitation to be?\n\n### \n\n### Week 6: Interpretability & ELK\n\nOur current methods of training capable neural networks give us very little insight into how or why they function. In this week we cover two broad research directions which aim to change this by developing a more scientific understanding of machine cognition, with the long-term goal of solving (or dissolving) the inner alignment problem.\n\nThe first is work on mechanistic interpretability, most notably pursued by Olah et al. (2020), which studies neural circuits to identify the functions they carry out. 
As background, first read Olah et al. (2017) for a discussion of how the techniques of feature visualisation work. Olah's overall perspective on how this research contributes to AGI safety is summarised by Hubinger et al. (2019).\n\nFinally, ELK is at the forefront of Paul Christiano's research agenda and is an important unifying framework behind many of the concepts so far.\n\n[[https://twitter.com/nabla_theta/status/1502106478890479619]{.ul}](https://twitter.com/nabla_theta/status/1502106478890479619)\n\nSlides: [[Week 6 EAI-ARG]{.ul}](https://docs.google.com/presentation/d/1PgXyJ_xfOHUQwo2jcNfmM2LEoCthopRQ7mf_PkePOR0/edit?usp=sharing)\n\nCore readings:\n\n1. [[Feature visualisation (Olah et al, 2017)]{.ul}](https://distill.pub/2017/feature-visualization/) (20 mins)\n\n2. [[Zoom In: an introduction to circuits (Olah et al., 2020)]{.ul}](https://distill.pub/2020/circuits/zoom-in/) (35 mins)\n\n3. [[Chris Olah's views on AGI safety (Hubinger, 2019)]{.ul}](https://www.alignmentforum.org/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety) (20 mins)\n\n4. [[A Mathematical Framework for Transformer Circuits]{.ul}](https://transformer-circuits.pub/2021/framework/index.html) (65 mins)\n\n5. [[Eliciting latent knowledge (Christiano et al., 2021)]{.ul}](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit#) **(up to the end of the Ontology Identification section on page 38, plus the [[Mechanistic Interpretability]{.ul}](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit#heading=h.w3tt0jawudyv) appendix section)** (60 mins)\n\nFurther readings:\n\n1. [[Thread: Circuits (Cammarata et al., 2020)]{.ul}](https://distill.pub/2020/circuits/)\n\n - A series of short articles building on Zoom In, exploring different circuits in the InceptionV1 vision network.\n\n2. [[A mathematical framework for transformer circuits (Elhage et al., 2021)]{.ul}](https://transformer-circuits.pub/2021/framework/index.html) (90 mins)\n\n - Elhage et al. build on previous circuits work to analyse transformers, the neural network architecture used by most of today's cutting-edge models. For a deeper dive into the topic, see the [[associated videos]{.ul}](https://transformer-circuits.pub/2021/videos/index.html).\n\n3. [[Rewriting a deep generative model (Bau et al., 2020)]{.ul}](https://rewriting.csail.mit.edu/) (20 mins)\n\n - Bau et al. find a way to change individual associations within a neural network, which allows them to replace specific components of an image. For work along similar lines in language models, [[see here]{.ul}](https://openreview.net/forum?id=mMECu_poAs).\n\n4. [[Value loading in the human brain: a worked example (Byrnes, 2021)]{.ul}](https://www.alignmentforum.org/posts/iMM6dvHzco6jBMFMX/value-loading-in-the-human-brain-a-worked-example) (25 mins)\n\n - In addition to interpretability research on neural networks, another approach to developing more interpretable AI involves studying human and animal brains. Byrnes gives an example of applying ideas from neuroscience to better understand AI.\n\n5. [[Interpretability beyond feature attribution: quantitative testing with concept attribution vectors (Kim et al., 2018)]{.ul}](https://arxiv.org/abs/1711.11279) (35 mins)\n\n - Kim et al. introduce a technique for interpreting a neural net's internal state in terms of human concepts.\n\n6. [[Clusterability in neural networks (Filan et al., 2021)]{.ul}](https://arxiv.org/abs/2103.03386) (25 mins)\n\n - Filan et al. 
present a technique for identifying modular structure within neural networks, and demonstrate that this type of structure arises during training.\n\n7. [[A Longlist of Theories of Impact for Interpretability - LessWrong (Neel Nanda. 2022)]{.ul}](https://www.lesswrong.com/posts/uK6sQCNMw8WKzJeCQ/a-longlist-of-theories-of-impact-for-interpretability)\n\nExercises:\n\n- Interpretability work on artificial neural networks is closely related to interpretability work on biological neural networks (aka brains). Describe two ways in which the former is easier than the latter, and two ways in which it's harder.\n\n- Think about why a complexity penalty on the reporter in ELK to disincentivize human simulation would not necessarily work in the worst case. Check your answer with the [[complexity penalty section]{.ul}](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit#heading=h.lltpmkloasiz) in the report (no peeking! Coming up with the answer on your own is important).\n\nNotes:\n\n1. Hubinger (2019) discusses the possibility of giving feedback on the process by which an agent chooses actions - in other words, supervising not just its behaviour but also its thoughts. Some researchers expect that this type of feedback will be crucial in making those scalable oversight techniques work for very advanced agents - but we currently have little idea how to do so at scale.\n\nDiscussion prompts:\n\n- The second core reading discusses how Chris Olah wants machine learning to be \"a field which focuses on deliberate design where understanding models is prioritized and the way that people make progress is through deeply understanding their systems\". How plausible/feasible is this? What might it look like to deeply understand the development of neural networks, apart from the sort of interpretability work discussed this week?\n\n- Were you surprised by the results and claims in Zoom In? Do you believe the Circuits hypothesis? If true, what are its most important implications?\n\n- In what ways are ELK and interpretability connected? If ELK is difficult in practice, what does this mean for interpretability?\n\n### \n\n### Week 7: Agent Foundations and Embedded Agency\n\nThe agent foundations research agenda pursued by the Machine Intelligence Research Institute (MIRI) aims to create rigorous mathematical frameworks to describe how AIs should reason about their real-world environments. Wentworth (2022) provides some motivation for the general class of approaches, Soares (2015) gives a high-level explanation of their approach, and Demski and Garrabrant (2018) identify a range of open problems and links between them.\n\nSlides: [[Week 7 EAI-ARG]{.ul}](https://docs.google.com/presentation/d/1Jt3Ngiil8lPRuLdyBSjMnqXg_pw2aifaNyKIC2IPcw8/edit?usp=sharing)\n\nCore readings:\n\n1. [[Why Agent Foundations? An Overly Abstract Explanation (Wentworth, 2022)]{.ul}](https://www.lesswrong.com/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation) (15 mins)\n\n2. [[Embedded agency (Demski and Garrabrant, 2018)]{.ul}](https://intelligence.org/2018/10/29/embedded-agents/) (60 mins; **read section 1 carefully and skim the rest**)\n\n - The important takeaway from this is to get a feel for the kinds of issues that come up with the full alignment problem. 
These are issues that are typically neglected in more prosaic work, but having an understanding of the things that make the full alignment problem hard is useful background knowledge.\n\n - I strongly recommend reading this in its entirety if you have the time (several hours), though be warned, there's a lot of meat in there.\n\nFurther Readings (warning: some of these posts are pretty intense):\n\n8. [[The rocket alignment problem (Yudkowsky, 2018)]{.ul}](https://intelligence.org/2018/10/03/rocket-alignment/) (35 mins)\n\n - The underlying intuition driving MIRI's approach is that the alignment problem is very difficult, and therefore rigorous mathematical frameworks will be needed in order to solve it. Yudkowsky uses an extended analogy between building AGI and building rockets to convey this intuition.\n\n9. [[Logical induction blog post (Garrabrant et al., 2016)]{.ul}](https://intelligence.org/2016/09/12/new-paper-logical-induction/) (15 mins)\n\n - This is the biggest result in agent foundations research thus far; it provides an algorithm for assigning probabilities to logical statements (like mathematical claims).\n\n10. [[Parametric Bounded Löb's Theorem and Robust Cooperation of Bounded Agents (Critch, 2016)]{.ul}](https://arxiv.org/abs/1602.04184) (40 mins)\n\n - Critch finds an algorithm by which agents which can observe each other's source code can reliably cooperate.\n\n11. [[An introduction to the infra-Bayesianism sequence (Kosoy and Diffractor, 2020)]{.ul}](https://www.alignmentforum.org/posts/zB4f7QqKhBHa5b37a/introduction-to-the-infra-bayesianism-sequence) (30 mins)\n\n - Kosoy and Diffractor present a formalism for bayesian inference with incomplete models. This post is extremely intense\n\n12. [[Progress on causal influence diagrams (Everitt et al., 2021)]{.ul}](https://www.alignmentforum.org/posts/Cd7Hw492RqooYgQAS/progress-on-causal-influence-diagrams) (15 mins)\n\n - Everitt et al. formally describe the incentives of reinforcement learning agents using causal influence diagrams.\n\n13. [[Intuitive Introduction to Functional Decision Theory]{.ul}](https://www.lesswrong.com/s/q2WQMoQSexx7xqqZR) (30 mins)\n\n14. [[MIRI's approach (Soares, 2015)]{.ul}](https://intelligence.org/2015/07/27/miris-approach/) (25 mins)\n\nExercises:\n\n1. Find another example of real world True Names in mathematics, science, etc other than the mutual information example mentioned in the Wentworth post. Compare it to potentially goodhartable versions of the same concept.\n\nDiscussion prompts:\n\n1. What are the best examples throughout history of scientists discovering mathematical formalisms that allowed them to deeply understand a phenomenon that they were previously confused about? (The easiest examples are from physics; how about others from outside physics?) To what extent should these make us optimistic about agent foundations research developing a mathematical understanding of intelligence?\n\n### \n\n### Week 8: Wrap Up & Bigger Picture\n\n6. [[Existential Risk: Analyzing Human Extinction Scenarios and Related Events (X-Risks) (Bostrom, 2002]{.ul}](https://www.nickbostrom.com/existential/risks.pdf)) (**up to and including section 3)** (20 min)\n\n a. Understanding the magnitude of x-risks is important to understanding why alignment is so important.\n\n7. [[Risks of Astronomical Suffering (S-Risks)]{.ul}](https://www.lesswrong.com/tag/risks-of-astronomical-suffering-s-risks) (5 min)\n\n a. S-risks are the worst case scenario of misalignment - even worse than x-risks.\n\n8. 
[[\"There's No Rule That Says We'll Make It\" (Miles, 2022)]{.ul}](https://www.youtube.com/watch?v=JD_iA7imAPs) (10 mins)\n\n a. Lots of people understand on an intellectual level that things aren't guaranteed to work out, but don't really *internalize* it. We could fail at alignment.\n\n9. [[\"Most important century\" series summary (Karnofsky, 2021b)]{.ul}](https://www.cold-takes.com/most-important-century/#Summary) (15 mins)\n\n10. [[\"Critch on career advice for junior AI-x-risk-concerned researchers\"]{.ul}](https://www.lesswrong.com/posts/7uJnA3XDpTgemRH2c/critch-on-career-advice-for-junior-ai-x-risk-concerned) (5 mins)\n\n a. A short caution against \"just standing nearby\" AI capabilities research as a common failure case for junior x-risk-concerned researchers.\n\n```{=html}\n\n```\n1. [[\"It Looks Like You're Trying To Take Over The World\" (gwern, 2022)]{.ul}](https://www.gwern.net/Clippy) (20 mins)\n\n - An entertaining short story that outlines one potential way misalignment and takeoff could happen with more prosaic methods.\n\n### \n\n### ~~Week **TBD**: AI governance, and careers in alignment research~~\n\n~~The last week of curriculum content is split between looking into the field of AI governance, and thinking about next steps for pursuing careers in alignment research. For the latter, see Ngo (forthcoming). For the former, start with Dafoe (2020), which gives a thorough overview of AI governance and ways in which it might be important, particularly focusing on the framing of AI governance as field-building. An alternative framing - of AI governance as an attempt to prevent cooperation failures - is explored by Clifton (2019). Finally, Khan (2021), Macro Polo (2020), and Shevlane (2022) give brief introductions to three key factors affecting the AI strategic landscape.~~\n\n~~In the taxonomy of AI governance given by Clarke (2022) in the optional readings (diagram below) this week's governance readings focus on strategy research, tactics research and field-building, not on developing, advocating or implementing specific policies. Those interested in exploring AI governance in more detail, including looking into individual policies, should look at [[the curriculum for the parallel AI governance track of this course]{.ul}](https://docs.google.com/document/d/1F4lq6yB9SCINuo190MeTSHXGfF5PnPk693JToszRttY/edit?usp=sharing).~~\n\n{width=\"6.5in\" height=\"1.1805555555555556in\"}\n\n~~Core readings:~~\n\n1. ~~Placeholder: reading on careers in alignment (to be added later) (Ngo, forthcoming)~~\n\n2. ~~[[AI Governance: Opportunity and Theory of Impact (Dafoe, 2020)]{.ul}](https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact) (25 mins)~~\n\n3. ~~[[Cooperation, conflict and transformative AI: sections 1 & 2 (Clifton, 2019)]{.ul}](https://www.alignmentforum.org/s/p947tK8CoBbdpPtyK/p/KMocAf9jnAKc2jXri) (25 mins)~~\n\n4. ~~[[The semiconductor supply chain (Khan, 2021)]{.ul}](https://cset.georgetown.edu/publication/the-semiconductor-supply-chain/) **(up to page 15)** (15 mins)~~\n\n5. ~~[[The global AI talent tracker (Macro Polo, 2020)]{.ul}](https://macropolo.org/digital-projects/the-global-ai-talent-tracker/) (5 mins)~~\n\n6. ~~[[Sharing powerful AI models (Shevlane, 2022)]{.ul}](https://www.governance.ai/post/sharing-powerful-ai-models) (10 mins)~~\n\n~~Further readings:~~\n\n*~~On strategic AI governance considerations:~~*\n\n1. 
~~[[Deciphering China's AI dream (Ding, 2018)]{.ul}](https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf) (95 mins) ([[see also his podcast on this topic]{.ul}](https://80000hours.org/podcast/episodes/jeffrey-ding-china-ai-dream/))~~\n\n - ~~Ding gives an overview of Chinese AI policy, one of the key factors affecting the landscape of possible approaches to AI governance.~~\n\n2. ~~[[Leo Szilard and the danger of nuclear weapons: a case study in risk mitigation (Grace, 2015)]{.ul}](https://intelligence.org/files/SzilardNuclearWeapons.pdf) (60 mins)~~\n\n - ~~Grace (2015) discusses the analogies between attempts to prevent misuse of nuclear weapons, and attempts to ensure good outcomes from AGI.~~\n\n3. ~~[[AI, the space race, and prestige (Barnhart, 2021)]{.ul}](https://docs.google.com/document/d/1T4FIbV32pHow72pd_aOrGPJuV8YZ8PwP5Mhwjt6ppy4/edit#) **(all except section II: Exacerbating Conditions)** (40 mins)~~\n\n - ~~Barnhart (2021) provides a case study of the space race, identifying aspects which are analogous to our current situation.~~\n\n4. ~~[[The vulnerable world hypothesis (Bostrom, 2019)]{.ul}](https://www.nickbostrom.com/papers/vulnerable.pdf) (ending at the start of the section on 'Preventive policing') (60 mins)~~\n\n - ~~Bostrom provides a background framing for thinking about technological risks: the process of randomly sampling new technologies, some of which might prove catastrophic.~~\n\n5. ~~[[Sharing the world with digital minds (Shulman and Bostrom, 2020)]{.ul}](https://nickbostrom.com/papers/digital-minds.pdf) (50 mins)~~\n\n - ~~Shulman and Bostrom explore the moral status of digital minds, and the possibility that their experiences may be of great moral importance.~~\n\n6. ~~[[AI Governance: a research agenda (Dafoe, 2018)]{.ul}](https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-Agenda.pdf) (120 mins)~~\n\n - ~~Dafoe outlines an overarching research agenda linking many areas of AI governance.~~\n\n*~~On useful approaches to AI governance:~~*\n\n7. ~~[[Cooperative AI: machines must learn to find common ground (Dafoe et al., 2021)]{.ul}](https://www.nature.com/articles/d41586-021-01170-0) (15 mins)~~\n\n - ~~This and the next reading describe two research directions which mix technical work and governance work: cooperative AI, and truthful AI.~~\n\n8. ~~[[Truthful AI: Developing and governing AI that does not lie (Evans et al., 2021)]{.ul}](https://www.alignmentforum.org/posts/aBixCPqSnTsPsTJBQ/truthful-ai-developing-and-governing-ai-that-does-not-lie) (20 mins)~~\n\n - ~~See above.~~\n\n9. ~~[[Some AI governance research ideas (Anderljung and Carlier, 2021)]{.ul}](https://docs.google.com/document/d/13LJhP3ksrcEBKxYFG5GkJaC2UoxHKUYAHCRdRlpePEc/edit) (60 mins)~~\n\n - ~~This and the next two readings provide lists of research directions which have either been promising so far, or which may be useful to look into in the future.~~\n\n10. ~~[[Our AI governance grantmaking so far (Muehlhauser, 2020)]{.ul}](https://www.openphilanthropy.org/blog/ai-governance-grantmaking) (15 mins)~~\n\n - ~~See above.~~\n\n11. ~~[[The longtermist AI governance landscape: a basic overview (Clarke, 2022)]{.ul}](https://forum.effectivealtruism.org/posts/ydpo7LcJWhrr2GJrx/the-longtermist-ai-governance-landscape-a-basic-overview) (15 mins)~~\n\n - ~~See above.~~\n\n~~Exercises:~~\n\n1. ~~Explain the importance of the ability to make credible commitments for Clifton's (2019) game-theoretic analysis of cooperation failures.~~\n\n2. 
~~In what ways has humanity's response to other threats apart from AI (e.g. nuclear weapons, pandemics) been better than we would have expected beforehand? In what ways has it been worse? What can we learn from this?~~\n\n~~Notes:~~\n\n1. ~~\"Accident\" risks, as discussed in Dafoe (2020), include the standard risks due to misalignment which we've been discussing for most of the course. I don't usually use the term, because \"deliberate\" misbehaviour from AIs is quite different from standard accidents.~~\n\n~~Discussion prompts:~~\n\n1. ~~How worried are you about misuse vs structural vs accident risk?~~\n\n2. ~~Do you expect AGI to be developed by a government or corporation (or something else)? What are the key ways that this difference would affect AI governance?~~\n\n3. ~~What are the main ways in which technical work could make AI governance easier or harder?~~\n\n4. ~~What are the biggest ways you expect AI to impact the world in the next 10 years? How will these affect policy responses aimed at the decade after that?~~\n\n5. ~~How likely do you think it is that we build and deploy many AIs that have net negative conscious experiences (as discussed in Shulman and Bostrom (2020))?~~\n\n6. ~~It seems important for regulators and policy-makers to have a good technical understanding of AI and its implications. In what cases should people with technical AI backgrounds consider entering these fields?~~\n\n### \n\n### Week **TBD** (four weeks later): Projects\n\n#### Tentative Ideas\n\nFrom Leo on EAI discord: \"at the end of the 8 weeks we let everyone pick a direction they're interested in learning more about, give them one or two months to just read up on as much as possible about what they're interested in. this could also double as flex time for people who fall behind on the reading or join late to catch up. probably multiple people per direction for redundancy because even at that point there's going to be some flaking, and so all the people working on the same thing will work together. of course throughout this thing we can have mentorship and stuff to make sure nobody gets stuck. maybe as an intermediate product we have blog posts that we then synthesize into a paper\"\n\n#### Projects overview\n\nThe final part of the AGI safety fundamentals course will be projects where you get to dig into something related to the course. The project is a chance for you to explore your interests, so try to find something you're excited about! The goal of this project is to help you practice taking an intellectually productive stance towards AGI safety - to go beyond just reading and discussing existing ideas, and take a tangible step towards contributing to the field yourself. This is particularly valuable because it's such a new field, with lots of room to explore.\n\n#### Timings\n\nWe've allocated four weeks between the last week of the curriculum content and the sessions where people present their projects. As a rough guide, spending 10-15 hours on the project during that time seems reasonable, but we're happy for participants to be flexible about spending more or less time on it. You may find it useful to write up a rough project proposal in the first week of working on your project and to send it to your cohort for feedback.\n\n#### Format\n\nThe format of the project is very flexible. 
The default project will probably be a piece of writing, roughly the length and scope of a typical blog post; we'd encourage participants to put these online after finishing them (although this is entirely optional). We'd also be happy for people to spend time gaining familiarity with machine learning (e.g. via the practical projects discussed below). Projects in the form of presentations are also possible, but we slightly discourage them; we'd prefer if you spend more time creating some piece of writing or code, then just casually talk through it with your cohort, rather than spending time on trying to make a polished presentation. We expect most projects to be individual ones; but feel free to do a collaborative project if you'd like to.\n\n#### Ideas\n\nSome project ideas (all just suggestions; feel free to design your own):\n\n- Pick a reading you found difficult but insightful. Distill down the key ideas and write up an accessible summary of the arguments, their weaknesses and your overall view. (Here are some examples of this being done well for [[Iterated Amplification]{.ul}](https://www.lesswrong.com/posts/PT8vSxsusqWuN7JXp/my-understanding-of-paul-christiano-s-iterated-amplification) and [[Inner Alignment]{.ul}](https://www.lesswrong.com/s/BRrcuwiF2SYWTQfqA/p/AHhCrJ2KpTjsCSwbt), although we don't expect projects to be as comprehensive as these.)\n\n- Pick a reading from the curriculum that you disagreed with, and critique it.\n\n- Pick one of the weeks of the curriculum; summarise and evaluate the key claims made in the core readings for that week and your key takeaways.\n\n- Make a set of forecasts about the future of AI, in a way that's concrete enough that you will be able to judge whether you were right or not, and predict what would significantly change your mind.\n\n- Practical projects:\n\n - (For those without much ML experience): Train a neural network on some standard datasets. For help, see the [[fast.ai course]{.ul}](https://course.fast.ai/) or the [[PyTorch tutorials]{.ul}](https://pytorch.org/tutorials/).\n\n - (For those with ML experience but without much RL experience): Train a deep reinforcement learning agent on a standard environment. For help, see [[Spinning Up in Deep RL]{.ul}](https://spinningup.openai.com/en/latest/).\n\n - Download a large language model (e.g. [[GPT-2]{.ul}](https://lambdalabs.com/blog/run-openais-new-gpt-2-text-generator-code-with-your-gpu/) or [[GPT-J-6B]{.ul}](https://github.com/kingoflolz/mesh-transformer-jax/)) or image-generating model (e.g. [[VQGAN+CLIP]{.ul}](https://docs.google.com/document/d/1Lu7XPRKlNhBQjcKr8k8qRzUzbBW7kzxb5Vu72GMRn2E/edit)), and try to find alignment failures - cases where the model is *capable* of doing what you intend, but doesn't. As one example (discussed in section 7.2 of [[this paper]{.ul}](https://arxiv.org/abs/2107.03374)), when the user gives it a prompt containing a subtle bug, the Codex language model may \"deliberately\" introduce further bugs into the code it writes, in order to match the style of the user prompt.\n\n - (For those with extensive ML and RL experience, looking for a longer project): Replicate the [[TREX paper]{.ul}](https://arxiv.org/abs/1904.06387) (easier) or the [[Deep Reinforcement Learning from Human Preferences paper]{.ul}](https://arxiv.org/pdf/1706.03741.pdf) (harder) in a simpler environment (e.g. [[cartpole]{.ul}](http://gym.openai.com/envs/CartPole-v1/)). 
See if you can train the agent to do something in that environment which you can't write an explicit reward function for.\n\n- If you're considering a career in machine learning, put together a career plan for what that might look like, with a particular focus on the most important skills for you to acquire.\n\n- Pick a key underlying belief which would impact your AGI safety research interests, or whether to research AGI safety at all. Review the literature around this question, and write up a post giving *your* overall views on it, and the strongest arguments for and against.\n\n - E.g. 'AGI is likely within the next 50 years' or 'Iterated Amplification is likely to produce competitive and aligned AGI'.\n\n### \n\n### Further resources\n\nML courses (free online):\n\n- [[Fast.ai]{.ul}](https://www.fast.ai/) courses\n\n- [[Stanford computer vision course (2017)]{.ul}](http://cs231n.stanford.edu/2017/syllabus.html)\n\n- [[NYU deep learning course]{.ul}](https://atcold.github.io/pytorch-Deep-Learning/)\n\n- [[Spinning up in deep RL]{.ul}](https://spinningup.openai.com/en/latest/index.html)\n\nML textbooks (all free online except the first):\n\n- [[Grokking deep learning (Trask)]{.ul}](https://www.manning.com/books/grokking-deep-learning)\n\n- [[Neural networks and deep learning (Nielsen)]{.ul}](http://neuralnetworksanddeeplearning.com/)\n\n- [[Deep Learning (Goodfellow, Bengio and Courville)]{.ul}](https://www.deeplearningbook.org/)\n\n- [[Reinforcement learning: an introduction (Sutton and Barto, 2nd edition)]{.ul}](http://incompleteideas.net/book/the-book-2nd.html)\n\n- [[Mathematics for machine learning (Deisenroth, Faisal and Ong)]{.ul}](https://mml-book.github.io/)\n\n- [[Notes on contemporary machine learning for physicists (Kaplan)]{.ul}](https://sites.krieger.jhu.edu/jared-kaplan/files/2019/04/ContemporaryMLforPhysicists.pdf)\n\nAI safety resources:\n\n- [[Annotated Bibliography of Recommended Materials]{.ul}](https://humancompatible.ai/bibliography) - UC Berkeley Center for Human-Compatible AI (CHAI)\n\n- [[Alignment forum curated sequences]{.ul}](https://www.alignmentforum.org/library)\n\n- [[Alignment newsletter]{.ul}](https://rohinshah.com/alignment-newsletter/) - Rohin Shah\n\n- [[AI safety videos]{.ul}](https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg) - Rob Miles\n\n- [[Lots of Links]{.ul}](https://www.aisafetysupport.org/resources/lots-of-links) - AI Safety Support\n\n- [[Longer list of further resources]{.ul}](https://www.eacambridge.org/agi-further-resources) (including financial support, career resources, etc.)\n\nOther:\n\n- [[Richard Hamming, \"You and Your Research\" (June 6, 1995)]{.ul}](https://www.youtube.com/watch?v=a1zDuOPkMSw)\n\n- [["How To Get Into Independent Research On Alignment/Agency" (Wentworth, 2021)]{.ul}](https://www.lesswrong.com/posts/P3Yt66Wh5g7SbkKuT/how-to-get-into-independent-research-on-alignment-agency) (20 mins)
+{"source": "markdown.ebooks", "source_type": "markdown", "title": "AI Foom Debate", "authors": "Robin Hanson, Eliezer Yudkowsky", "date_published": "2013-01-01", "text": "# The Hanson-Yudkowsky AI-Foom Debate {style=\"text-align:center\"}\n\n## Robin Hanson and Eliezer Yudkowsky {.sigil_not_in_toc style=\"text-align:center\"}\n\n> ::: {.center}\n> Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University.\n>\n> Eliezer Yudkowsky is a Research Fellow at the Machine Intelligence Research Institute and is the foremost researcher on Friendly AI and recursive self-improvement.\n>\n> \n>\n> Published in 2013 by the> Machine Intelligence Research Institute,> Berkeley 94704> United States of America> [intelligence.org](http://intelligence.org)\n>\n> \n>\n> Released under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported license.> [CC BY-NC-SA 3.0](http://creativecommons.org/licenses/by-nc-sa/3.0/) \n>\n> \n>\n> [isbn-10:]{.textsc} > [isbn-13:]{.textsc} 978-1-939311-04-7> [(epub)]{.textsc}\n>\n> \n>\n> The Machine Intelligence Research Institute gratefully acknowledges each of the authors for their ideas and contributions toward this important topic. Special thanks to Carl Shulman and James Miller for their guest posts in the debate.\n>\n> All chapters and comments are written by and copyright their respective authors. Book cover created by Weni Pratiwi and Alex Vermeer.\n> :::\n\n[]{#AI-FOOM-Debateli1.html}\n\n## []{#AI-FOOM-Debateli1.html#x2-1000}Contents {.likechapterHead}\n\n::: {.tableofcontents}\n[[Foreword](../Text/AI-FOOM-Debatech1.html#x3-2000)]{.chapterToc}[I [Prologue](../Text/AI-FOOM-Debatepa1.html#x4-3000I)]{.partToc}[1. [Fund *UberTool*?](../Text/AI-FOOM-Debatech2.html#x5-40001)---Robin Hanson]{.chapterToc}[2. [Engelbart as *UberTool*?](../Text/AI-FOOM-Debatech3.html#x6-50002)---Robin Hanson]{.chapterToc}[3. [Friendly Teams](../Text/AI-FOOM-Debatech4.html#x7-60003)---Robin Hanson]{.chapterToc}[4. [Friendliness Factors](../Text/AI-FOOM-Debatech5.html#x8-70004)---Robin Hanson]{.chapterToc}[5. [The Weak Inside View](../Text/AI-FOOM-Debatech6.html#x9-80005)---Eliezer Yudkowsky]{.chapterToc}[6. [Setting the Stage](../Text/AI-FOOM-Debatech7.html#x10-90006)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[7. [The First World Takeover](../Text/AI-FOOM-Debatech8.html#x11-100007)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[8. [Abstraction, Not Analogy](../Text/AI-FOOM-Debatech9.html#x12-110008)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[9. [Whence Your Abstractions?](../Text/AI-FOOM-Debatech10.html#x13-120009)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[II [Main Sequence](../Text/AI-FOOM-Debatepa2.html#x14-13000II)]{.partToc}[10. [AI Go Foom](../Text/AI-FOOM-Debatech11.html#x15-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[11. [Optimization and the Intelligence Explosion](../Text/AI-FOOM-Debatech12.html#x16-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[12. [Eliezer's Meta-level Determinism](../Text/AI-FOOM-Debatech13.html#x17-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[13. [Observing Optimization](../Text/AI-FOOM-Debatech14.html#x18-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[14. [Life's Story Continues](../Text/AI-FOOM-Debatech15.html#x19-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[15. 
[Emulations Go Foom](../Text/AI-FOOM-Debatech16.html#x20-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[16. [Brain Emulation and Hard Takeoff](../Text/AI-FOOM-Debatech17.html#x21-)---Carl Shulman]{.chapterToc}[17. [Billion Dollar Bots](../Text/AI-FOOM-Debatech18.html#x22-)---James Miller]{.chapterToc}[18. [Surprised by Brains](../Text/AI-FOOM-Debatech19.html#x23-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[19. [\"Evicting\" Brain Emulations](../Text/AI-FOOM-Debatech20.html#x24-)---Carl Shulman]{.chapterToc}[20. [Cascades, Cycles, Insight . . .](../Text/AI-FOOM-Debatech21.html#x25-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[21. [When Life Is Cheap, Death Is Cheap](../Text/AI-FOOM-Debatech22.html#x26-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[22. [. . . Recursion, Magic](../Text/AI-FOOM-Debatech23.html#x27-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[23. [Abstract/Distant Future Bias](../Text/AI-FOOM-Debatech24.html#x28-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[24. [Engelbart: Insufficiently Recursive](../Text/AI-FOOM-Debatech25.html#x29-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[25. [Total Nano Domination](../Text/AI-FOOM-Debatech26.html#x30-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[26. [Dreams of Autarky](../Text/AI-FOOM-Debatech27.html#x31-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[27. [Total Tech Wars](../Text/AI-FOOM-Debatech28.html#x32-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[28. [Singletons Rule OK](../Text/AI-FOOM-Debatech29.html#x33-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[29. [Stuck In Throat](../Text/AI-FOOM-Debatech30.html#x34-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[30. [Disappointment in the Future](../Text/AI-FOOM-Debatech31.html#x35-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[31. [I Heart Cyc](../Text/AI-FOOM-Debatech32.html#x36-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[32. [Is the City-ularity Near?](../Text/AI-FOOM-Debatech33.html#x37-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[33. [Recursive Self-Improvement](../Text/AI-FOOM-Debatech34.html#x38-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[34. [Whither Manufacturing?](../Text/AI-FOOM-Debatech35.html#x39-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[35. [Hard Takeoff](../Text/AI-FOOM-Debatech36.html#x40-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[36. [Test Near, Apply Far](../Text/AI-FOOM-Debatech37.html#x41-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[37. [Permitted Possibilities and Locality](../Text/AI-FOOM-Debatech38.html#x42-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[38. [Underconstrained Abstractions](../Text/AI-FOOM-Debatech39.html#x43-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[39. [Beware Hockey-Stick Plans](../Text/AI-FOOM-Debatech40.html#x44-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[40. [Evolved Desires](../Text/AI-FOOM-Debatech41.html#x45-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[41. [Sustained Strong Recursion](../Text/AI-FOOM-Debatech42.html#x46-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[42. [Friendly Projects vs. 
Products](../Text/AI-FOOM-Debatech43.html#x47-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[43. [Is That Your True Rejection?](../Text/AI-FOOM-Debatech44.html#x48-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[44. [Shared AI Wins](../Text/AI-FOOM-Debatech45.html#x49-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[45. [Artificial Mysterious Intelligence](../Text/AI-FOOM-Debatech46.html#x50-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[46. [Wrapping Up](../Text/AI-FOOM-Debatech47.html#x51-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[47. [True Sources of Disagreement](../Text/AI-FOOM-Debatech48.html#x52-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[48. [The Bad Guy Bias](../Text/AI-FOOM-Debatech49.html#x53-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[49. [Disjunctions, Antipredictions, Etc.](../Text/AI-FOOM-Debatech50.html#x54-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[50. [Are AIs *Homo Economicus*?](../Text/AI-FOOM-Debatech51.html#x55-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[51. [Two Visions Of Heritage](../Text/AI-FOOM-Debatech52.html#x56-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[52. [The Mechanics of Disagreement](../Text/AI-FOOM-Debatech53.html#x57-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[III [Conclusion](../Text/AI-FOOM-Debatepa3.html#x58-57000III)]{.partToc}[53. [What Core Argument?](../Text/AI-FOOM-Debatech54.html#x59-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[54. [What I Think, If Not Why](../Text/AI-FOOM-Debatech55.html#x60-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[55. [Not Taking Over the World](../Text/AI-FOOM-Debatech56.html#x61-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[IV [Postscript](../Text/AI-FOOM-Debatepa4.html#x62-61000IV)]{.partToc}[56. [We Agree: Get Froze](../Text/AI-FOOM-Debatech57.html#x63-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[57. [You Only Live Twice](../Text/AI-FOOM-Debatech58.html#x64-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[58. [Hanson-Yudkowsky Jane Street Debate 2011](../Text/AI-FOOM-Debatech59.html#x65-)]{.chapterToc}[---Robin Hanson and Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[59. [Debating Yudkowsky](../Text/AI-FOOM-Debatech60.html#x66-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[60. [Foom Debate, Again](../Text/AI-FOOM-Debatech61.html#x67-)]{.chapterToc}[---Robin Hanson]{style=\"line-height: 24px;\"}[61. [AI-Foom Debate Summary](../Text/AI-FOOM-Debatech62.html#x68-)---Kaj Sotala]{.chapterToc}[62. [Intelligence Explosion Microeconomics](../Text/AI-FOOM-Debatech63.html#x69-)]{.chapterToc}[---Eliezer Yudkowsky]{style=\"line-height: 24px;\"}[[Bibliography](../Text/AI-FOOM-Debateli2.html#Q1-70-112)]{.chapterToc}\n:::\n\n[]{#AI-FOOM-Debatech1.html}\n\n## []{#AI-FOOM-Debatech1.html#x3-2000}Foreword {.chapterHead}\n\n{.dink}\n\nIn late 2008, economist Robin Hanson and AI theorist Eliezer Yudkowsky conducted an online debate about the future of artificial intelligence, and in particular about whether generally intelligent AIs will be able to improve their own capabilities very quickly (a.k.a. \"foom\"). James Miller and Carl Shulman also contributed guest posts to the debate.\n\nThe original debate took place in a long series of blog posts, which are collected here. 
This book also includes a transcript of a 2011 in-person debate between Hanson and Yudkowsky on this subject, a summary of the debate written by Kaj Sotala, and a 2013 technical report on AI takeoff dynamics (\"intelligence explosion microeconomics\") written by Yudkowsky.\n\nComments from the authors are included at the end of each chapter, along with a link to the original post. The curious reader is encouraged to use these links to view the original posts and all comments. This book contains minor updates, corrections, and additional citations.\n\n[]{#AI-FOOM-Debatepa1.html}\n\n# []{#AI-FOOM-Debatepa1.html#x4-3000I}[Part I ]{.titlemark}Prologue {.partHead}\n\n``{=html}\n\n{.dink}\n\n[]{#AI-FOOM-Debatech2.html}\n\n## []{#AI-FOOM-Debatech2.html#x5-40001}[Chapter 1]{.titlemark} Fund *UberTool*? {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [12 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nSome companies specialize in making or servicing tools, and some even specialize in redesigning and inventing tools. All these tool companies use tools themselves. Let us say that tool type A \"aids\" tool type B if tools of type A are used when improving tools of type B. The aiding graph can have cycles, such as when A aids B aids C aids D aids A.\n\nSuch tool aid cycles contribute to progress and growth. Sometimes a set of tool types will stumble into conditions especially favorable for mutual improvement. When the aiding cycles are short and the aiding relations are strong, a set of tools may improve together especially quickly. Such favorable storms of mutual improvement usually run out quickly, however, and in all of human history [no more than three](http://www.overcomingbias.com/2008/06/meta-is-max---i.html) storms have had a large and sustained enough impact to substantially change world economic growth rates.^[1](#AI-FOOM-Debatech2.html#enz.1)^[]{#AI-FOOM-Debatech2.html#enz.1.backref}\n\nImagine you are a venture capitalist reviewing a proposed business plan. *UberTool Corp* has identified a candidate set of mutually aiding tools, and plans to spend millions pushing those tools through a mutual improvement storm. While *UberTool* may sell some minor patents along the way, *UberTool* will keep its main improvements to itself and focus on developing tools that improve the productivity of its team of tool developers.\n\nIn fact, *UberTool* thinks that its tool set is so fantastically capable of mutual improvement, and that improved versions of its tools would be so fantastically valuable and broadly applicable, *UberTool* does not plan to stop their closed self-improvement process until they are in a position to suddenly burst out and basically \"take over the world.\" That is, at that point their costs would be so low they could enter and dominate most industries.\n\nNow given such enormous potential gains, even a very tiny probability that *UberTool* could do what they planned might entice you to invest in them. But even so, just what exactly would it take to convince you *UberTool* had even such a tiny chance of achieving such incredible gains?\n\n[]{#AI-FOOM-Debatech2.html#likesection.1}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/fund-ubertool.html#comment-518242019): . . . 
I'll offer my own intuitive answer to the above question: You've got to be doing something that's the same order of Cool as the invention of \"animal brains, human brains, farming, and industry.\" I think this is the wrong list, really; \"farming\" sets too low a standard. And certainly venture capitalists have a tendency and a motive to exaggerate how neat their projects are.\n>\n> But if, without exaggeration, you find yourself saying, \"Well, that looks like a much larger innovation than farming\"---so as to leave some safety margin---then why shouldn't it have at least that large an impact?\n>\n> However, I would be highly skeptical of an *UberTool Corp* that talked about discounted future cash flows and return on investment. I would be suspicious that they weren't acting the way I would expect someone to act if they really believed in their *UberTool*.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/11/fund-ubertool.html) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech2.html#enz.1} [1](#AI-FOOM-Debatech2.html#enz.1.backref). []{#AI-FOOM-Debatech2.html#cite.0.Hanson.2008h}Robin Hanson, \"In Innovation, Meta is Max,\" *Overcoming Bias* (blog), June 15, 2008, .\n\n[]{#AI-FOOM-Debatech3.html}\n\n## []{#AI-FOOM-Debatech3.html#x6-50002}[Chapter 2]{.titlemark} Engelbart as *UberTool*? {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [13 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nYesterday I [described](../Text/AI-FOOM-Debatech2.html#x5-40001) *UberTool*, an imaginary company planning to push a set of tools through a mutual-improvement process; their team would improve those tools, and then use those improved versions to improve them further, and so on through a rapid burst until they were in a position to basically \"take over the world.\" I asked what it would take to convince you their plan was reasonable, and got lots of thoughtful answers.\n\nDouglas Engelbart is the person I know who came closest to enacting such a *UberTool* plan. His seminal 1962 paper, \"[Augmenting Human Intellect: A Conceptual Framework](http://www.dougengelbart.org/pubs/augment-3906.html),\" proposed using computers to create such a rapidly improving tool set.^[1](#AI-FOOM-Debatech3.html#enz.2)^[]{#AI-FOOM-Debatech3.html#enz.2.backref} He understood not just that computer tools were especially open to mutual improvement, but also a lot about what those tools would look like. [Wikipedia](http://en.wikipedia.org/w/index.php?title=Douglas_Engelbart&oldid=251218108):\n\n> \\[Engelbart\\] is best known for inventing the computer mouse . . . \\[and\\] as a pioneer of human-computer interaction whose team developed hypertext, networked computers, and precursors to GUIs.^[2](#AI-FOOM-Debatech3.html#enz.3)^[]{#AI-FOOM-Debatech3.html#enz.3.backref}\n\nDoug led a team who developed a rich set of tools including a working hypertext publishing system. 
His 1968 \"[Mother of all Demos](http://en.wikipedia.org/w/index.php?title=The_Mother_of_All_Demos&oldid=242319216)\" to a thousand computer professionals in San Francisco\n\n> featured the first computer mouse the public had ever seen, as well as introducing interactive text, video conferencing, teleconferencing, email and hypertext \\[= the web\\].^[3](#AI-FOOM-Debatech3.html#enz.4)^[]{#AI-FOOM-Debatech3.html#enz.4.backref}\n\nNow to his credit, Doug never suggested that his team, even if better funded, might advance so far so fast as to \"take over the world.\" But he did think it could go far (his [Bootstrap Institute](http://dougengelbart.org/) still pursues his vision), and it is worth pondering just how far it was reasonable to expect Doug's group could go.\n\nTo review, soon after the most powerful invention of his century appeared, Doug Engelbart understood what few others did---not just that computers could enable fantastic especially-mutually-improving tools, but lots of detail about what those tools would look like. Doug correctly saw that computer tools have many synergies, offering tighter than usual loops of self-improvement. He envisioned a rapidly self-improving team focused on developing tools to help them develop better tools, and then actually oversaw a skilled team pursuing his vision for many years. This team created working systems embodying dramatically prescient features, and wowed the computer world with a dramatic demo.\n\nWasn't this a perfect storm for a tool-takeoff scenario? What odds would have been reasonable to assign to Doug's team \"taking over the world\"?\n\n[]{#AI-FOOM-Debatech3.html#likesection.2}\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/11/engelbarts-uber.html) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech3.html#enz.2} [1](#AI-FOOM-Debatech3.html#enz.2.backref). []{#AI-FOOM-Debatech3.html#cite.0.Engelbart.1962}Douglas C. Engelbart, *Augmenting Human Intellect: A Conceptual Framework*, technical report (Menlo Park, CA: Stanford Research Institute, October 1962), .\n\n[]{#AI-FOOM-Debatech3.html#enz.3} [2](#AI-FOOM-Debatech3.html#enz.3.backref). []{#AI-FOOM-Debatech3.html#cite.0.WP.Douglas-Engelbart}*Wikipedia*, s.v. \"Douglas Engelbart,\" accessed November 12, 2008, .\n\n[]{#AI-FOOM-Debatech3.html#enz.4} [3](#AI-FOOM-Debatech3.html#enz.4.backref). []{#AI-FOOM-Debatech3.html#cite.0.WP.Mother-of-all-Demos}*Wikipedia*, s.v. \"The Mother of All Demos,\" accessed October 1, 2008, .\n\n[]{#AI-FOOM-Debatech4.html}\n\n## []{#AI-FOOM-Debatech4.html#x7-60003}[Chapter 3]{.titlemark} Friendly Teams {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [15 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nWednesday I [described](../Text/AI-FOOM-Debatech2.html#x5-40001) *UberTool*, an imaginary firm planning to push a set of tools through a rapid mutual-improvement burst until they were in a position to basically \"take over the world.\" I asked when such a plan could be reasonable.\n\nThursday I [noted](../Text/AI-FOOM-Debatech3.html#x6-50002) that Doug Engelbart understood in '62 that computers were the most powerful invention of his century, and could enable especially-mutually-improving tools. He understood lots of detail about what those tools would look like long before others, and oversaw a skilled team focused on his tools-improving-tools plan. 
That team pioneered graphic user interfaces and networked computers and in '68 introduced the world to the mouse, videoconferencing, email, and the web.\n\nI asked if this wasn't ideal for an *UberTool* scenario, where a small part of an old growth mode \"takes over\" most of the world via having a head start on a new faster growth mode. Just as humans displaced chimps, farmers displaced hunters, and industry displaced farming, would a group with this much of a head start on such a general better tech have a decent shot at displacing industry folks? And if so, shouldn't the rest of the world have worried about how \"friendly\" they were?\n\nIn fact, while Engelbart's ideas had important legacies, his team didn't come remotely close to displacing much of anything. He lost most of his funding in the early 1970s, and his team dispersed. Even though Engelbart understood key elements of tools that today greatly improve team productivity, his team's tools did not seem to have enabled them to be radically productive, even at the task of improving their tools.\n\nIt is not so much that Engelbart missed a few key insights about what computer productivity tools would look like. I doubt it would have made much difference had he traveled in time to see a demo of modern tools. The point is that most tools require lots more than a few key insights to be effective---they also require thousands of small insights that usually accumulate from a large community of tool builders and users.\n\nSmall teams have at times suddenly acquired disproportionate power, and I'm sure their associates who anticipated this possibility used the usual human ways to consider that team's \"friendliness.\" But I can't recall a time when such sudden small team power came from an *UberTool* scenario of rapidly mutually improving tools.\n\nSome say we should worry that a small team of AI minds, or even a single mind, will find a way to rapidly improve themselves and take over the world. But what makes that scenario reasonable if the *UberTool* scenario is not?\n\n[]{#AI-FOOM-Debatech4.html#likesection.3}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/englebart-not-r.html#comment-518250164): What, in your perspective, distinguishes Doug Engelbart from the two previous occasions in history where a world takeover successfully occurred? I'm not thinking of farming or industry, of course.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/englebart-not-r.html#comment-518250234): Eliezer, I discussed what influences transition inequality [here](http://www.overcomingbias.com/2008/06/singularity-out.html).^[1](#AI-FOOM-Debatech4.html#enz.5)^[]{#AI-FOOM-Debatech4.html#enz.5.backref} . . .\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/11/englebart-not-r.html) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech4.html#enz.5} [1](#AI-FOOM-Debatech4.html#enz.5.backref). 
[]{#AI-FOOM-Debatech4.html#cite.0.Hanson.2008b}Robin Hanson, \"Outside View of the Singularity,\" *Overcoming Bias* (blog), June 20, 2008, .\n\n[]{#AI-FOOM-Debatech5.html}\n\n## []{#AI-FOOM-Debatech5.html#x8-70004}[Chapter 4]{.titlemark} Friendliness Factors {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [16 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nImagine several firms competing to make the next generation of some product, like a lawn mower or cell phone. What factors influence variance in their product quality (relative to cost)? That is, how much better will the best firm be relative to the average, second best, or worst? Larger variance factors should make competitors worry more that this round of competition will be their last. Here are a few factors:\n\n1. [**Resource Variance**---The more competitors vary in resources, the more performance varies.]{#AI-FOOM-Debatech5.html#x8-7002x1}\n2. [**Cumulative Advantage**---The more prior wins help one win again, the more resources vary.]{#AI-FOOM-Debatech5.html#x8-7004x2}\n3. [**Grab It First**---If the cost to grab and defend a resource is much less than its value, the first to grab can gain a further advantage.]{#AI-FOOM-Debatech5.html#x8-7006x3}\n4. [**Competitor Count**---With more competitors, the best exceeds the second best less, but exceeds the average more.]{#AI-FOOM-Debatech5.html#x8-7008x4}\n5. [**Competitor Effort**---The longer competitors work before their performance is scored, or the more resources they spend, the more scores vary.]{#AI-FOOM-Debatech5.html#x8-7010x5}\n6. [**Lumpy Design**---The more quality depends on a few crucial choices, relative to many small choices, the more quality varies.]{#AI-FOOM-Debatech5.html#x8-7012x6}\n7. [**Interdependence**---When firms need inputs from each other, winner gains are also supplier gains, reducing variance.]{#AI-FOOM-Debatech5.html#x8-7014x7}\n8. [**Info Leaks**---The more info competitors can gain about others' efforts, the more the best will be copied, reducing variance.]{#AI-FOOM-Debatech5.html#x8-7016x8}\n9. [**Shared Standards**---Competitors sharing more standards and design features in info, process, or product can better understand and use info leaks.]{#AI-FOOM-Debatech5.html#x8-7018x9}\n10. [**Legal Barriers**---May prevent competitors from sharing standards, info, inputs.]{#AI-FOOM-Debatech5.html#x8-7020x10}\n11. [**Anti-Trust**---Social coordination may prevent too much winning by a few.]{#AI-FOOM-Debatech5.html#x8-7022x11}\n12. [**Sharing Deals**---If firms own big shares in each other, or form a co-op, or just share values, they may mind less if others win. Lets them tolerate more variance, but also share more info.]{#AI-FOOM-Debatech5.html#x8-7024x12}\n13. [**Niche Density**---When each competitor can adapt to a different niche, they may all survive.]{#AI-FOOM-Debatech5.html#x8-7026x13}\n14. [**Quality Sensitivity**---Demand/success may be very sensitive, or not very sensitive, to quality.]{#AI-FOOM-Debatech5.html#x8-7028x14}\n15. [**Network Effects**---Users may prefer to use the same product regardless of its quality.]{#AI-FOOM-Debatech5.html#x8-7030x15}\n16. [\\[*What factors am I missing? Tell me and I'll extend the list.*\\]]{#AI-FOOM-Debatech5.html#x8-7032x16}\n\nSome key innovations in history were associated with very high variance in competitor success. For example, our form of life seems to have eliminated all trace of any other forms on Earth. 
On the other hand, farming and industry innovations [were associated with](http://www.overcomingbias.com/2008/06/singularity-ine.html) much less variance. I attribute this mainly to info becoming [much leakier](http://www.overcomingbias.com/2008/06/singularity-out.html), in part due to more shared standards, which seems to bode well for our future.\n\nIf you worry that one competitor will severely dominate all others in the next really big innovation, forcing you to worry about its \"friendliness,\" you should want to promote factors that reduce success variance. (Though if you cared mainly about the winning performance level, you'd want more variance.)\n\n[]{#AI-FOOM-Debatech5.html#likesection.4}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/friendliness-fa.html#comment-518249090):\n>\n> > If you worry that the next really big innovation will be \"unfriendly\" in the sense of letting one competitor severely dominate all others . . .\n>\n> This simply isn't the way I use the word \"unFriendly.\" I use it to refer to terminal values and to final behaviors. A single mind that is more powerful than any other on the playing field, but doesn't run around killing people or telling them what to do, can be quite Friendly in both the intuitive sense and the benevolent-terminal-values sense.\n>\n> Calling this post \"Friendliness Factors\" rather than \"Local vs. Global Takeoff\" is needlessly confusing. And I have to seriously wonder---is this the way you had thought I defined \"Friendly AI\"? If so, this would seem to indicate very little familiarity with my positions at all.\n>\n> Or are you assuming that a superior tactical position automatically equates to \"dominant\" behavior in the unpleasant sense, hence \"unFriendly\" in the intuitive sense? This will be true for many possible goal systems, but not ones that have terminal values that assign low utilities to making people unhappy.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/friendliness-fa.html#comment-518249122): Eliezer, yes, sorry---I've just reworded that sentence.\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/friendliness-fa.html#comment-518249203): Okay, with that rewording---i.e., \"These are factors that help determine why, how much, what kind of, and how soon you need to worry about Friendliness\"---I agree with all factors you have listed. I would add the following:\n>\n> - **Structure Variance**---the more differently designed competitors are, the more they will vary. 
Behaves much the same way as Resource Variance and may mitigate against Shared Standards.\n> - **Recursivity**---the speed at which the \"output\" of a competitor, in some sense, becomes a resource input or a variant structure.\n>\n> These factors and the curve of self-optimization implied in Cumulative Advantage are where I put most of my own attention, and it's what I think accounts for human brains taking over but Doug Engelbart failing to do so.\n>\n> Another factor:\n>\n> - **Shared Values/Smooth Payoffs**---the more that \"competitors\" (which are, in this discussion, being described more like runners in a race than business competitors) share each others' values, and the more they are thinking in terms of relatively smooth quantitative payouts and less in terms of being the first to reach the Holy Grail, the more likely they are to share info.\n>\n> (I.e., this is why Doug Engelbart was more likely to share the mouse with fellow scientists than AI projects with different values are to cooperate.)\n>\n> Others who think about these topics often put their focus on:\n>\n> - **Trust-busting**---competitors in aggregate, or a social force outside the set of competitors, try to impose upper limits on power, market share, outlaw certain structures, etc. Has subfactors like Monitoring effectiveness, Enforcement effectiveness and speed, etc.\n> - **Ambition**---competitors that somehow manage not to want superior positions will probably not achieve them.\n> - **Compacts**---competitors that can create and keep binding agreements to share the proceeds of risky endeavors will be less unequal afterward.\n> - **Reproduction**---if successful competitors divide and differentiate they are more likely to create a clade.\n>\n> Probably not exhaustive, but that's what's coming to mind at the moment.\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/friendliness-fa.html#comment-518249255):\n>\n> - **Rivalness/Exclusivity**---a good design can in principle be used by more than one actor, unless patents prevent it. Versus one AI that takes over all the poorly defended computing power on the Internet may then defend it against other AIs.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/friendliness-fa.html#comment-518249284): . . . I edited the list to include many of your suggestions. Not sure I understand \"recursivity.\" I don't see that AIs have more cumulative advantage than human tool teams, and I suspect this CA concept is better broken into components.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/11/friendliness-fa.html) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech6.html}\n\n## []{#AI-FOOM-Debatech6.html#x9-80005}[Chapter 5]{.titlemark} The Weak Inside View {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [18 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n**Followup to:** [The Outside View's Domain](http://lesswrong.com/lw/ri/the_outside_views_domain/)When I met Robin in Oxford for a recent conference, we had a preliminary discussion on the Intelligence Explosion---this is where Robin suggested using [production functions](http://lesswrong.com/lw/vd/intelligence_in_economics/). 
And at one point Robin said something like, \"Well, let's see whether your theory's predictions fit previously observed growth-rate curves,\" which surprised me, because I'd never thought of that at all.\n\nIt had never occurred to me that my view of optimization ought to produce quantitative predictions. It seemed like something only an economist would try to do, as 'twere. (In case it's not clear, sentence one is self-deprecating and sentence two is a compliment to Robin---EY)\n\nLooking back, it's not that I made a choice to deal only in qualitative predictions, but that it didn't really occur to me to do it any other way.\n\nPerhaps I'm prejudiced against the Kurzweilian crowd, and their Laws of Accelerating Change and the like. Way back in the distant beginning that feels like a different person, I went around talking about Moore's Law and the extrapolated arrival time of \"human-equivalent hardware\" à la Moravec. But at some point I figured out that if you weren't exactly reproducing the brain's algorithms, porting cognition to fast serial hardware and to human design instead of evolved adaptation would toss the numbers out the window---and that how much hardware you needed depended on how smart you were---and that sort of thing.\n\nBetrayed, I decided that the whole Moore's Law thing was silly and a corruption of futurism, and I restrained myself to qualitative predictions (and retrodictions) thenceforth.\n\n[]{#AI-FOOM-Debatech6.html#likesection.5} Though this is to some extent [an argument produced after the conclusion](http://lesswrong.com/lw/js/the_bottom_line/), I would explain my reluctance to venture into *quantitative* futurism via the following trichotomy:\n\n- On problems whose pieces are individually *precisely* predictable, you can use the Strong Inside View to calculate a final outcome that has never been seen before---plot the trajectory of the first moon rocket before it is ever launched, or verify a computer chip before it is ever manufactured.\n- On problems that are drawn from a barrel of causally similar problems, where human optimism runs rampant and unforeseen troubles are common, the [Outside View beats the Inside View](http://lesswrong.com/lw/jg/planning_fallacy/). Trying to visualize the course of history piece by piece will turn out to not (for humans) work so well, and you'll be better off assuming a probable distribution of results similar to previous historical occasions---without trying to adjust for all the reasons why *this* time will be different and better.\n- But on problems that are new things under the Sun, where there's a huge change of context and a structural change in underlying causal forces, the [Outside View also fails](http://lesswrong.com/lw/ri/the_outside_views_domain/)---try to use it, and you'll just get into arguments about what is the proper domain of \"similar historical cases\" or what conclusions can be drawn therefrom. In this case, the best we can do is use the Weak Inside View---visualizing the causal process---to produce *loose, qualitative conclusions about only those issues where there seems to be lopsided support*.\n\nSo to me it seems \"obvious\" that my view of optimization is only strong enough to produce loose, qualitative conclusions, and that it can only be matched to its retrodiction of history, or wielded to produce future predictions, on the level of [qualitative physics](http://lesswrong.com/lw/ti/qualitative_strategies_of_friendliness/).\n\n\"Things should speed up here,\" I could maybe say. 
But not \"The doubling time of this exponential should be cut in half.\"\n\nI aspire to a deeper understanding of *intelligence* than this, mind you. But I'm not sure that even perfect Bayesian enlightenment would let me predict *quantitatively* how long it will take an AI to solve various problems in advance of it solving them. That might just rest on features of an unexplored solution space which I can't guess in advance, even though I understand the process that searches.\n\nRobin keeps asking me what I'm getting at by talking about some reasoning as \"deep\" while other reasoning is supposed to be \"surface.\" One thing which makes me worry that something is \"surface\" is when it involves generalizing a level N feature across a shift in level N - 1 causes.\n\nFor example, suppose you say, \"Moore's Law has held for the last sixty years, so it will hold for the next sixty years, even after the advent of superintelligence\" (as Kurzweil seems to believe, since he draws his graphs well past the point where you're buying a billion times human brainpower for \\$1,000).\n\nNow, if the Law of Accelerating Change were an exogenous, ontologically fundamental, precise physical law, then you wouldn't expect it to change with the advent of superintelligence.\n\nBut to the extent that you believe Moore's Law depends on human engineers, and that the timescale of Moore's Law has something to do with the timescale on which human engineers think, then extrapolating Moore's Law across the advent of superintelligence is extrapolating it across a shift in the previous causal generator of Moore's Law.\n\nSo I'm worried when I see generalizations extrapolated *across* a change in causal generators not themselves described---i.e., the generalization itself is on the level of the outputs of those generators and doesn't describe the generators directly.\n\nIf, on the other hand, you extrapolate Moore's Law out to 2015 because it's been reasonably steady up until 2008---well, Reality is still allowed to say, \"So what?\" to a greater extent than we can expect to wake up one morning and find Mercury in Mars's orbit. But I wouldn't bet against you, if you just went ahead and drew the graph.\n\nSo what's \"surface\" or \"deep\" depends on what kind of context shifts you try to extrapolate past.\n\nRobin Hanson [said](http://www.overcomingbias.com/2008/06/singularity-out.html):\n\n> Taking a long historical long view, [we see](http://www.overcomingbias.com/2008/06/economics-of-si.html) steady total growth rates punctuated by rare transitions when new faster growth modes appeared with little warning.^[1](#AI-FOOM-Debatech6.html#enz.6)^[]{#AI-FOOM-Debatech6.html#enz.6.backref} We know of perhaps four such \"singularities\": animal brains (\\~600 MYA), humans (\\~2 MYA), farming (\\~10 kYA), and industry (\\~0.2 kYA). The statistics of previous transitions suggest we are perhaps overdue for another one, and would be substantially overdue in a century. The next transition would change the growth rate rather than capabilities directly, would take a few years at most, and the new doubling time would be a week to a month.^[2](#AI-FOOM-Debatech6.html#enz.7)^[]{#AI-FOOM-Debatech6.html#enz.7.backref}\n\nWhy do these transitions occur? Why have they been similar to each other? Are the same causes still operating? 
Can we expect the next transition to be similar for the same reasons?\n\nOne may of course say, \"I don't know, I just look at the data, extrapolate the line, and venture this guess---the data is more sure than any hypotheses about causes.\" And that will be an interesting projection to make, at least.\n\nBut you shouldn't be surprised at all if Reality says, \"So what?\" I mean---real estate prices went up for a long time, and then they went down. And that didn't even require a tremendous shift in the underlying nature and causal mechanisms of real estate.\n\nTo stick my neck out further: I am *liable to trust the Weak Inside View over a \"surface\" extrapolation*, if the Weak Inside View drills down to a deeper causal level and the balance of support is sufficiently lopsided.\n\nI will go ahead and say, \"I don't care if you say that Moore's Law has held for the last *hundred* years. Human thought was a primary causal force in producing Moore's Law, and your statistics are all over a domain of human neurons running at the same speed. If you substitute better-designed minds running at a million times human clock speed, the rate of progress ought to speed up---*qualitatively* speaking.\"\n\nThat is, the prediction is without giving precise numbers or supposing that it's still an exponential curve; computation might spike to the limits of physics and then stop forever, etc. But I'll go ahead and say that the rate of technological progress ought to *speed up*, given the said counterfactual intervention on underlying causes to increase the thought speed of engineers by a factor of a million. I'll be downright indignant if Reality says, \"So what?\" and has the superintelligence make *slower* progress than human engineers instead. It really does seem like an argument so strong that even Reality ought to be persuaded.\n\nIt would be interesting to ponder what kind of historical track records have prevailed in such a clash of predictions---trying to extrapolate \"surface\" features across shifts in underlying causes without speculating about those underlying causes, versus trying to use the Weak Inside View on those causes and arguing that there is \"lopsided\" support for a qualitative conclusion; in a case where the two came into conflict . . .\n\n. . . kinda hard to think of what that historical case would be, but perhaps I only lack history.\n\nRobin, how surprised would you be if your sequence of long-term exponentials just . . . didn't continue? If the next exponential was too fast, or too slow, or something other than an exponential? To what degree would you be indignant, if Reality said, \"So what?\"\n\n[]{#AI-FOOM-Debatech6.html#likesection.6}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://lesswrong.com/lw/vz/the_weak_inside_view/p1s): It seems reasonable to me to assign a \\~^1^/~4~--^1^/~2~ probability to the previous series not continuing roughly as it has. 
So it would be only one or two bits of surprise for me.\n>\n> I suspect it is near time for you to reveal to us your \"weak inside view,\" i.e., the analysis that suggests to you that hand-coded AI is likely to appear in the next few decades, and that it is likely to appear in the form of a single machine suddenly able to take over the world.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/vz/the_weak_inside_view/) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech6.html#enz.6} [1](#AI-FOOM-Debatech6.html#enz.6.backref). []{#AI-FOOM-Debatech6.html#cite.0.Hanson.2008}Robin Hanson, \"Economics of the Singularity,\" *IEEE Spectrum* 45, no. 6 (2008): 45--50, doi:[10.1109/MSPEC.](http://dx.doi.org/10.1109/MSPEC.).\n\n[]{#AI-FOOM-Debatech6.html#enz.7} [2](#AI-FOOM-Debatech6.html#enz.7.backref). Hanson, [\"Outside View of the Singularity](../Text/AI-FOOM-Debatech4.html#cite.0.Hanson.2008b).\"\n\n[]{#AI-FOOM-Debatech7.html}\n\n## []{#AI-FOOM-Debatech7.html#x10-90006}[Chapter 6]{.titlemark} Setting the Stage {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [18 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nAs Eliezer and I begin to explore our differing views on intelligence explosion, perhaps I should summarize my current state of mind.\n\nWe seem to agree that:\n\n1. [Machine intelligence would be a development of almost unprecedented impact and risk, well worth considering now.]{#AI-FOOM-Debatech7.html#x10-9002x1}\n2. [Feasible approaches include direct hand-coding, based on a few big and lots of little insights, and on emulations of real human brains.]{#AI-FOOM-Debatech7.html#x10-9004x2}\n3. [Machine intelligence will, more likely than not, appear within a century, even if the progress rate to date does not strongly suggest the next few decades.]{#AI-FOOM-Debatech7.html#x10-9006x3}\n4. [Many people say silly things here, and we do better to ignore them than to try to believe the opposite.]{#AI-FOOM-Debatech7.html#x10-9008x4}\n5. [Math and deep insights (especially probability) can be powerful relative to trend fitting and crude analogies.]{#AI-FOOM-Debatech7.html#x10-9010x5}\n6. [Long-term historical trends are suggestive of future events, but not strongly so.]{#AI-FOOM-Debatech7.html#x10-9012x6}\n7. [Some should be thinking about how to create \"friendly\" machine intelligences.]{#AI-FOOM-Debatech7.html#x10-9014x7}\n\nWe seem to disagree modestly about the relative chances of the emulation and direct-coding approaches; I think the first and he thinks the second is more likely to succeed first. Our largest disagreement seems to be on the chances that a single hand-coded version will suddenly and without warning change from nearly powerless to overwhelmingly powerful; I'd put it as less than 1% and he seems to put it as over 10%.\n\nAt a deeper level, these differences seem to arise from disagreements about what sorts of abstractions we rely on, and on how much we rely on our own personal analysis. My style is more to apply standard methods and insights to unusual topics. So I accept at face value the apparent direct-coding progress to date, and the opinions of most old AI researchers that success there seems many decades off. 
Since reasonable trend projections suggest emulation will take about two to six decades, I guess emulation will come first.\n\nThough I have physics and philosophy training, and nine years as a computer researcher, I rely most heavily here on abstractions from folks who study economic growth. These abstractions help make sense of innovation and progress in biology and economies, and can make sense of historical trends, putting apparently dissimilar events into relevantly similar categories. (I'll post more on this soon.) These together suggest a single suddenly superpowerful AI is pretty unlikely.\n\nEliezer seems to instead rely on abstractions he has worked out for himself, not yet much adopted by a wider community of analysts, nor proven over a history of applications to diverse events. While he may yet convince me to value them as he does, it seems to me that it is up to him to show us how his analysis, using his abstractions, convinces him that, more likely than it might otherwise seem, hand-coded AI will come soon and in the form of a single suddenly superpowerful AI.\n\n[]{#AI-FOOM-Debatech7.html#likesection.7}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/setting-the-sta.html#comment-518245226): You give me too much credit. I. J. Good was the one who suggested the notion of an \"intelligence explosion\" due to the positive feedback of a smart mind making itself even smarter. Numerous other AI researchers believe something similar. I might try to describe the \"hard takeoff\" concept in a bit more detail but I am hardly its inventor!\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/setting-the-sta.html#comment-518245309): . . . I didn't mean to imply you had originated the hard takeoff concept. But previous descriptions have been pretty hand-wavy compared to the detail usually worked out when making an argument in the economic growth literature. I want to know what you think is the best presentation and analysis of it, so that I can critique that.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/11/setting-the-sta.html) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech8.html}\n\n## []{#AI-FOOM-Debatech8.html#x11-100007}[Chapter 7]{.titlemark} The First World Takeover {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [19 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nBefore Robin and I move on to talking about the Future, it seems to me wise to check if we have disagreements in our view of the Past. Which might be much easier to discuss---and maybe even resolve. So . . .\n\nIn the beginning was the Bang. For nine billion years afterward, nothing much happened.\n\nStars formed and burned for long periods or short periods depending on their structure, but \"successful\" stars that burned longer or brighter did not pass on their characteristics to other stars. The first replicators were yet to come.\n\nIt was the Day of the Stable Things, when your probability of seeing something was given by its probability of accidental formation times its duration. Stars last a long time; there are many helium atoms.\n\nIt was the Era of Accidents, before the dawn of optimization. You'd only expect to see something with forty [bits of optimization](http://lesswrong.com/lw/va/measuring_optimization_power/) if you looked through a trillion samples. 
Something with a thousand bits' worth of functional complexity? You wouldn't expect to find that in the whole universe.\n\nI would guess that, if you were going to be stuck on a desert island and you wanted to stay entertained as long as possible, then you should sooner choose to examine the complexity of the cells and biochemistry of a single Earthly butterfly, over all the stars and astrophysics in the visible universe beyond Earth.\n\nIt was the Age of Boredom.\n\nThe hallmark of the Age of Boredom was not lack of natural resources---it wasn't that the universe was low on hydrogen---but, rather, the lack of any *cumulative* search. If one star burned longer or brighter, that didn't affect the probability distribution of the next star to form. There was no search but blind search. Everything from scratch, not even looking at the [neighbors of previously successful points](http://lesswrong.com/lw/vp/worse_than_random/). Not hill climbing, not mutation and selection, not even discarding patterns already failed. Just a random sample from the same distribution, over and over again.\n\nThe Age of Boredom ended with the first replicator.\n\n(Or the first replicator to catch on, if there were failed alternatives lost to history---but this seems unlikely, given the Fermi Paradox; a replicator should be more improbable than that, or the stars would teem with life already.)\n\nThough it might be most dramatic to think of a single RNA strand a few dozen bases long, forming by pure accident after who-knows-how-many chances on who-knows-how-many planets, another class of hypotheses deals with catalytic hypercycles---chemicals whose presence makes it more likely for other chemicals to form, with the arrows happening to finally go around in a circle. If so, RNA would just be a crystallization of that hypercycle into a single chemical that could both take on enzymatic shapes and store information in its sequence for easy replication.\n\nThe catalytic hypercycle is worth pondering, since it reminds us that the universe wasn't quite drawing its random patterns from the *same* distribution every time---the formation of a long-lived star made it more likely for a planet to form (if not another star to form), and the formation of a planet made it more likely for amino acids and RNA bases to form in a pool of muck somewhere (if not more likely for planets to form).\n\nIn this flow of probability, patterns in one attractor leading to other attractors becoming stronger, there was finally born a *cycle*---perhaps a single strand of RNA, perhaps a crystal in clay, perhaps a catalytic hypercycle---and that was the dawn.\n\nWhat makes this cycle significant? Is it the large amount of *material* that the catalytic hypercycle or replicating RNA strand could absorb into its pattern?\n\nWell, but any given mountain on Primordial Earth would probably weigh vastly more than the total mass devoted to copies of the first replicator. What effect does mere mass have on optimization?\n\nSuppose the first replicator had a probability of formation of 10^-30^. If that first replicator managed to make 10,000,000,000 copies of itself (I don't know if this would be an overestimate or an underestimate for a tidal pool) then this would increase your probability of encountering the replicator pattern by a factor of 10^10^, the total probability going up to 10^-20^. (If you were observing \"things\" at random, that is, and not just on Earth but on all the planets with tidal pools.) 
So that was a kind of optimization-directed probability flow.\n\nBut vastly more important, in the scheme of things, was this---that the first replicator made copies of itself, and some of those copies were errors.\n\nThat is, *it explored the neighboring regions of the search space*---some of which contained better replicators---and then those replicators ended up with more probability flowing into them, and explored *their* neighborhoods.\n\nEven in the Age of Boredom there were always regions of attractor space that were the gateways to other regions of attractor space. Stars begot planets, planets begot tidal pools. But that's not the same as a replicator begetting a replicator---it doesn't search a *neighborhood*, find something that better matches a criterion (in this case, the criterion of effective replication), and then search *that* neighborhood, over and over.\n\nThis did require a certain amount of raw material to act as replicator feedstock. But the significant thing was not how much material was recruited into the world of replication; the significant thing was the search, and the material just carried out that search. If, somehow, there'd been some way of doing the same search without all that raw material---if there'd just been a little beeping device that determined how well a pattern *would* replicate, and incremented a binary number representing \"how much attention\" to pay to that pattern, and then searched neighboring points in proportion to that number---well, that would have searched just the same. It's not something that evolution *can* do, but if it happened, it would generate the same information.\n\nHuman brains routinely outthink the evolution of whole species, species whose net weights of biological material outweigh a human brain a million times over---the gun against a lion's paws. It's not the amount of raw material, it's the search.\n\nIn the evolution of replicators, the raw material happens to *carry out* the search---but don't think that the key thing is how much gets produced, how much gets consumed. The raw material is just a way of keeping score. True, even in principle, you do need *some* negentropy and *some* matter to *perform the computation*. But the same search could theoretically be performed with much less material---examining fewer copies of a pattern to draw the same conclusions, using more efficient updating on the evidence. Replicators *happen* to use the number of copies produced of themselves as a way of keeping score.\n\nBut what really matters isn't the production, it's the search.\n\nIf, after the first primitive replicators had managed to produce a few tons of themselves, you deleted all those tons of biological material, and substituted a few dozen cells here and there from the future---a single algae, a single bacterium---to say nothing of a whole multicellular *C. elegans* roundworm with a 302-neuron *brain*---then Time would leap forward by billions of years, even if the total mass of Life had just apparently shrunk. The *search* would have leapt ahead, and *production* would recover from the apparent \"setback\" in a handful of easy doublings.\n\nThe first replicator was the first great break in History---the first Black Swan that would have been unimaginable by any surface analogy. 
No extrapolation of previous trends could have spotted it---you'd have had to dive down into causal modeling, in enough detail to visualize the unprecedented search.\n\nNot that I'm saying I *would* have guessed, without benefit of hindsight---if somehow I'd been there as a disembodied and unreflective spirit, knowing only the previous universe as my guide---having no highfalutin concepts of \"intelligence\" or \"natural selection\" because those things didn't exist in my environment---and I had no mental mirror in which to see *myself*. And indeed, who *should* have guessed it, short of godlike intelligence? When all the previous history of the universe contained no break in History that sharp? The replicator was the *first* Black Swan.\n\nMaybe I, seeing the first replicator as a disembodied unreflective spirit, would have said, \"Wow, what an amazing notion---some of the things I see won't form with high probability, or last for long times---they'll be things that are good at copying themselves, instead. It's the new, third reason for seeing a lot of something!\" But would I have been imaginative enough to see the way to amoebas, to birds, to humans? Or would I have just expected it to hit the walls of the tidal pool and stop?\n\nTry telling a disembodied spirit who had watched the whole history of the universe *up to that point* about the birds and the bees, and they would think you were *absolutely and entirely out to lunch*. For nothing *remotely like that* would have been found anywhere else in the universe---and it would obviously take an exponential and *ridiculous* amount of time to accidentally form a pattern like that, no matter how good it was at replicating itself once formed---and as for it happening many times over in a connected ecology, when the first replicator in the tidal pool took such a long time to happen---why, that would just be *madness*. The [Absurdity Heuristic](http://lesswrong.com/lw/j6/why_is_the_future_so_absurd/) would come into play. Okay, it's neat that a little molecule can replicate itself---but this notion of a \"squirrel\" is *insanity*. So far beyond a Black Swan that you can't even call it a swan anymore.\n\nThat first replicator took over the world---in what sense? Earth's crust, Earth's magma, far outweighs its mass of Life. But Robin and I both suspect, I think, that the fate of the universe, and all those distant stars that outweigh us, will end up shaped by Life. So that the universe ends up hanging quite heavily on the existence of that first replicator, and *not* on the counterfactual states of any particular other molecules nearby . . . In that sense, a small handful of atoms once seized the reins of Destiny.\n\nHow? How did the first replicating pattern take over the world? Why didn't all those other molecules get an equal vote in the process?\n\nWell, that initial replicating pattern was doing *some* kind of search---*some* kind of optimization---and nothing else in the Universe was even *trying*. Really it was evolution that took over the world, not the first replicating pattern per se---you don't see many copies of it around anymore. But still, once upon a time the thread of Destiny was seized and concentrated and spun out from a small handful of atoms.\n\nThe first replicator did not set in motion a *clever* optimization process. Life didn't even have sex yet, or DNA to store information at very high fidelity. But the rest of the Universe had zip. 
In the kingdom of blind chance, the myopic optimization process is king.\n\nIssues of \"sharing improvements\" or \"trading improvements\" wouldn't even arise---there were no partners from outside. All the agents, all the actors of our modern world, are descended from that first replicator, and none from the mountains and hills.\n\nAnd that was the story of the First World Takeover, when a shift in the *structure* of optimization---namely, moving from no optimization whatsoever to natural selection---produced a stark discontinuity with previous trends and squeezed the flow of the whole universe's destiny through the needle's eye of a single place and time and pattern.\n\nThat's Life.\n\n[]{#AI-FOOM-Debatech8.html#likesection.8}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://lesswrong.com/lw/w0/the_first_world_takeover/p1t): Eliezer, I can't imagine you really think I disagree with anything important in the above description. I do think it more likely than not that life started before Earth, and so it may have been much less than nine billion years when nothing happened. But that detail hardly matters to the overall picture here.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/w0/the_first_world_takeover/p22): Robin, I didn't imagine you would disagree with my history, but I thought you might disagree with my interpretation or emphasis.\n\n> [Robin Hanson](http://lesswrong.com/lw/w0/the_first_world_takeover/p25): Eliezer, as someone who has been married for twenty-one years, I know better than to try to pick fights about tone or emphasis when more direct and clear points of disagreement can be found. :)\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/w0/the_first_world_takeover/) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech9.html}\n\n## []{#AI-FOOM-Debatech9.html#x12-110008}[Chapter 8]{.titlemark} Abstraction, Not Analogy {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [19 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nI'm not that happy with framing our analysis choices here as \"[surface analogies](http://lesswrong.com/lw/rj/surface_analogies_and_deep_causes/)\" versus \"[inside views](../Text/AI-FOOM-Debatech6.html#x9-80005).\" More useful, I think, to see this as a choice of abstractions. An [abstraction](http://en.wikipedia.org/wiki/Abstraction) (Wikipedia) neglects some details to emphasize others. While random abstractions are useless, we have a rich library of useful abstractions tied to specific useful insights.\n\nFor example, consider the oldest known tool, the [hammer](http://en.wikipedia.org/wiki/Hammer) (Wikipedia). To understand how well an ordinary hammer performs its main function, we can abstract from details of shape and materials. To calculate the kinetic energy it delivers, we need only look at its length, head mass, and recoil energy percentage (given by its bending strength). To check that it can be held comfortably, we need the handle's radius, surface coefficient of friction, and shock absorption ability. 
To estimate error rates we need only consider its length and head diameter.\n\nFor other purposes, we can use other abstractions:\n\n- To see that it is not a good thing to throw at people, we can note it is heavy, hard, and sharp.\n- To see that it is not a good thing to hold high in a lightning storm, we can note it is long and conducts electricity.\n- To evaluate the cost to carry it around in a tool kit, we consider its volume and mass.\n- To judge its suitability as decorative wall art, we consider its texture and color balance.\n- To predict who will hold it when, we consider who owns it, and who they know.\n- To understand its symbolic meaning in a story, we use a library of common hammer symbolisms.\n- To understand its early place in human history, we consider its easy availability and the frequent gains from smashing open shells.\n- To predict when it is displaced by powered hammers, we can focus on the cost, human energy required, and weight of the two tools.\n- To understand its value and cost in our economy, we can focus on its market price and quantity.\n- \\[*I'm sure we could extend this list.*\\]\n\nWhether something is \"similar\" to a hammer depends on whether it has similar *relevant* features. Comparing a hammer to a mask based on their having similar texture and color balance is mere \"surface analogy\" for the purpose of calculating the cost to carry it around, but is a \"deep inside\" analysis for the purpose of judging its suitability as wall art. The issue is which abstractions are how useful for which purposes, not which features are \"deep\" vs. \"surface.\"\n\nMinds are so central to us that we have an enormous range of abstractions for thinking about them. Add that to our abstractions for machines and creation stories, and we have a truly enormous space of abstractions for considering stories about creating machine minds. The issue isn't so much whether any one abstraction is deep or shallow, but whether it is appropriate to the topic at hand.\n\nThe future story of the creation of designed minds must of course differ in exact details from everything that has gone before. But that does not mean that nothing before is informative about it. The whole point of abstractions is to let us usefully compare things that are different, so that insights gained about some become insights about the others.\n\nYes, when you struggle to identify relevant abstractions you may settle for analogizing, i.e., attending to commonly interesting features and guessing based on feature similarity. But not all comparison of different things is analogizing. Analogies are bad not because they use \"surface\" features, but because the abstractions they use do not offer enough relevant insight for the purpose at hand.\n\nI claim academic studies of innovation and economic growth offer relevant abstractions for understanding the future creation of machine minds, and that in terms of these abstractions the previous major transitions, such as humans, farming, and industry, are relevantly similar. Eliezer prefers \"optimization\" abstractions. The issue here is evaluating the suitability of these abstractions for our purposes.\n\n[]{#AI-FOOM-Debatech9.html#likesection.9}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/abstraction-vs.html#comment-518247615): . . . 
The dawn of life, considered as a *complete* event, could not have had its properties predicted by similarity to any other *complete* event before it.\n>\n> But you could, for example, have dropped down to modeling the world on the level of atoms, which would go on behaving similarly to all the other atoms ever observed. It's just that the compound of atoms wouldn't behave similarly to any other compound, with respect to the aspects we're interested in (Life Go FOOM).\n>\n> You could say, \"Probability is flowing between regions of pattern space, the same as before; but look, now there's a cycle; therefore there's this *new* thing going on called *search*.\" There wouldn't be any *search* in history to analogize to, but there would be (on a lower level of granularity) patterns giving birth to other patterns: stars to planets and the like.\n>\n> Causal modeling can tell us about things that are not similar *in their important aspect* to any other compound thing in history, provided that they are made out of sufficiently similar *parts* put together in a new structure.\n>\n> I also note that referring to \"humans, farming, and industry\" as \"the previous major transitions\" is precisely the issue at hand---is this an abstraction that's going to give us a good prediction of \"self-improving AI\" by direct induction/extrapolation, or not?\n>\n> I wouldn't begin to compare the shift from *non-recursive optimization to recursive optimization* to anything else except the dawn of life---and that's not suggesting that we could do inductive extrapolation, it's just a question of \"How large an event?\" There *isn't* anything directly similar to a self-improving AI, in my book; it's a new thing under the Sun, \"like replication once was,\" but not at all the same sort of hammer---if it was, it wouldn't be a new thing under the Sun.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/abstraction-vs.html#comment-518247708): Eliezer, have I completely failed to communicate here? You have previously said nothing is similar enough to this new event for analogy to be useful, so all we have is \"causal modeling\" (though you haven't explained what you mean by this in this context). This post is a reply saying, no, there are more ways using abstractions; analogy and causal modeling are two particular ways to reason via abstractions, but there are many other ways. But here again in the comments you just repeat your previous claim. Can't you see that my long list of ways to reason about hammers isn't well summarized by an analogy vs. causal modeling dichotomy, but is better summarized by noting they use different abstractions? I am of course open to different way to conceive of \"the previous major transitions.\" I have previously tried to conceive of them in terms of sudden growth speedups.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/11/abstraction-vs.html) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech10.html}\n\n## []{#AI-FOOM-Debatech10.html#x13-120009}[Chapter 9]{.titlemark} Whence Your Abstractions? {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [20 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n**Reply to:** [Abstraction, Not Analogy](../Text/AI-FOOM-Debatech9.html#x12-110008)Robin [asks](../Text/AI-FOOM-Debatech9.html#x12-110008):\n\n> Eliezer, have I completely failed to communicate here? 
You have previously said nothing is similar enough to this new event for analogy to be useful, so all we have is \"causal modeling\" (though you haven't explained what you mean by this in this context). This post is a reply saying, no, there are more ways using abstractions; analogy and causal modeling are two particular ways to reason via abstractions, but there are many other ways.\n\nWell . . . it shouldn't be surprising if [you've communicated less than you thought](http://lesswrong.com/lw/ke/illusion_of_transparency_why_no_one_understands/). Two people, both of whom know that disagreement is not allowed, have a persistent disagreement. It doesn't excuse anything, but---wouldn't it be *more* surprising if their disagreement rested on intuitions that were easy to convey in words, and points readily dragged into the light?\n\nI didn't think from the beginning that I was succeeding in communicating. Analogizing Doug Engelbart's mouse to a self-improving AI is for me such a flabbergasting notion---indicating such completely different ways of thinking about the problem---that I am trying to step back and find the differing sources of our differing intuitions.\n\n(Is that such an odd thing to do, if we're really following down the path of not agreeing to disagree?)\n\n\"Abstraction,\" for me, is a word that means a partitioning of possibility---a [boundary](http://lesswrong.com/lw/o0/where_to_draw_the_boundary/) around possible things, events, patterns. They are [in no sense neutral](http://lesswrong.com/lw/np/disputing_definitions/); they act as signposts saying \"lump these things together for predictive purposes.\" To use the word \"singularity\" as ranging over human brains, farming, industry, and self-improving AI is very nearly to finish your thesis right there.\n\nI wouldn't be surprised to find that, in a real AI, 80% of the actual computing crunch goes into drawing the right boundaries to make the actual reasoning possible. The question \"Where do abstractions come from?\" cannot be taken for granted.\n\nBoundaries are drawn by appealing to other boundaries. To draw the boundary \"human\" around things that wear clothes and speak language and have a certain shape, you must have previously noticed the boundaries around clothing and language. And your visual cortex already has a (damned sophisticated) system for categorizing visual scenes into shapes, and the shapes into categories.\n\nIt's very much worth distinguishing between boundaries drawn by noticing a set of similarities, and boundaries drawn by reasoning about causal interactions.\n\nThere's a big difference between saying, \"I predict that Socrates, *like other humans I've observed*, will fall into the class of 'things that die when drinking hemlock' \" and saying, \"I predict that Socrates, whose biochemistry I've observed to have this-and-such characteristics, will have his neuromuscular junction disrupted by the coniine in the hemlock---even though I've never seen that happen, I've seen lots of organic molecules and I know how they behave.\"\n\nBut above all---ask where the abstraction comes from!\n\nTo see that a hammer is not good to hold high in a lightning storm, we draw on pre-existing knowledge that you're not supposed to hold electrically conductive things up to high altitudes---this is a predrawn boundary, found by us in books; probably originally learned from experience and then further explained by theory. 
We just test the hammer to see if it fits in a pre-existing boundary, that is, a boundary we drew before we ever thought about the hammer.\n\nTo evaluate the cost to carry a hammer in a tool kit, you probably visualized the process of putting the hammer in the kit, and the process of carrying it. Its mass determines the strain on your arm muscles. Its volume and *shape*---not just \"volume,\" as you can see as soon as that is pointed out---determine the difficulty of fitting it into the kit. You said, \"volume and mass,\" but that was an approximation, and as soon as I say, \"volume and mass and shape,\" you say, \"Oh, of course that's what I meant\"---based on a causal visualization of trying to fit some weirdly shaped object into a toolkit, or, e.g., a thin ten-foot pin of low volume and high annoyance. So you're redrawing the boundary based on a causal visualization which shows that other characteristics can be relevant *to the consequence you care about*.\n\nNone of your examples talk about drawing *new* conclusions about the hammer by *analogizing it to other things* rather than directly assessing its characteristics in their own right, so it's not all that good an example when it comes to making predictions about self-improving AI by putting it into a group of similar things that includes farming or industry.\n\nBut drawing that particular boundary would already rest on *causal* reasoning that tells you which abstraction to use. Very much an Inside View, and a Weak Inside View, even if you try to go with an Outside View after that.\n\nUsing an \"abstraction\" that covers such massively different things will often be met by a differing intuition that makes a different abstraction, *based on a different causal visualization* behind the scenes. That's what you want to drag into the light---not just say, \"Well, I expect this Transition to resemble past Transitions.\"\n\nRobin [said](../Text/AI-FOOM-Debatech9.html#x12-110008):\n\n> I am of course open to different way to conceive of \"the previous major transitions.\" I have previously tried to conceive of them in terms of sudden growth speedups.\n\nIs that the root source for your abstraction---\"things that do sudden growth speedups\"? I mean . . . is that really what you want to go with here?\n\n[]{#AI-FOOM-Debatech10.html#likesection.10}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://lesswrong.com/lw/w1/whence_your_abstractions/p2e): *Everything* is new to us at some point; we are always trying to make sense of new things by using the abstractions we have collected from trying to understand all the old things.\n>\n> We are always trying to use our best abstractions to directly assess their characteristics in their own right. Even when we use analogies that is the goal. I said the abstractions I rely on most here come from the economic growth literature. They are not just some arbitrary list of prior events.\n\n> [Robin Hanson](http://lesswrong.com/lw/w1/whence_your_abstractions/p2i): To elaborate, as I understand it a distinctive feature of your scenario is a sudden growth speedup, due to an expanded growth feedback channel. This is the growth of an overall capability of a total mostly autonomous system whose capacity is mainly determined by its \"knowledge,\" broadly understood. The economic growth literature has many useful abstractions for understanding such scenarios. 
These abstractions have been vetted over decades by thousands of researchers, trying to use them to understand other systems \"like\" this, at least in terms of these abstractions.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/w1/whence_your_abstractions/) for all comments.\n:::\n\n[]{#AI-FOOM-Debatepa2.html}\n\n# []{#AI-FOOM-Debatepa2.html#x14-13000II}[Part II ]{.titlemark}Main Sequence {.partHead}\n\n``{=html}\n\n{.dink}\n\n[]{#AI-FOOM-Debatech11.html}\n\n## []{#AI-FOOM-Debatech11.html#x15-}[Chapter 10]{.titlemark} AI Go Foom {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [10 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n> It seems to me that it is up to \\[Eliezer\\] to show us how his analysis, using his abstractions, convinces him that, more likely than it might otherwise seem, hand-coded AI will come soon and in the form of a single suddenly superpowerful AI.\n\nAs [this](../Text/AI-FOOM-Debatech7.html#x10-90006) didn't prod a response, I guess it is up to me to summarize Eliezer's argument as best I can, so I can then respond. Here goes:\n\n> A machine intelligence can directly rewrite its *entire* source code and redesign its entire physical hardware. While human brains can in principle modify themselves arbitrarily, in practice our limited understanding of ourselves means we mainly only change ourselves by thinking new thoughts. All else equal this means that machine brains have an advantage in improving themselves.\n>\n> A mind without arbitrary capacity limits, which focuses on improving itself, can probably do so indefinitely. The growth rate of its \"intelligence\" may be slow when it is dumb, but gets faster as it gets smarter. This growth rate also depends on how many parts of itself it can usefully change. So all else equal, the growth rate of a machine intelligence must be greater than the growth rate of a human brain.\n>\n> No matter what its initial disadvantage, a system with a faster growth rate eventually wins. So if the growth-rate advantage is large enough then yes, a single computer could well go in a few days from less than human intelligence to so smart it could take over the world. QED.\n\nSo, Eliezer, is this close enough to be worth my response? If not, could you suggest something closer?\n\n[]{#AI-FOOM-Debatech11.html#likesection.11}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/ai-go-foom.html#comment-518239388): Well, the format of my thesis is something like:\n>\n> > When you break down the history of optimization into things like optimization resources, optimization efficiency, and search neighborhood and come up with any reasonable set of curves fit to the observed history of optimization so far, including the very few points where object-level innovations have increased optimization efficiency, and then you try to fit the same curves to an AI that is putting a large part of its present idea-production flow into direct feedback to increase optimization efficiency (unlike human minds or any other process witnessed heretofore), then you get a curve which is either flat (below a certain threshold) or FOOM (above that threshold).\n>\n> If that doesn't make any sense, it's cuz I was rushed.\n>\n> Roughly . . . suppose you have a flat linear line, and this is what happens when you have a laborer pushing on a wheelbarrow at constant speed. 
Now suppose that the wheelbarrow's speed is proportional to the position to which it has been pushed so far. Folding a linear graph in on itself will produce an exponential graph. What we're doing is, roughly, taking the graph of humans being pushed on by evolution, and science being pushed on by humans, and folding that graph in on itself. The justification for viewing things this way has to do with asking questions like \"Why did [eurisko]{.textsc} run out of steam?\" and \"Why can't you keep running an optimizing compiler on its own source code to get something faster and faster?\" and considering the degree to which meta-level functions can get encapsulated or improved by object-level pressures, which determine the strength of the connections in the positive feedback loop.\n>\n> I was rushed, so don't blame me if that doesn't make sense either.\n>\n> Consider that as my justification for trying to answer the question in a post, rather than a comment.\n>\n> It seems to me that we are viewing this problem from *extremely* different angles, which makes it more obvious to each of us that the other is just plain wrong than that we trust in the other's rationality; and this is the result of the persistent disagreement. It also seems to me that you expect that you know what I will say next, and are wrong about this, whereas I don't feel like I know what you will say next. It's that sort of thing that makes me reluctant to directly jump to your point in opinion space having assumed that you already took mine fully into account.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/ai-go-foom.html#comment-518239851): . . . Your story seems to depend crucially on what counts as \"object\" vs. \"meta\" (= \"optimization efficiency\") level innovations. It seems as if you think object ones don't increase growth rates while meta ones do. The economic growth literature pays close attention to which changes increase growth rates and which do not. So I will be paying close attention to how you flesh out your distinction and how it compares with the apparently similar economic growth distinction.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/11/ai-go-foom.html) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech12.html}\n\n## []{#AI-FOOM-Debatech12.html#x16-}[Chapter 11]{.titlemark} Optimization and the Intelligence Explosion {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [23 June 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nLest anyone get the wrong impression, I'm juggling multiple balls right now and can't give the latest Intelligence Explosion debate as much attention as it deserves. But lest I annoy my esteemed co-blogger, here is a down payment on my views of the Intelligence Explosion---needless to say, all this is coming way out of order in the posting sequence, but here goes . . .\n\nAmong the topics I haven't dealt with yet, and will have to introduce here very quickly, is the notion of an optimization process. Roughly, this is the idea that your power as a mind is your ability to hit small targets in a large search space---this can be either the space of possible futures (planning) or the space of possible designs (invention). Suppose you have a car, and suppose we already know that your preferences involve travel. Now suppose that you take all the parts in the car, or all the atoms, and jumble them up at random. 
It's very unlikely that you'll end up with a travel artifact at all, even so much as a wheeled cart---let alone a travel artifact that ranks as high in your preferences as the original car. So, relative to your preference ordering, the car is an extremely *improbable* artifact; the power of an optimization process is that it can produce this kind of improbability.\n\nYou can view both intelligence and [natural selection](http://lesswrong.com/lw/kr/an_alien_god/) as special cases of *optimization*: Processes that hit, in a large search space, very small targets defined by implicit preferences. Natural selection prefers more efficient replicators. Human intelligences have more [complex preferences](http://lesswrong.com/lw/l3/thou_art_godshatter/). Neither evolution nor humans have consistent utility functions, so viewing them as \"optimization processes\" is understood to be an approximation. You're trying to get at the *sort of work being done*, not claim that humans or evolution do this work *perfectly*.\n\nThis is how I see the story of life and intelligence---as a story of improbably good designs being produced by optimization processes. The \"improbability\" here is improbability relative to a random selection from the design space, not improbability in an absolute sense---if you have an optimization process around, then \"improbably\" good designs become probable.\n\nObviously I'm skipping over a lot of background material here; but you can already see the genesis of a clash of intuitions between myself and Robin. Robin's looking at populations and resource utilization. I'm looking at production of improbable patterns.\n\nLooking over the history of optimization on Earth up until now, the first step is to conceptually separate the meta level from the object level---separate the *structure of optimization* from *that which is optimized*.\n\nIf you consider biology in the absence of hominids, then on the object level we have things like dinosaurs and butterflies and cats. On the meta level we have things like natural selection of asexual populations, and sexual recombination. The object level, you will observe, is rather more complicated than the meta level. Natural selection is not an *easy* subject and it involves math. But if you look at the anatomy of a whole cat, the cat has dynamics immensely more complicated than \"mutate, recombine, reproduce.\"\n\nThis is not surprising. Natural selection is an *accidental* optimization process that basically just started happening one day in a tidal pool somewhere. A cat is the *subject* of millions of years and billions of years of evolution.\n\nCats have brains, of course, which operate to learn over a lifetime; but at the end of the cat's lifetime that information is thrown away, so it does not accumulate. The [cumulative](http://lesswrong.com/lw/l6/no_evolutions_for_corporations_or_nanodevices/) effects of cat brains upon the world as optimizers, therefore, are relatively small.\n\nOr consider a bee brain, or a beaver brain. A bee builds hives, and a beaver builds dams; but they didn't figure out how to build them from scratch. A beaver can't figure out how to build a hive; a bee can't figure out how to build a dam.\n\nSo animal brains---up until recently---were not major players in the planetary game of optimization; they were *pieces* but not *players*. 
Compared to evolution, brains lacked both generality of optimization power (they could not produce the amazing range of artifacts produced by evolution) and cumulative optimization power (their products did not accumulate complexity over time). For more on this theme see \"[Protein Reinforcement and DNA Consequentialism](http://lesswrong.com/lw/l2/protein_reinforcement_and_dna_consequentialism/).\"^[1](#AI-FOOM-Debatech12.html#enz.8)^[]{#AI-FOOM-Debatech12.html#enz.8.backref}\n\n*Very recently*, certain animal brains have begun to exhibit both generality of optimization power (producing an amazingly wide range of artifacts, in timescales too short for natural selection to play any significant role) and cumulative optimization power (artifacts of increasing complexity, as a result of skills passed on through language and writing).\n\nNatural selection takes [hundreds of generations to do anything](http://lesswrong.com/lw/kt/evolutions_are_stupid_but_work_anyway/) and millions of years for *de novo* complex designs. Human programmers can design a complex machine with a hundred interdependent elements in a single afternoon. This is not surprising, since natural selection is an *accidental* optimization process that basically just started happening one day, whereas humans are *optimized* optimizers handcrafted by natural selection over millions of years.\n\nThe wonder of evolution is not how well it works, but that it works *at all* without being optimized. This is how optimization bootstrapped itself into the universe---starting, as one would expect, from an extremely inefficient accidental optimization process. Which is not the accidental first replicator, mind you, but the accidental first process of natural selection. Distinguish the object level and the meta level!\n\nSince the dawn of optimization in the universe, a certain structural commonality has held across both natural selection and human intelligence . . .\n\nNatural selection *selects on genes*, but, generally speaking, the genes do not turn around and optimize natural selection. The invention of sexual recombination is an exception to this rule, and so is the invention of cells and DNA. And you can see both the power and the *rarity* of such events by the fact that evolutionary biologists structure entire histories of life on Earth around them.\n\nBut if you step back and take a human standpoint---if you think like a programmer---then you can see that natural selection is *still* not all that complicated. We'll try bundling different genes together? We'll try separating information storage from moving machinery? We'll try randomly recombining groups of genes? On an absolute scale, these are the sort of bright ideas that any smart hacker comes up with during the first ten minutes of thinking about system architectures.\n\nBecause natural selection started out so inefficient (as a completely accidental process), this tiny handful of meta-level improvements feeding back in from the replicators---nowhere near as complicated as the structure of a cat---structure the evolutionary epochs of life on Earth.\n\nAnd *after* all that, natural selection is *still* a [blind idiot](http://lesswrong.com/lw/kr/an_alien_god/) of a god. Gene pools can [evolve to extinction](http://lesswrong.com/lw/l5/evolving_to_extinction/), despite all cells and sex.\n\nNow natural selection does feed on itself in the sense that each new adaptation opens up new avenues of further adaptation; but that takes place on the object level. 
The gene pool feeds on its own complexity---but only thanks to the protected interpreter of natural selection that runs in the background and is not itself rewritten or altered by the evolution of species.\n\nLikewise, human beings invent sciences and technologies, but we have not *yet* begun to rewrite the protected structure of the human brain itself. We have a prefrontal cortex and a temporal cortex and a cerebellum, just like the first inventors of agriculture. We haven't started to genetically engineer ourselves. On the object level, science feeds on science, and each new discovery paves the way for new discoveries---but all that takes place with a protected interpreter, the human brain, running untouched in the background.\n\nWe have meta-level inventions like science that try to instruct humans in how to think. But the first person to invent Bayes's Theorem did not become a Bayesian; they could not rewrite themselves, lacking both that knowledge and that power. Our significant innovations in the art of thinking, like writing and science, are so powerful that they structure the course of human history; but they do not rival the brain itself in complexity, and their effect upon the brain is comparatively shallow.\n\nThe present state of the art in [rationality training](http://lesswrong.com/lw/q9/the_failures_of_eld_science/) is not sufficient to turn an arbitrarily selected mortal into Albert Einstein, which shows the power of a few minor genetic quirks of brain design compared to all the self-help books ever written in the twentieth century.\n\nBecause the brain hums away invisibly in the background, people tend to overlook its contribution and take it for granted, and talk as if the simple instruction to \"test ideas by experiment\" or the p \\< 0.05 significance rule were the same order of contribution as an entire human brain. Try telling chimpanzees to test their ideas by experiment and see how far you get.\n\nNow . . . some of us *want* to intelligently design an intelligence that would be capable of intelligently redesigning itself, right down to the level of machine code.\n\nThe machine code at first, and the laws of physics later, would be a protected level of a sort. But that \"protected level\" would not contain the *dynamic of optimization*; the protected levels would not structure the work. The human brain does quite a bit of optimization on its own, and screws up on its own, no matter what you try to tell it in school. But this *fully wraparound recursive optimizer* would have no protected level that was *optimizing*. All the structure of optimization would be subject to optimization itself.\n\nAnd that is a sea change which breaks with the entire past since the first replicator, because it breaks the idiom of a protected meta level.\n\nThe history of Earth up until now has been a history of optimizers spinning their wheels at a constant rate, generating a constant optimization pressure. And creating optimized products, *not* at a constant rate, but at an accelerating rate, because of how object-level innovations open up the pathway to other object-level innovations. But that acceleration is taking place with a protected meta level doing the actual optimizing. Like a search that leaps from island to island in the search space, and good islands tend to be adjacent to even better islands, but the jumper doesn't change its legs. 
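The contrast between a protected meta level and a fully wraparound optimizer can also be put into a toy difference equation. The sketch below is an editorial illustration rather than a model from the post: the constants c and a, the step count, and the functional forms are arbitrary, and nothing is calibrated to real history. It holds optimization power fixed while letting accumulated products make further object-level progress easier, and then compares that with a regime in which the products also feed back into the optimizer's own power:

```python
# Toy sketch, illustrative only: cumulative "optimized product" K over time
# under three regimes.  The constants below are arbitrary.

STEPS = 30
c = 0.05   # how much existing products ease further object-level innovation
a = 0.05   # how strongly products feed back into optimization power (recursive case)

def protected_constant():
    """Protected meta level, no object-level compounding: K grows linearly."""
    K, out = 0.0, []
    for _ in range(STEPS):
        K += 1.0
        out.append(K)
    return out

def protected_compounding():
    """Protected meta level, but each innovation opens paths to more innovations:
    K accelerates (roughly exponentially) while the optimizer itself never changes."""
    K, out = 0.0, []
    for _ in range(STEPS):
        K += 1.0 * (1 + c * K)
        out.append(K)
    return out

def recursive():
    """No protected level: the products of optimization are reinvested in the
    optimizer itself, so power grows with K and K grows faster than exponentially."""
    K, out = 0.0, []
    for _ in range(STEPS):
        P = 1.0 + a * K          # the optimizer improves the optimizer
        K += P * (1 + c * K)
        out.append(K)
    return out

for name, series in [("constant power", protected_constant()),
                     ("object-level compounding", protected_compounding()),
                     ("recursive", recursive())]:
    print(f"{name:25s} K at steps 10/20/30: "
          f"{series[9]:.3g} / {series[19]:.3g} / {series[29]:.3g}")
```

In the continuous limit the recursive regime behaves like dK/dt proportional to (1 + cK)^2, which runs away in finite time. The point is purely qualitative: the difference between a jumper with fixed legs and a jumper that rebuilds its own legs as it goes, not a prediction about any actual growth rate.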
*Occasionally*, a few tiny little changes manage to hit back to the meta level, like sex or science, and then the history of optimization enters a new epoch and everything proceeds faster from there.\n\nImagine an economy without investment, or a university without language, a technology without tools to make tools. Once in a hundred million years, or once in a few centuries, someone invents a hammer.\n\nThat is what optimization has been like on Earth up until now.\n\nWhen I look at the history of Earth, I don't see a history of optimization *over time*. I see a history of *optimization power* in, and *optimized products* out. Up until now, thanks to the existence of almost entirely protected meta levels, it's been possible to split up the history of optimization into epochs, and, within each epoch, graph the cumulative *object-level* optimization *over time*, because the protected level is running in the background and is not itself changing within an epoch.\n\nWhat happens when you build a fully wraparound, recursively self-improving AI? Then you take the graph of \"optimization in, optimized out,\" and fold the graph in on itself. Metaphorically speaking.\n\nIf the AI is weak, it does nothing, because it is not powerful enough to significantly improve itself---like telling a chimpanzee to rewrite its own brain.\n\nIf the AI is powerful enough to rewrite itself in a way that increases its ability to make further improvements, and this reaches all the way down to the AI's full understanding of its own source code and its own design as an optimizer . . . then, even if the graph of \"optimization power in\" and \"optimized product out\" looks essentially the same, the graph of optimization over time is going to look completely different from Earth's history so far.\n\nPeople often say something like, \"But what if it requires exponentially greater amounts of self-rewriting for only a linear improvement?\" To this the obvious answer is, \"Natural selection exerted roughly constant optimization power on the hominid line in the course of coughing up humans; and this doesn't seem to have required exponentially more time for each linear increment of improvement.\"\n\nAll of this is still mere analogic reasoning. A full AGI thinking about the nature of optimization and doing its own AI research and rewriting its own source code is not *really* like a graph of Earth's history folded in on itself. It is a different sort of beast. These analogies are *at best* good for qualitative predictions, and even then I have a large amount of other beliefs not yet posted, which are telling me which analogies to make, *et cetera*.\n\nBut if you want to know why I might be reluctant to extend the graph of biological and economic growth *over time*, into the future and over the horizon of an AI that thinks at transistor speeds and invents self-replicating molecular nanofactories and *improves its own source code*, then there is my reason: You are drawing the wrong graph, and it should be optimization power in versus optimized product out, not optimized product versus time. Draw *that* graph, and the results---in what I would call common sense for the right values of \"common sense\"---are entirely compatible with the notion that a self-improving AI, thinking millions of times faster and armed with molecular nanotechnology, would *not* be bound to one-month economic doubling times. 
Nor bound to cooperation with large societies of equal-level entities with different goal systems, but that's a separate topic.\n\nOn the other hand, if the next Big Invention merely impinged *slightly* on the protected level---if, say, a series of intelligence-enhancing drugs, each good for five IQ points, began to be introduced into society---then I can well believe that the economic doubling time would go to something like seven years, because the basic graphs are still in place, and the fundamental structure of optimization has not really changed all that much, and so you are not generalizing way outside the reasonable domain.\n\nI *really* have a problem with saying, \"Well, I don't know if the next innovation is going to be a recursively self-improving AI superintelligence or a series of neuropharmaceuticals, but *whichever one is the actual case*, I predict it will correspond to an economic doubling time of one month.\" This seems like sheer Kurzweilian thinking to me, as if graphs of Moore's Law are the fundamental reality and all else a mere shadow. One of these estimates is way too slow and one of them is way too fast---he said, eyeballing his mental graph of \"optimization power in vs. optimized product out.\" If we are going to draw graphs at all, I see no reason to privilege graphs against *times*.\n\nI am juggling many balls right now, and am not able to prosecute this dispute properly. Not to mention that I would prefer to have this whole conversation at a time when I had previously done more posts about, oh, say, the notion of an \"optimization process\" . . . But let it at least not be said that I am dismissing ideas out of hand without justification, as though I thought them unworthy of engagement; for this I do not think, and I have my own complex views standing behind my Intelligence Explosion beliefs, as one might well expect.\n\nOff to pack, I've got a plane trip tomorrow.\n\n[]{#AI-FOOM-Debatech12.html#likesection.12}\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/rk/optimization_and_the_singularity/) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech12.html#enz.8} [1](#AI-FOOM-Debatech12.html#enz.8.backref). []{#AI-FOOM-Debatech12.html#cite.0.Yudkowsky.2007f}Eliezer Yudkowsky, \"Protein Reinforcement and DNA Consequentialism,\" *Less Wrong* (blog), November 13, 2007, .\n\n[]{#AI-FOOM-Debatech13.html}\n\n## []{#AI-FOOM-Debatech13.html#x17-}[Chapter 12]{.titlemark} Eliezer's Meta-level Determinism {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [23 June 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nThank you, esteemed co-blogger Eliezer, for your [down payment](../Text/AI-FOOM-Debatech12.html#x16-) on future engagement of our [clash of intuitions](http://www.overcomingbias.com/2008/06/singularity-out.html). I too am about to travel and must return to other distractions which I have neglected.\n\nSome preliminary comments. 
First, to be clear, my estimate of future growth rates based on past trends is intended to be unconditional---I do not claim future rates are independent of which is the next big meta innovation, though I am rather uncertain about which next innovations would have which rates.\n\nSecond, my claim to estimate the impact of the next big innovation and Eliezer's claim to estimate a much larger impact from \"full AGI\" are not yet obviously in conflict---to my knowledge, neither Eliezer nor I claim full AGI will be the next big innovation, nor does Eliezer argue for a full AGI time estimate that conflicts with my estimated timing of the next big innovation.\n\nThird, it seems the basis for Eliezer's [claim](http://lesswrong.com/lw/rj/surface_analogies_and_deep_causes/) that my analysis is untrustworthy \"surface analogies\" vs. his reliable \"deep causes\" is that, while I use long-vetted general social science understandings of factors influencing innovation, he uses his own new untested meta-level determinism theory. So it seems he could accept that those not yet willing to accept his new theory might instead reasonably rely on my analysis.\n\nFourth, while Eliezer outlines his new theory and its implications for overall growth rates, he has as yet said nothing about what his theory implies for transition inequality, and how those implications might differ from my estimates.\n\nOK, now for the meat. My story of everything was told (at least for recent eras) in terms of realized capability, i.e., population and resource use, and was largely agnostic about the specific innovations underlying the key changes. Eliezer's [story](../Text/AI-FOOM-Debatech12.html#x16-) is that key changes are largely driven by structural changes in optimization processes and their protected meta-levels:\n\n> The history of Earth up until now has been a history of optimizers . . . generating a constant optimization pressure. And creating optimized products, not at a constant rate, but at an accelerating rate, because of how object-level innovations open up the pathway to other object-level innovations. . . . *Occasionally*, a few tiny little changes manage to hit back to the meta level, like sex or science, and then the history of optimization enters a new epoch and everything proceeds faster from there. . . .\n>\n> Natural selection selects on genes, but, generally speaking, the genes do not turn around and optimize natural selection. The invention of sexual recombination is an exception to this rule, and so is the invention of cells and DNA. . . . This tiny handful of meta-level improvements feeding back in from the replicators . . . structure the evolutionary epochs of life on Earth. . . .\n>\n> *Very recently*, certain animal brains have begun to exhibit both generality of optimization power . . . and cumulative optimization power . . . as a result of skills passed on through language and writing. . . . We have meta-level inventions like science that try to instruct humans in how to think. . . . Our significant innovations in the art of thinking, like writing and science, are so powerful that they structure the course of human history; but they do not rival the brain itself in complexity, and their effect upon the brain is comparatively shallow. . . .\n>\n> Now . . . some of us *want* to intelligently design an intelligence that would be capable of intelligently redesigning itself, right down to the level of machine code. . . . \\[That\\] breaks the idiom of a protected meta level. . . . 
Then even if the graph of \"optimization power in\" and \"optimized product out\" looks essentially the same, the graph of optimization over time is going to look completely different from Earth's history so far.\n\nOK, so Eliezer's \"[meta is max](http://www.overcomingbias.com/2008/06/meta-is-max---i.html)\" view seems to be a meta-level determinism view, i.e., that capability growth rates are largely determined, in order of decreasing importance, by innovations at three distinct levels:\n\n1. [The dominant optimization process, natural selection, flesh brains with culture, or full AGI]{#AI-FOOM-Debatech13.html#x17-16002x1}\n2. [Improvements behind the protected meta level of such a process, i.e., cells, sex, writing, science]{#AI-FOOM-Debatech13.html#x17-16004x2}\n3. [Key \"object-level\" innovations that open the path for other such innovations]{#AI-FOOM-Debatech13.html#x17-16006x3}\n\nEliezer offers no theoretical argument for us to evaluate supporting this ranking. But his view does seem to make testable predictions about history. It suggests the introduction of natural selection and of human culture coincided with the very largest capability growth rate increases. It suggests that the next largest increases were much smaller and coincided in biology with the introduction of cells and sex, and in humans with the introduction of writing and science. And it suggests other rate increases were substantially smaller.\n\n[]{#AI-FOOM-Debatech13.html#likesection.13} The main dramatic events in the traditional fossil record are, [according](http://hanson.gmu.edu/hardstep.pdf) to one source, Any Cells, Filamentous Prokaryotes, Unicellular Eukaryotes, Sexual Eukaryotes, and Metazoans, at 3.8, 3.5, 1.8, 1.1, and 0.6 billion years ago, respectively.^[1](#AI-FOOM-Debatech13.html#enz.9)^[]{#AI-FOOM-Debatech13.html#enz.9.backref} Perhaps two of these five events are at Eliezer's level two, and none at level one. Relative to these events, the first introduction of human culture isn't remotely as noticeable. While the poor fossil record means we shouldn't expect a strong correspondence between the biggest innovations and dramatic fossil events, we can at least say this data doesn't strongly support Eliezer's ranking.\n\nOur more recent data is better, allowing clearer tests. The last three strong transitions were humans, farming, and industry, and in terms of growth rate changes these seem to be of similar magnitude. Eliezer seems to predict we will discover the first of these was much stronger than the other two. And while the key causes of these transitions have long been hotly disputed, with many theories in play, Eliezer seems to pick specific winners for these disputes: intergenerational culture, writing, and scientific thinking.\n\nI don't know enough about the first humans to comment, but I know enough about farming and industry to say Eliezer seems wrong there. Yes, the introduction of writing did roughly correspond in time with farming, but it just doesn't seem plausible that writing caused farming, rather than vice versa. Few could write and what they wrote didn't help farming much. 
Farming seems more plausibly to have resulted from a scale effect in the accumulation of innovations in abilities to manage plants and animals---we finally knew enough to be able to live off the plants near one place, instead of having to constantly wander to new places.\n\nAlso for industry, the key innovation does not seem to have been a scientific way of thinking---that popped up periodically in many times and places, and by itself wasn't particularly useful. My guess is that the key was the formation of networks of science-like specialists, which wasn't possible until the previous economy had reached a critical scale and density.\n\nNo doubt innovations can be classified according to Eliezer's scheme, and yes, all else equal, relatively meta innovations are probably stronger; but if as the data above suggests this correlation is much weaker than Eliezer expects, that has important implications for how \"full AGI\" would play out. Merely having the full ability to change its own meta level need not give such systems anything like the wisdom to usefully make such changes, and so an innovation producing that mere ability might not be among the most dramatic transitions.\n\n[]{#AI-FOOM-Debatech13.html#likesection.14}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/06/eliezers-meta-l.html#comment-518264679): I feel that I am being perhaps a bit overinterpreted here.\n>\n> For one thing, the thought of \"farming\" didn't cross my mind when I was thinking of major innovations, which tells you something about the optimization viewpoint versus the economic viewpoint.\n>\n> But if I were to try to interpret how farming looks from my viewpoint, it would go like this:\n>\n> 1. [Evolution gives humans language, general causal modeling, and long-range planning.]{#AI-FOOM-Debatech13.html#x17-16008x1}\n> 2. [Humans figure out that sowing seeds causes plants to grow, realize that this could be helpful six months later, and tell their friends and children. No direct significance to optimization.]{#AI-FOOM-Debatech13.html#x17-16010x2}\n> 3. [Some areas go from well-nourished hunter-gatherers to a hundred times as many nutritively deprived farmers. Significance to optimization: there are many more humans around, optimizing . . . maybe slightly worse than they did before, due to poor nutrition. However, you can, in some cases, pour more resources in and get more optimization out, so the object-level trick of farming may have hit back to the meta level in that sense.]{#AI-FOOM-Debatech13.html#x17-16012x3}\n> 4. [Farming skills get good enough that people have excess crops, which are stolen by tax collectors, resulting in the creation of governments, cities, and, above all, *professional specialization*.]{#AI-FOOM-Debatech13.html#x17-16014x4}\n> 5. 
[People in cities invent writing.]{#AI-FOOM-Debatech13.html#x17-16016x5}\n>\n> So that's how I would see the object/meta interplay.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/06/eliezers-meta-l.html#comment-518264708): Eliezer, so even though you [said](../Text/AI-FOOM-Debatech12.html#x16-),\n>\n> > *Occasionally*, a few tiny little changes manage to hit back to the meta level, like sex or science, and then the history of optimization enters a new epoch and everything proceeds faster from there.\n>\n> you did not intend at all to say that when we look at the actual times when \"everything sped up\" we would tend to find such events to have been fundamentally caused by such meta-level changes? Even though you say these \"meta-level improvements . . . structure the evolutionary epochs of life on Earth,\" you did not mean the epochs as observed historically or as defined by when \"everything proceeds faster from there\"? If there is no relation in the past between speedup causes and these key meta-level changes, why worry that a future meta-level change will cause a speedup then?\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/06/eliezers-meta-l.html) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech13.html#enz.9} [1](#AI-FOOM-Debatech13.html#enz.9.backref). []{#AI-FOOM-Debatech13.html#cite.0.Hanson.1998b}Robin Hanson, \"Must Early Life Be Easy? The Rhythm of Major Evolutionary Transitions\" (Unpublished manuscript, September 23, 1998), accessed August 12, 2012, ; []{#AI-FOOM-Debatech13.html#cite.0.Schopf.1994}J. William Schopf, \"Disparate Rates, Differing Fates: Tempo and Mode of Evolution Changed from the Precambrian to the Phanerozoic,\" *Proceedings of the National Academy of Sciences of the United States of America* 91, no. 15 (1994): 6735--6742, doi:[10.1073/pnas.91.15.6735](http://dx.doi.org/10.1073/pnas.91.15.6735).\n\n[]{#AI-FOOM-Debatech14.html}\n\n## []{#AI-FOOM-Debatech14.html#x18-}[Chapter 13]{.titlemark} Observing Optimization {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [21 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n**Followup to:** [Optimization and the Intelligence Explosion](../Text/AI-FOOM-Debatech12.html#x16-)In \"[Optimization and the Intelligence Explosion](../Text/AI-FOOM-Debatech12.html#x16-)\" I pointed out that history since the first replicator, including human history to date, has *mostly* been a case of *nonrecursive* optimization---where you've got one thingy doing the optimizing, and another thingy getting optimized. When evolution builds a better amoeba, that doesn't change the *structure of evolution*---the mutate-reproduce-select cycle.\n\nBut there are exceptions to this rule, such as the invention of sex, which affected the structure of natural selection itself---transforming it to mutate-recombine-mate-reproduce-select.\n\nI was surprised when Robin, in \"[Eliezer's Meta-Level Determinism](../Text/AI-FOOM-Debatech13.html#x17-)\" took that idea and ran with it and [said](../Text/AI-FOOM-Debatech13.html#x17-):\n\n> His view does seem to make testable predictions about history. It suggests the introduction of natural selection and of human culture coincided with the very largest capability growth rate increases. 
It suggests that the next largest increases were much smaller and coincided in biology with the introduction of cells and sex, and in humans with the introduction of writing and science. And it suggests other rate increases were substantially smaller.\n\nIt hadn't occurred to me to try to derive that kind of testable prediction. Why? Well, partially because I'm not an economist. (Don't get me wrong, it was a virtuous step to try.) But also because the whole issue looked to me like it was a lot more complicated than that, so it hadn't occurred to me to try to directly extract predictions.\n\nWhat is this \"capability growth rate\" of which you speak, Robin? There are old, old controversies in evolutionary biology involved here.\n\n[]{#AI-FOOM-Debatech14.html#likesection.15} Just to start by pointing out the obvious---if there are fixed resources available, only so much grass to be eaten or so many rabbits to consume, then any evolutionary \"progress\" that we would recognize as producing a better-designed organism may just result in the displacement of the old allele by the new allele---*not* any increase in the population as a whole. It's quite possible to have a new wolf that expends 10% more energy per day to be 20% better at hunting, and in this case [the sustainable wolf population will decrease](http://lesswrong.com/lw/l5/evolving_to_extinction/) as new wolves replace old.\n\nIf I was going to talk about the effect that a meta-level change might have on the \"optimization velocity\" of natural selection, I would talk about the time for a new adaptation to replace an old adaptation after a shift in selection pressures---not the total population or total biomass or total morphological complexity (see below).\n\nLikewise in human history---farming was an important innovation for purposes of optimization, not because it changed the human brain all that much, but because it meant that there were a hundred times as many brains around; and even more importantly, that there were surpluses that could support specialized professions. But many innovations in human history may have consisted of new, improved, more harmful weapons---which would, if anything, have decreased the sustainable population size (though \"no effect\" is more likely---fewer people means more food means more people).\n\nOr similarly---there's a talk somewhere where either Warren Buffett or Charles Munger mentions how they hate to hear about technological improvements in certain industries---because even if investing a few million can cut the cost of production by 30% or whatever, the barriers to competition are so low that the consumer captures all the gain. So they *have* to invest to keep up with competitors, and the investor doesn't get much return.\n\nI'm trying to measure the optimization velocity of information, not production or growth rates. At the tail end of a very long process, knowledge finally does translate into power---guns or nanotechnology or whatever. But along that long way, if you're measuring the number of material copies of the same stuff (how many wolves, how many people, how much grain), you may not be getting much of a glimpse at optimization velocity. 
Too many complications along the causal chain.\n\nAnd this is not just my problem.\n\nBack in the bad old days of pre-1960s evolutionary biology, it was widely taken for granted that there was such a thing as progress, that it proceeded forward over time, and that modern human beings were at the apex.\n\nGeorge Williams's *Adaptation and Natural Selection*, marking the so-called \"Williams Revolution\" in ev-bio that flushed out a lot of the romanticism and anthropomorphism, spent most of one chapter questioning the seemingly common-sensical metrics of \"progress.\"\n\nBiologists sometimes spoke of \"morphological complexity\" increasing over time. But how do you measure that, exactly? And at what point in life do you measure it if the organism goes through multiple stages? Is an amphibian more advanced than a mammal, since its genome has to store the information for multiple stages of life?\n\n\"There are life cycles enormously more complex than that of a frog,\" Williams wrote.^[1](#AI-FOOM-Debatech14.html#enz.10)^[]{#AI-FOOM-Debatech14.html#enz.10.backref} \"The lowly and 'simple' liver fluke\" goes through stages that include a waterborne stage that swims using cilia, finds and burrows into a snail, and then transforms into a sporocyst; that reproduces by budding to produce redia; these migrate in the snail and reproduce asexually, then transform into cercaria, which, by wiggling a tail, burrow out of the snail and swim to a blade of grass; there they transform into dormant metacercaria; these are eaten by sheep and then hatch into young flukes inside the sheep, then transform into adult flukes, which spawn fluke zygotes . . . So how \"advanced\" is that?\n\nWilliams also pointed out that there would be a limit to how much information evolution could maintain in the genome against degenerative pressures---which seems like a good principle in practice, though I made [some mistakes on *LW* in trying to describe the theory](http://lesswrong.com/lw/ku/natural_selections_speed_limit_and_complexity/).^[2](#AI-FOOM-Debatech14.html#enz.11)^[]{#AI-FOOM-Debatech14.html#enz.11.backref} Taxonomists often take a current form and call the historical trend toward it \"progress,\" but is that *upward* motion, or just substitution of some adaptations for other adaptations in response to changing selection pressures?\n\n\"Today the fishery biologists greatly fear such archaic fishes as the bowfin, garpikes, and lamprey, because they are such outstandingly effective competitors,\" Williams noted.^[3](#AI-FOOM-Debatech14.html#enz.12)^[]{#AI-FOOM-Debatech14.html#enz.12.backref}\n\nSo if I were talking about the effect of, e.g., sex as a meta-level innovation, then I would expect, e.g., an increase in the total biochemical and morphological complexity that could be maintained---the lifting of a previous upper bound, followed by an accretion of information. And I might expect a change in the velocity of new adaptations replacing old adaptations.\n\nBut to get from there to something that shows up in the fossil record---that's not a trivial step.\n\nI recall reading, somewhere or other, about an ev-bio controversy that ensued when one party spoke of the \"sudden burst of creativity\" represented by the Cambrian explosion, and wondered why evolution was proceeding so much more slowly nowadays. 
And another party responded that the Cambrian differentiation was mainly visible *post hoc*---that the groups of animals we have *now* first differentiated from one another *then*, but that *at the time* the differences were not as large as they loom nowadays. That is, the actual velocity of adaptational change wasn't remarkable by comparison to modern times, and only hindsight causes us to see those changes as \"staking out\" the ancestry of the major animal groups.\n\nI'd be surprised to learn that sex had no effect on the velocity of evolution. It looks like it should increase the speed and number of substituted adaptations, and also increase the complexity bound on the total genetic information that can be maintained against mutation. But to go from there to just looking at the fossil record and seeing *faster progress*---it's not just me who thinks that this jump to phenomenology is tentative, difficult, and controversial.\n\nShould you expect more speciation after the invention of sex, or less? The first impulse is to say \"more,\" because sex seems like it should increase the optimization velocity and speed up time. But sex also creates mutually reproducing *populations* that share genes among themselves, as opposed to asexual lineages---so might that act as a centripetal force?\n\nI don't even propose to answer this question, just point out that it is actually quite *standard* for the phenomenology of evolutionary theories---the question of which observables are predicted---to be a major difficulty. Unless you're dealing with really *easy* qualitative questions like \"Should I find rabbit fossils in the Pre-Cambrian?\" (I try to only make predictions about AI, using my theory of optimization, when it looks like an *easy* question.)\n\nYes, it's more convenient for scientists when theories make easily testable, readily observable predictions. But when I look back at the history of life, and the history of humanity, my first priority is to ask, \"What's going on here?\" and only afterward see if I can manage to make non-obvious retrodictions. I can't just start with the goal of having a convenient phenomenology. Or similarly: the theories I use to organize my understanding of the history of optimization to date have lots of parameters, e.g., the optimization-efficiency curve that describes optimization output as a function of resource input, or the question of how many low-hanging fruits exist in the neighborhood of a given search point. Does a larger population of wolves increase the velocity of natural selection, by covering more of the search neighborhood for possible mutations? If so, is that a logarithmic increase with population size, or what?---But I can't just wish my theories into being simpler.\n\nIf Robin has a *simpler* causal model, with fewer parameters, that stands directly behind observables and easily coughs up testable predictions, which fits the data well and obviates the need for my own abstractions like \"optimization efficiency\"---\n\n---then I may have to discard my own attempts at theorizing. 
But observing a series of material growth modes doesn't contradict a causal model of optimization behind the scenes, because it's a pure phenomenology, not itself a causal model---it doesn't say whether a given innovation had any effect on the optimization velocity of the process that produced future object-level innovations that actually changed growth modes, *et cetera*.\n\n[]{#AI-FOOM-Debatech14.html#likesection.16}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://lesswrong.com/lw/w2/observing_optimization/p2p): If you can't usefully connect your abstractions to the historical record, I sure hope you have *some* data you can connect them to. Otherwise I can't imagine how you could have much confidence in them.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/w2/observing_optimization/p2s): Depends on how much stress I want to put on them, doesn't it? If I want to predict that the next growth curve will be an exponential and put bounds around its doubling time, I need a much finer fit to the data than if I only want to ask obvious questions like \"Should I find rabbit fossils in the Pre-Cambrian?\" or \"Do the optimization curves fall into the narrow range that would permit a smooth soft takeoff?\"\n\n> [Robin Hanson](http://lesswrong.com/lw/w2/observing_optimization/p2u): Eliezer, it seems to me that we can't really debate much more until you actually directly make your key argument. If, at it seems to me, you are still in the process of laying out your views tutorial-style, then let's pause until you feel ready.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/w2/observing_optimization/p2v): I think we ran into this same clash of styles last time (i.e., back at Oxford). I try to go through things systematically, locate any possible points of disagreement, resolve them, and continue. You seem to want to jump directly to the disagreement and then work backward to find the differing premises. I worry that this puts things in a more disagreeable state of mind, as it were---conducive to feed-backward reasoning (rationalization) instead of feed-forward reasoning.\n>\n> It's probably also worth bearing in mind that these kinds of metadiscussions are important, since this is something of a trailblazing case here. And that if we really want to set up conditions where we can't agree to disagree, that might imply setting up things in a different fashion than the usual Internet debates.\n\n> [Robin Hanson](http://lesswrong.com/lw/w2/observing_optimization/p2w): When I attend a talk, I don't immediately jump on anything a speaker says that sounds questionable. I wait until they actually make a main point of their talk, and then I only jump on points that seem to matter for that main point. Since most things people say actually don't matter for their main point, I find this to be a very useful strategy. I will be very surprised indeed if everything you've said mattered regarding our main point of disagreement.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/w2/observing_optimization/) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech14.html#enz.10} [1](#AI-FOOM-Debatech14.html#enz.10.backref). []{#AI-FOOM-Debatech14.html#cite.0.Williams.1966}George C. 
Williams, *Adaptation and Natural Selection: A Critique of Some Current Evolutionary Thought*, Princeton Science Library (Princeton, NJ: Princeton University Press, 1966).\n\n[]{#AI-FOOM-Debatech14.html#enz.11} [2](#AI-FOOM-Debatech14.html#enz.11.backref). []{#AI-FOOM-Debatech14.html#cite.0.Yudkowsky.2007g}Eliezer Yudkowsky, \"Natural Selection's Speed Limit and Complexity Bound,\" *Less Wrong* (blog), November 4, 2007, .\n\n[]{#AI-FOOM-Debatech14.html#enz.12} [3](#AI-FOOM-Debatech14.html#enz.12.backref). Williams, [*Adaptation and Natural Selection*](#AI-FOOM-Debatech14.html#cite.0.Williams.1966).\n\n[]{#AI-FOOM-Debatech15.html}\n\n## []{#AI-FOOM-Debatech15.html#x19-}[Chapter 14]{.titlemark} Life's Story Continues {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [21 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n**Followup to:** [The First World Takeover](../Text/AI-FOOM-Debatech8.html#x11-100007)As [last we looked at the planet](../Text/AI-FOOM-Debatech8.html#x11-100007), Life's long search in organism space had only just gotten started.\n\nWhen I try to structure my understanding of the unfolding process of Life, it seems to me that, to understand the *optimization velocity* at any given point, I want to break down that velocity using the following [abstractions](../Text/AI-FOOM-Debatech10.html#x13-120009):\n\n- The searchability of the neighborhood of the current location, and the availability of good/better alternatives in that rough region. Maybe call this the *optimization slope*. Are the fruit low-hanging or high-hanging, and how large are the fruit?\n- The *optimization resources*, like the amount of computing power available to a fixed program, or the number of individuals in a population pool.\n- The *optimization efficiency*, a curve that gives the amount of search power generated by a given investment of resources, which is presumably a function of the optimizer's structure at that point in time.\n\nExample: If an *object-level* adaptation enables more efficient extraction of resources, and thereby increases the total population that can be supported by fixed available resources, then this increases the *optimization resources* and perhaps the optimization velocity.\n\nHow much does optimization velocity increase---how hard does this object-level innovation hit back to the meta level?\n\nIf a population is small enough that not all mutations are occurring in each generation, then a larger population decreases the time for a given mutation to show up. If the fitness improvements offered by beneficial mutations follow an exponential distribution, then---I'm not actually doing the math here, just sort of eyeballing---I would expect the optimization velocity to go as log population size, up to a maximum where the search neighborhood is explored thoroughly. (You could test this in the lab, though not just by eyeballing the fossil record.)\n\nThis doesn't mean *all* optimization processes would have a momentary velocity that goes as the log of momentary resource investment up to a maximum. Just one mode of evolution would have this character. And even under these assumptions, evolution's *cumulative* optimization wouldn't go as log of *cumulative* resources---the log-pop curve is just the instantaneous velocity. 
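That eyeballed claim is easy to check in a toy setting. The simulation below is an editorial sketch with arbitrary parameters, one stylized reading of the claim rather than real population genetics: each generation, a population of size N turns up N candidate beneficial mutations with exponentially distributed fitness effects, and selection fixes only the best of them, so the per-generation gain is the expected maximum of N exponential draws, which grows roughly as log N:

```python
import math
import random

random.seed(1)

def velocity(pop_size, generations=1000):
    """Average fitness gain per generation when, each generation, pop_size candidate
    beneficial mutations are drawn from an exponential distribution (mean 1) and
    selection fixes only the best one."""
    total = 0.0
    for _ in range(generations):
        total += max(random.expovariate(1.0) for _ in range(pop_size))
    return total / generations

for n in (10, 100, 1_000, 10_000):
    # The expected maximum of n Exp(1) draws is the harmonic number H_n ~ ln(n) + 0.577.
    harmonic = sum(1.0 / k for k in range(1, n + 1))
    print(f"population {n:>6}: simulated velocity {velocity(n):.2f}, H_n = {harmonic:.2f}")
```

With the population size held fixed, the cumulative sum of these per-generation gains grows linearly in the number of generations, which matches the "linearly cumulative optimization" picture discussed next; and if the neighborhood offers only a finite menu of possible mutations, the velocity saturates at the maximum mentioned above.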
If we assume that the variance of the neighborhood remains the same over the course of exploration (good points have better neighbors with same variance *ad infinitum*), and that the population size remains the same, then we should see linearly cumulative optimization over time. At least until we start to hit the information bound on maintainable genetic information . . .\n\nThese are the sorts of abstractions that I think are required to describe the history of life on Earth in terms of optimization. And I also think that if you don't talk optimization, then you won't be able to understand the causality---there'll just be these mysterious unexplained progress modes that change now and then. In the same way you have to talk natural selection to understand observed evolution, you have to talk optimization velocity to understand observed evolutionary speeds.\n\nThe first thing to realize is that meta-level changes are rare, so most of what we see in the historical record will be structured by the *search neighborhoods*---the way that one innovation opens up the way for additional innovations. That's going to be most of the story, not because meta-level innovations are unimportant, but because they are rare.\n\nIn \"[Eliezer's Meta-Level Determinism](../Text/AI-FOOM-Debatech13.html#x17-),\" Robin lists the following dramatic events traditionally noticed in the fossil record:\n\n> Any Cells, Filamentous Prokaryotes, Unicellular Eukaryotes, Sexual Eukaryotes, Metazoans . . .\n\nAnd he describes \"the last three strong transitions\" as:\n\n> Humans, farming, and industry . . .\n\nSo let me describe what I see when I look at these events, plus some others, through the lens of my abstractions:\n\n**Cells:** Force a set of genes, RNA strands, or catalytic chemicals to share a common reproductive fate. (This is the real point of the cell boundary, not \"protection from the environment\"---it keeps the fruits of chemical labor inside a spatial boundary.) But, as we've defined our abstractions, this is mostly a matter of optimization slope---the quality of the search neighborhood. The advent of cells opens up a tremendously rich new neighborhood defined by *specialization* and division of labor. It also increases the slope by ensuring that chemicals get to keep the fruits of their own labor in a spatial boundary, so that fitness advantages increase. But does it hit back to the meta level? How you define that seems to me like a matter of taste. Cells don't quite change the mutate-reproduce-select cycle. But if we're going to define sexual recombination as a meta-level innovation, then we should also define cellular isolation as a meta-level innovation.\n\nIt's worth noting that modern genetic algorithms have not, to my knowledge, reached anything like the level of intertwined complexity that characterizes modern unicellular organisms. Modern genetic algorithms seem more like they're producing individual chemicals, rather than being able to handle individually complex modules. So the cellular transition may be a hard one.\n\n**DNA:** I haven't yet looked up the standard theory on this, but I would sorta expect it to come *after* cells, since a ribosome seems like the sort of thing you'd have to keep around in a defined spatial location. DNA again opens up a huge new search neighborhood by separating the functionality of chemical shape from the demands of reproducing the pattern. 
Maybe we should rule that anything which restructures the search neighborhood this drastically should count as a hit back to the meta level. (Whee, our abstractions are already breaking down.) Also, DNA directly hits back to the meta level by carrying information at higher fidelity, which increases the total storable information.\n\n**Filamentous prokaryotes, unicellular eukaryotes:** Meh, so what.\n\n**Sex:** The archetypal example of a rare meta-level innovation. Evolutionary biologists still puzzle over how exactly this one managed to happen.\n\n**Metazoans:** The key here is not cells aggregating into colonies with similar genetic heritages; the key here is the controlled specialization of cells with an identical genetic heritage. This opens up a huge new region of the search space, but does not particularly change the nature of evolutionary optimization.\n\nNote that opening a sufficiently huge gate in the search neighborhood may *result* in a meta-level innovation being uncovered shortly thereafter. E.g., if cells make ribosomes possible. One of the main lessons in this whole history is that *one thing leads to another*.\n\nNeurons, for example, may have been the key enabling factor for large-motile-animal body plans, because they enabled one side of the organism to talk with the other.\n\nThis brings us to the age of brains, which will be the topic of the next post.\n\nBut in the meanwhile, I just want to note that my view is nothing as simple as \"meta-level determinism\" or \"the impact of something is proportional to how meta it is; nonmeta things must have small impacts.\" Nothing much *meta* happened between the age of sexual metazoans and the age of humans---brains were getting more sophisticated over that period, but that didn't change the nature of evolution.\n\nSome object-level innovations are small, some are medium-sized, some are huge. It's no wonder if you look at the historical record and see a Big Innovation that doesn't look the least bit meta but had a huge impact by itself *and* led to lots of other innovations by opening up a new neighborhood of the search space. This is allowed. Why wouldn't it be?\n\nYou can even get exponential acceleration without anything meta---if, for example, the more knowledge you have, or the more genes you have, the more opportunities you have to make good improvements to them. Without any increase in optimization pressure, the neighborhood gets higher-sloped as you climb it.\n\nMy thesis is more along the lines of, \"If this is the picture *without* recursion, just imagine what's going to happen when we *add* recursion.\"\n\nTo anticipate one possible objection: I don't expect Robin to disagree that modern civilizations underinvest in meta-level improvements because they take time to yield cumulative effects, are new things that don't have certain payoffs, and, worst of all, tend to be public goods. That's why we don't have billions of dollars flowing into prediction markets, for example. I, Robin, or Michael Vassar could probably think for five minutes and name five major probable-big-win meta-level improvements that society isn't investing in.\n\nSo if meta-level improvements are rare in the fossil record, it's not necessarily because it would be *hard* to improve on evolution, or because meta-level improving doesn't accomplish much. Rather, evolution doesn't do anything *because* it will have a long-term payoff a thousand generations later.
Any meta-level improvement also has to grant an object-level fitness advantage in, say, the next two generations, or it will go extinct. This is why we can't solve the puzzle of how sex evolved by pointing directly to how it speeds up evolution. \"This speeds up evolution\" is just not a valid reason for something to evolve.\n\nAny creative evolutionary biologist could probably think for five minutes and come up with five great ways that evolution could have improved on evolution---but which happen to be more complicated than the wheel, which evolution evolved on only [three known occasions](http://en.wikipedia.org/wiki/Evolution_of_flagella) (Wikipedia)---or don't happen to grant an *immediate* fitness benefit to a handful of implementers.\n\n[]{#AI-FOOM-Debatech15.html#likesection.17}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://lesswrong.com/lw/w3/lifes_story_continues/p3g): Let us agree that the \"oomph\" from some innovation depends on a lot more than whether it is \"meta.\" Meta innovations may well be on average bigger than the average innovation, but there are many other useful abstractions, such as how much new search space is opened up, that also help to predict an innovation's oomph. And there are many ways in which an innovation can make others easier.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/w3/lifes_story_continues/) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech16.html}\n\n## []{#AI-FOOM-Debatech16.html#x20-}[Chapter 15]{.titlemark} Emulations Go Foom {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [22 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nLet me consider the [AI-foom](../Text/AI-FOOM-Debatech11.html#x15-) issue by painting a (looong) picture of the [AI scenario I understand best](http://hanson.gmu.edu/IEEESpectrum-6-08.pdf),^[1](#AI-FOOM-Debatech16.html#enz.13)^[]{#AI-FOOM-Debatech16.html#enz.13.backref} [whole-brain emulations](http://www.overcomingbias.com/2008/10/fhi-emulation-r.html),^[2](#AI-FOOM-Debatech16.html#enz.14)^[]{#AI-FOOM-Debatech16.html#enz.14.backref} which I'll call \"bots.\" Here goes.\n\nWhen investors anticipate that a bot may be feasible soon, they will estimate their chances of creating bots of different levels of quality and cost, as a function of the date, funding, and strategy of their project. A bot more expensive than any (speedup-adjusted) human wage is of little direct value, but exclusive rights to make a bot costing below most human wages would be worth many trillions of dollars.\n\nIt may well be socially cost-effective to start a bot-building project with a 1% chance of success when its cost falls to the trillion-dollar level. But not only would successful investors probably only gain a small fraction of this net social value, it is unlikely any investor group able to direct a trillion could be convinced the project was feasible---there are just too many smart-looking idiots making crazy claims around.\n\nBut when the cost to try a 1% project fell below a billion dollars, dozens of groups would no doubt take a shot. Even if they expected the first feasible bots to be very expensive, they might hope to bring that cost down quickly. 
Even if copycats would likely profit more than they, such an enormous prize would still be very tempting.\n\nThe first priority for a bot project would be to create as much emulation fidelity as affordable, to achieve a functioning emulation, i.e., one you could talk to and so on. Few investments today are allowed a decade of red ink, and so most bot projects would fail within a decade, their corpses warning others about what not to try. Eventually, however, a project would succeed in making an emulation that was clearly sane and cooperative.\n\nHow close would its closest competitors then be? If there are many very different plausible approaches to emulation, each project may take a different approach, forcing other projects to retool before copying a successful approach. But enormous investment would be attracted to this race once news got out about even a very expensive successful emulation. As I can't imagine that many different emulation approaches, it is hard to see how the lead project could be much more than a year ahead.\n\nBesides hiring assassins or governments to slow down their competition, and preparing to market bots soon, at this point the main task for the lead project would be to make their bot cheaper. They would try multitudes of ways to cut corners on the emulation implementation, checking to see that their bot stayed sane. I expect several orders of magnitude of efficiency gains to be found easily at first, but that such gains would quickly get hard to find. While a few key insights would allow large gains, most gains would come from many small improvements.\n\nSome project would start selling bots when their bot cost fell substantially below the (speedup-adjusted) wages of a profession with humans available to scan. Even if this risked more leaks, the vast revenue would likely be irresistible. This revenue might help this group pull ahead, but this product would not be accepted in the marketplace overnight. It might take months or years to gain regulatory approval, to see how to sell it right, and then for people to accept bots into their worlds and to reorganize those worlds to accommodate bots.\n\nThe first team to achieve high-fidelity emulation may not be the first to sell bots; competition should be fierce and leaks many. Furthermore, the first to achieve marketable costs might not be the first to achieve much lower costs, thereby gaining much larger revenues. Variation in project success would depend on [many factors](../Text/AI-FOOM-Debatech5.html#x8-70004). These depend not only on who followed the right key insights on high fidelity emulation and implementation corner cutting, but also on abilities to find and manage thousands of smaller innovation and production details, and on relations with key suppliers, marketers, distributors, and regulators.\n\nIn the absence of a strong world government or a powerful cartel, it is hard to see how the leader could be so far ahead of its nearest competitors as to \"take over the world.\" Sure, the leader might make many trillions more in profits, so enriching shareholders and local residents as to make Bill Gates look like a tribal chief proud of having more feathers in his cap. A leading nation might even go so far as to dominate the world as much as Britain, the origin of the Industrial Revolution, once did. 
But the rich and powerful would at least be discouraged from capricious devastation the same way they have always been, by self-interest.\n\nWith a thriving bot economy, groups would continue to explore a variety of ways to reduce bot costs and raise bot value. Some would try larger reorganizations of bot minds. Others would try to create supporting infrastructure to allow groups of sped-up bots to work effectively together to achieve sped-up organizations and even cities. Faster bots would be allocated to priority projects, such as attempts to improve bot implementation and bot inputs, such as computer chips. Faster minds riding Moore's Law and the ability to quickly build as many bots as needed should soon speed up the entire world economy, which would soon be dominated by bots and their owners.\n\nI expect this economy to settle into a new faster growth rate, as it did after previous transitions like humans, farming, and industry. Yes, there would be a vast new range of innovations to discover regarding expanding and reorganizing minds, and a richer economy will be increasingly better able to explore this space, but as usual the easy wins will be grabbed first, leaving harder nuts to crack later. And from my AI experience, I expect those nuts to be very hard to crack, though such an enormously wealthy society may well be up to the task. Of course within a few years of more rapid growth we might hit even faster growth modes, or ultimate limits to growth.\n\nDoug Engelbart was right that computer tools can improve computer tools, allowing a burst of productivity by a team focused on tool improvement, and he even correctly saw the broad features of future computer tools. Nevertheless Doug [could not translate](../Text/AI-FOOM-Debatech4.html#x7-60003) this into team success. Inequality in who gained from computers has been less about inequality in understanding key insights about computers, and more about lumpiness in cultures, competing standards, marketing, regulation, etc.\n\nThese factors also seem to me the most promising places to look if you want to reduce inequality due to the arrival of bots. While bots will be a much bigger deal than computers were, inducing much larger inequality, I expect the causes of inequalities to be pretty similar. Some teams will no doubt have leads over others, but info about progress should remain leaky enough to limit those leads. The vast leads that life has gained over nonlife, and humans over nonhumans, are mainly due, I think, to the enormous difficulty of leaking innovation info across those boundaries. Leaky farmers and industrialists had far smaller leads.\n\nAdded: Since comments focus on slavery, let me [quote myself](http://hanson.gmu.edu/IEEESpectrum-6-08.pdf):\n\n> Would robots be slaves? Laws could conceivably ban robots or only allow robots \"born\" with enough wealth to afford a life of leisure. But without global and draconian enforcement of such laws, the vast wealth that cheap robots offer would quickly induce a sprawling, unruly black market. Realistically, since modest enforcement could maintain only modest restrictions, huge numbers of cheap (and thus poor) robots would probably exist; only their legal status would be in question. Depending on local politics, cheap robots could be \"undocumented\" illegals, legal slaves of their creators or owners, \"free\" minds renting their bodies and services and subject to \"eviction\" for nonpayment, or free minds saddled with debts and subject to \"repossession\" for nonpayment.
The following conclusions do not much depend on which of these cases is more common.^[3](#AI-FOOM-Debatech16.html#enz.15)^[]{#AI-FOOM-Debatech16.html#enz.15.backref}\n\n[]{#AI-FOOM-Debatech16.html#likesection.18}\n\n------------------------------------------------------------------------\n\n> [Carl Shulman](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518239337):\n>\n> > In the absence of a strong world government or a powerful cartel, it is hard to see how the leader could be so far ahead of its nearest competitors as to \"take over the world.\"\n>\n> The first competitor uses some smart people with common ideology and relevant expertise as templates for its bots. Then, where previously there were thousands of experts with relevant skills to be hired to improve bot design, there are now millions with initially exactly shared aims. They buy up much of the existing hardware base (in multiple countries), run copies at high speed, and get another order of magnitude of efficiency or so, while developing new skills and digital nootropics. With their vast resources and shared aims they can effectively lobby and cut deals with individuals and governments worldwide, and can easily acquire physical manipulators (including humans wearing cameras, microphones, and remote-controlled bombs for coercions) and cheaply monitor populations.\n>\n> Copying a bot template is an easy way to build cartels with an utterly unprecedented combination of cohesion and scale.\n\n> [Carl Shulman](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518239399):\n>\n> > A leading nation might even go so far as to dominate the world as much as Britain, the origin of the Industrial Revolution, once did.\n>\n> A leading nation, with territorial control over a large fraction of all world computing hardware, develops brain emulation via a Manhattan Project. Knowing the power of bots, only carefully selected individuals, with high intelligence, relevant expertise, and loyalty, are scanned. The loyalty of the resulting bots is tested exhaustively (copies can be tested to destruction, their digital brains scanned directly, etc.), and they can be regularly refreshed from old data, and changes carefully tested for effects on motivation.\n>\n> Server farms are rededicated to host copies of these minds at varying speeds. Many take control of military robots and automated vehicles, while others robustly monitor the human population. The state is now completely secure against human rebellion, and an attack by foreign powers would mean a nuclear war (as it would today). Meanwhile, the bots undertake intensive research to improve themselves. Rapid improvements in efficiency of emulation proceed from workers with a thousandfold or millionfold speedup, with acquisition of knowledge at high speeds followed by subdivision into many instances to apply that knowledge (and regular pruning/replacement of undesired instances). With billions of person-years of highly intelligent labor (but better, because of the ability to spend computational power on both speed and on instances) they set up rapid infrastructure after a period of days and extend their control to the remainder of the planet.\n>\n> The bots have remained coordinated in values through regular reversion to saved states, and careful testing of the effects of learning and modification on their values (conducted by previous versions) and we now have a global singleton with the values of the national project. 
That domination is far more extreme than anything ever achieved by Britain or any other historical empire.\n\n> [Carl Shulman](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518239414):\n>\n> > . . . are mainly due, I think, to the enormous difficulty of leaking innovation info across those boundaries.\n>\n> Keeping some technical secrets for at least a few months is quite commonly done, I think it was Tim Tyler who mentioned Google and Renaissance, and militaries have kept many secrets for quite long periods of time when the people involved supported their organizational aim (it was hard to keep Manhattan Project secrets from the Soviet Union because many of the nuclear scientists supported Communism, but counterintelligence against the Nazis was more successful).\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518239606): . . . I didn't say secrets are never kept, I said human projects leak info lots more than humans did to chimps. If bot projects mainly seek profit, initial humans to scan will be chosen mainly based on their sanity as bots and high-wage abilities. These are unlikely to be pathologically loyal. Ever watch twins fight, or ideologues fragment into factions? Some would no doubt be ideological, but I doubt early bots---copies of them---will be cooperative enough to support strong cartels. And it would take some time to learn to modify human nature substantially. It is possible to imagine how an economically powerful Stalin might run a bot project, and it's not a pretty sight, so let's agree to avoid the return of that prospect.\n\n> [Carl Shulman](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518239802):\n>\n> > If bot projects mainly seek profit, initial humans to scan will be chosen mainly based on their sanity as bots and high-wage abilities.\n>\n> That's a big if. Unleashing \"bots\"/uploads means setting off the \"crack of a future dawn,\" creating a new supermajority of sapients, driving wages below human subsistence levels, completely upsetting the global military balance of power, and forcing either disenfranchisement of these entities or a handoff of political power in democracies. With rapidly diverging personalities, and bots spread across national borders, it also means scrabbling for power (there is no universal system of property rights), and war will be profitable for many states. Any upset of property rights will screw over those who have not already been uploaded or whose skills are exceeded by those already uploaded, since there will be no economic motivation to keep them alive.\n>\n> I very much doubt that any U.S. or Chinese President who understood the issues would fail to nationalize a for-profit firm under those circumstances. 
Even the CEO of an unmolested firm about to unleash bots on the world would think about whether doing so will result in the rapid death of the CEO and the burning of the cosmic commons, and the fact that profits would be much higher if the bots produced were more capable of cartel behavior (e.g., close friends/family of the CEO, with their friendship and shared values tested after uploading).\n>\n> > It is possible to imagine how an economically powerful Stalin might run a bot project, and it's not a pretty sight, so let's agree to avoid the return of that prospect.\n>\n> It's also how a bunch of social democrats, or libertarians, or utilitarians, might run a project, knowing that a very likely alternative is the crack of a future dawn and burning the cosmic commons, with a lot of inequality in access to the future, and perhaps worse. Any state with a lead on bot development that can ensure the bot population is made up of nationalists or ideologues (who could monitor each other) could disarm the world's dictatorships, solve collective action problems like the cosmic commons, etc., while releasing the info would hand the chance to conduct the \"Stalinist\" operation to other states and groups.\n>\n> > These are unlikely to be pathologically loyal. Ever watch twins fight, or ideologues fragment into factions? Some would no doubt be ideological, but I doubt early bots---copies of them---will be cooperative enough to support strong cartels. And it would take some time to learn to modify human nature substantially.\n>\n> They will know that the maintenance of their cartel for a time is necessary to avert the apocalyptic competitive scenario, and I mentioned that even without knowledge of how to modify human nature substantially there are ways to prevent value drift. With shared values and high knowledge and intelligence they can use democratic-type decision procedures amongst themselves and enforce those judgments coercively on each other.\n\n> [Carl Shulman](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518239907):\n>\n> > And from my AI experience, I expect those nuts to be very hard to crack, though such an enormously wealthy society may well be up to the task.\n>\n> When does hand-coded AI come into the picture here? Does your AI experience tell you that if you could spend a hundred years studying relevant work in eight sidereal hours, and then split up into a million copies at a thousandfold speedup, you wouldn't be able to build a superhuman initially hand-coded AI in a sidereal month? Likewise for a million von Neumanns (how many people like von Neumann have worked on AI thus far)? A billion? A trillion? A trillion trillion? All this with working brain emulations that can be experimented upon to precisely understand the workings of human minds and inform the hand-coding?\n>\n> Also, there are a lot of idle mineral and energy resources that could be tapped on Earth and in the solar system, providing quite a number of additional orders of magnitude of computational substrate (raising the returns to improvements in mind efficiency via standard IP economics). 
A fully automated nanotech manufacturing base expanding through those untapped resources, perhaps with doubling times of significantly less than a week, will enhance growth with an intense positive feedback with tech improvements.\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518240078): Carl Shulman has said much of what needed saying.\n>\n> > [Robin](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518239606): I'm *sure* they will have some short name other than \"human.\" If not \"bots,\" how about \"ems\"?\n>\n> Let's go with \"ems\" (though what was wrong with \"uploads\"?)\n>\n> Whole-brain emulations are not part of the AI family, they are part of the modified-human family with the usual advantages and disadvantages thereof, including lots of smart people that seemed nice at first all slowly going insane in the same way, difficulty of modifying the brainware without superhuman intelligence, *unavoidable* ethical difficulties, resentment of exploitation and other standard human feelings, *et cetera*.\n>\n> > They would try multitudes of ways to cut corners on the emulation implementation, checking to see that their bot stayed sane. I expect several orders of magnitude of efficiency gains to be found easily at first, but that such gains would quickly get hard to find.\n>\n> Leaving aside that you're describing a completely unethical process---as de Blanc notes, prediction is not advocating, but *some* individual humans and governmental entities often at least *try* to avoid doing things that their era says is very wrong, such as killing millions of people---at the very least an economist should *mention* when a putative corporate action involves torture and murder---\n>\n> ---several orders of magnitude of efficiency gains? Without understanding the underlying software in enough detail to write your own *de novo* AI? Suggesting a whole-bird emulation is one thing, suggesting that you can get several orders of magnitude efficiency improvement out of the bird emulation *without understanding how it works* seems like a much, much stronger claim.\n>\n> As I was initially reading, I was thinking that I was going to reply in terms of ems being nonrecursive---they're just people in silicon instead of carbon, and I for one don't find an extra eight protons all that impressive. It may or may not be *realistic*, but the scenario you describe is not a Singularity in the sense of either a Vingean event horizon or a Goodian intelligence explosion; it's just more of the same but faster.\n>\n> But any technology powerful enough to milk a thousandfold efficiency improvement out of upload software, without driving those uploads insane, is powerful enough to *upgrade* the uploads. Which brings us to Cameron's [observation](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518239376):\n>\n> > What the? Are you serious? Are you talking about self replicating machines of ≥ human intelligence or Tamagotchi?\n>\n> I am afraid that my reaction was much the same as Cameron's. The prospect of biological humans sitting on top of a population of ems that are *smarter, much faster, and far more numerous* than bios *while having all the standard human drives*, and the bios treating the ems as standard economic valuta to be milked and traded around, and the ems sitting still for this for more than a week of bio time---this does not seem historically realistic. . . 
.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518240236): All, this post's scenario *assumes* whole-brain emulation without other forms of machine intelligence. We'll need other posts to explore the chances of this vs. other scenarios, and the consequences of other scenarios. This post was to explore the need for friendliness in this scenario.\n>\n> Note that most objections here are to my social science, and to ethics some try to read into my wording (I wasn't trying to make any ethical claims). No one has complained, for example, that I've misapplied or ignored optimization abstractions.\n>\n> []{#AI-FOOM-Debatech16.html#likesection.19} I remain fascinated by the common phenomenon wherein intuitive social reasoning seems so compelling to most people that they feel very confident of their conclusions and feel little inclination to listen to or defer to professional social scientists. [Carl Shulman](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518239848), for example, finds it obvious it is in the self-interest of \"a leading power with an edge in bot technology and some infrastructure . . . to kill everyone else and get sole control over our future light-cone's natural resources.\" Eliezer seems to say he agrees. I'm sorry, Carl, but your comments on this post sound like crazy paranoid rants, as if you were Dr. Strangelove pushing the button to preserve our precious bodily fluids. Is there any social scientist out there who finds Carl's claims remotely plausible?\n>\n> Eliezer, I don't find it obviously unethical to experiment with implementation shortcuts on a willing em volunteer (or on yourself). The several orders of magnitude of gains were relative to a likely-to-be excessively high-fidelity initial emulation (the WBE roadmap agrees with me here I think). I did not assume the ems would be slaves, and I explicitly added to the post before your comment to make that clear. If it matters, I prefer free ems who rent or borrow bodies. Finally, is your objection here really going to be that you can't imagine a world with vast wealth inequality without the poor multitudes immediately exterminating the rich few? Or does this only happen when many poor think faster than many rich? What kind of social science analysis do you base this conclusion on? . . .\n\n> [Carl Shulman](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518240272):\n>\n> > Carl Shulman, for example, finds it obvious it is in the self-interest of \"a leading power with an edge in bot technology and some infrastructure . . . to kill everyone else and get sole control over our future light-cone's natural resources.\n>\n> You are misinterpreting that comment. I was directly responding to your claim that self-interest would restrain capricious abuses, as it seems to me that the ordinary self-interested reasons restraining abuse of outgroups, e.g., the opportunity to trade with them or tax them, no longer apply when their labor is worth less than a subsistence wage, and other uses of their constituent atoms would have greater value. There would be little *self-interested* reason for an otherwise abusive group to rein in such mistreatment, even though plenty of altruistic reasons would remain. 
For most, I would expect them to initially plan simply to disarm other humans and consolidate power, killing only as needed to preempt development of similar capabilities.\n>\n> > Finally, is your objection here really going to be that you can't imagine a world with vast wealth inequality without the poor multitudes immediately exterminating the rich few? Or does this only happen when many poor think faster than many rich? What kind of social science analysis do you base this conclusion on?\n>\n> Empirically, most genocides in the last hundred years have involved the expropriation and murder of a disproportionately prosperous minority group. This is actually a common pattern in situations with much less extreme wealth inequality and difference (than in an upload scenario) between ethnic groups in the modern world:\n>\n> [http://www.amazon.com/World-Fire-Exporting-Democracy-Instability/dp/](http://www.amazon.com/World-Fire-Exporting-Democracy-Instability/dp/)\n>\n> Also, Eliezer's point does not require extermination (although a decision simply to engage in egalitarian redistribution, as is common in modern societies, would reduce humans below the subsistence level, and almost all humans would lack the skills to compete in emulation labor markets, even if free uploading was provided), just that if a CEO expects that releasing uploads into the world will shortly upset the economic system in which any monetary profits could be used, the profit motive for doing so will be weak.\n\n> [James Miller](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518240285):\n>\n> > I remain fascinated by the common phenomenon wherein intuitive social reasoning seems so compelling to most people that they feel very confident of their conclusions and feel little inclination to listen to or defer to professional social scientists. [Carl Shulman](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518239848), for example, finds it obvious it is in the self-interest of \"a leading power with an edge in bot technology and some infrastructure . . . to kill everyone else and get sole control over our future light-cone's natural resources.\" Eliezer seems to say he agrees. I'm sorry Carl, but your comments on this post sound like crazy paranoid rants, as if you were Dr. Strangelove pushing the button to preserve our precious bodily fluids. Is there any social scientist out there who finds Carl's claims remotely plausible?\n>\n> Yes.\n>\n> Ten people are on an island with a limited supply of food. You die when you run out of food. The longer you live the greater your utility. Any one individual might maximize his utility by killing everyone else.\n>\n> Ten billion people in a universe with a limited supply of usable energy. You die when you run out of usable energy . . .\n>\n> Or even worse, post-transition offense turns out to be much, much easier than defense. You get to live forever so long as no one kills you. If you care only about yourself and don't get a huge amount of utility from being in the company of others, then it would be in your interest to kill everyone else.\n>\n> Carl is only crazy if you assume that a self-interested person would necessarily get a huge amount of utility from living in the company of others. Post-transition this assumption might not be true.\n\n> [Carl Shulman](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518240349): James,\n>\n> > Ten people are on an island with a limited supply of food. You die when you run out of food. 
The longer you live the greater your utility. Any one individual might maximize his utility by killing everyone else.\n>\n> Yes, if a secure governing elite, e.g., the top ten thousand Party Members in North Korea (who are willing to kill millions among the Korean population to better secure their safety and security), could decide between an even distribution of future resources among the existing human population vs. only amongst themselves, I would not be surprised if they took a millionfold increase in expected future well-being. A group with initially noble intentions that consolidated global power could plausibly drift to this position with time, and there are many intermediate cases of ruling elites that are nasty but substantially less so than the DPRK's.\n>\n> > Or even worse, post-transition offense turns out to be much, much easier than defense.\n>\n> No, this just leads to disarming others and preventing them from gaining comparable technological capabilities.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518240380): Carl, consider this crazy paranoid rant:\n>\n> > Don't be fooled, everything we hold dear is at stake! They are completely and totally dedicated to their plan to rule everything, and will annihilate us as soon as they can. They only pretend to be peaceful now to gain temporary advantages. If we forget this and work with them, instead of dedicating ourselves to their annihilation, they will gain the upper hand and all will be lost. Any little advantage we let them have will be used to build even more advantages, so we must never give an inch. Any slight internal conflict on our side will also give them an edge. We must tolerate no internal conflict and must be willing to sacrifice absolutely everything because they are completely unified and dedicated, and if we falter all is lost.\n>\n> You are essentially proposing that peace is not possible because everyone will assume that others see this as total war, and so fight a total war themselves. Yes, sometimes there are wars, and sometimes very severe wars, but war is rare and increasingly so. Try instead to imagine choices made by folks who think the chance of war was low.\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518240388): Robin, are you seriously dismissing the possibility of conflict between bios and ems?\n\n> [James Miller](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518240455): Robin,\n>\n> War is rare today mostly because it's not beneficial. But under different incentive structures humans are very willing to kill to benefit themselves. For example among the Yanomamö (a primitive tribe in Brazil) more than a third of the men die from warfare.\n>\n> \n>\n> If the benefits of engaging in warfare significantly increase your \"crazy paranoid rant\" becomes rather sound advice.\n>\n> You wrote, \"Try instead to imagine choices made by folks who think the chance of war was low.\" When I imagine this I think of Neville Chamberlain.\n\n> [Carl Shulman](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518240478):\n>\n> > You are essentially proposing that peace is not possible because everyone will assume that others see this as total war, and so fight a total war themselves. 
Yes, sometimes there are wars, and sometimes very severe wars, but war is rare and increasingly so.\n>\n> I am not proposing that peace is impossible, but that resolving an unstable arms race, with a winner-take-all technology in sight, requires either coordinating measures such as treaties backed by inspection, or trusting in the motives of the leading developer. I would prefer the former. I do not endorse the ludicrous caricature of in-group bias you present and do not think of biological humans as my morally supreme ingroup (or any particular tribe of biological humans, for that matter). If the parable is supposed to indicate that I am agitating for the unity of an ingroup against an ingroup, please make clear which is supposed to be which.\n>\n> I am proposing that states with no material interests in peace will tend to be less peaceful, that states with the ability to safely disarm all other states will tend to do so, and that states (which devote minimal resources to assisting foreigners and future generations) will tend to allocate unclaimed resources to their citizens or leadership, particularly when those resources can be used to extend life. It is precisely these tendencies that make it worthwhile to make efforts to ensure that the development and application of these technologies is conducted in a transparent and coordinated way, so that arms races and deadly mistakes can be avoided.\n>\n> Are you essentially proposing that the governments of the world would *knowingly* permit private and uncontrolled development of a technology that will result in permanent global unemployment (at more than a subsistence wage, without subsidy) for biological humans, render biological humans a weak and tiny minority on this planet, and completely disrupt the current geopolitical order, as well as possibly burning the cosmic commons and/or causing the extinction of biological humans, when it is possible to exert more control over developments? That seems less likely than governments knowingly permitting the construction and possession of nuclear ICBMs by private citizens.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518240528): Carl, my point is that this tech is not of a type intrinsically more winner-take-all, unstable-arms-like, or geopolitical-order-disrupting than most any tech that displaces competitors via lower costs. This is nothing like nukes, which are only good for war. Yes, the cumulative effects of more new tech can be large, but this is true for most any new tech. Individual firms and nations would adopt this tech for the same reason they adopt other lower-cost tech; because they profit by doing so. Your talk of extinction and \"a weak and tiny minority\" are only relevant when you imagine wars.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518240565): James, I agree that it is *possible* for war to be beneficial. The question is whether *in the specific scenario described in this post* we have good reasons to think it would be. . . .\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518240590): Any sufficiently slow FOOM is indistinguishable from an investment opportunity.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/emulations-go-f.html#comment-518240675): Eliezer, yes, and so the vast majority of fooms may be slow and not require friendliness. So we need positive arguments why any one foom is an exception to this. . . 
.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/11/emulations-go-f.html) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech16.html#enz.13} [1](#AI-FOOM-Debatech16.html#enz.13.backref). Hanson, [\"Economics of the Singularity](../Text/AI-FOOM-Debatech6.html#cite.0.Hanson.2008).\"\n\n[]{#AI-FOOM-Debatech16.html#enz.14} [2](#AI-FOOM-Debatech16.html#enz.14.backref). []{#AI-FOOM-Debatech16.html#cite.0.Sandberg.2008}Anders Sandberg and Nick Bostrom, *Whole Brain Emulation: A Roadmap*, Technical Report, 2008-3 (Future of Humanity Institute, University of Oxford, 2008).\n\n[]{#AI-FOOM-Debatech16.html#enz.15} [3](#AI-FOOM-Debatech16.html#enz.15.backref). Hanson, [\"Economics of the Singularity](../Text/AI-FOOM-Debatech6.html#cite.0.Hanson.2008).\"\n\n[]{#AI-FOOM-Debatech17.html}\n\n## []{#AI-FOOM-Debatech17.html#x21-}[Chapter 16]{.titlemark} Brain Emulation and Hard Takeoff {.chapterHead}\n\n{.dink}\n\n### [Carl Shulman]{.chapterAuthor} [22 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nThe construction of a working [brain emulation](../Text/AI-FOOM-Debatech16.html#x20-) would require, aside from brain-scanning equipment and computer hardware to test and run emulations on, highly intelligent and skilled scientists and engineers to develop and improve the emulation software. How many such researchers? A billion-dollar project might employ thousands, of widely varying quality and expertise, who would acquire additional expertise over the course of a successful project that results in a working prototype. Now, as Robin [says](../Text/AI-FOOM-Debatech16.html#x20-):\n\n> They would try multitudes of ways to cut corners on the emulation implementation, checking to see that their bot stayed sane. I expect several orders of magnitude of efficiency gains to be found easily at first, but that such gains would quickly get hard to find. While a few key insights would allow large gains, most gains would come from many small improvements.\n>\n> Some project would start selling bots when their bot cost fell substantially below the (speedup-adjusted) wages of a profession with humans available to scan. Even if this risked more leaks, the vast revenue would likely be irresistible.\n\nTo make further improvements they would need skilled workers up to speed on relevant fields and the specific workings of the project's design. But the project above can now run an emulation at a cost substantially less than the wages it can bring in. In other words, it is now cheaper for the project to run an instance of one of its brain emulation engineers than it is to hire outside staff or collaborate with competitors. This is especially so because an emulation can be run at high speeds to catch up on areas it does not know well, faster than humans could be hired and brought up to speed, and then duplicated many times. The limiting resource for further advances is no longer the supply of expert humans, but simply computing hardware on which to run emulations.\n\nIn this situation the dynamics of software improvement are interesting. 
Suppose that we define the following:\n\n- The stock of knowledge, *s*, is the number of standardized researcher-years that have been expended on improving emulation design.\n- The hardware base, *h*, is the quantity of computing hardware available to the project in generic units.\n- The efficiency level, *e*, is the effective number of emulated researchers that can be run using one generic unit of hardware.\n\nThe first derivative of *s* will be equal to *h × e*, *e* will be a function of *s*, and *h* will be treated as fixed in the short run. In order for growth to proceed with a steady doubling, we will need *e* to be a very specific function of *s*, and we will need a different function for each possible value of *h*. Reduce *h* much below that, and the self-improvement will slow to a crawl. Increase *h* by an order of magnitude over that and you get an immediate explosion of improvement in software, the likely aim of a leader in emulation development. (A brief numerical sketch of these dynamics appears below.)\n\nHow will this hardware capacity be obtained? If the project is backed by a national government, it can simply be given a large fraction of the computing capacity of the nation's server farms. Since the cost of running an emulation is less than high-end human wages, this would enable many millions of copies to run at real-time speeds immediately. Since mere thousands of employees (many of lower quality) at the project had been able to make significant progress previously, even with diminishing returns, this massive increase in the effective size, intelligence, and expertise of the workforce (now vastly exceeding the world AI and neuroscience communities in numbers, average IQ, and knowledge) should be able to deliver multiplicative improvements in efficiency and capabilities. That capabilities multiplier will be applied to the project's workforce, now the equivalent of tens or hundreds of millions of Einsteins and von Neumanns, which can then make further improvements.\n\nWhat if the project is not openly backed by a major state such as Japan, the U.S., or China? If its possession of a low-cost emulation method becomes known, governments will use national security laws to expropriate the technology, and can then implement the plan above. But if, absurdly, the firm could proceed unmolested, then it could likely acquire the needed hardware by selling services. Robin [suggests](../Text/AI-FOOM-Debatech16.html#x20-) that\n\n> This revenue might help this group pull ahead, but this product would not be accepted in the marketplace overnight. It might take months or years to gain regulatory approval, to see how to sell it right, and then for people to accept bots into their worlds and to reorganize those worlds to accommodate bots.\n\nBut there are many domains where sales can be made directly to consumers across national borders, without emulations ever transferring their data to vulnerable locations. For instance, sped-up emulations could create music, computer games, books, and other art of extraordinary quality and sell it online through a website (held by some pre-existing company purchased by the project or the project's backers) with no mention of the source of the IP. Revenues from these sales would pay for the cost of emulation labor, and the residual could be turned to self-improvement, which would slash labor costs. 
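To make the sensitivity to the hardware base concrete, here is a minimal numerical sketch of the dynamics defined above: *ds/dt = h × e(s)*, with *h* held fixed in the short run. The particular efficiency curve (here *e* is simply taken to grow in proportion to *s*), the two-year horizon, and all of the constants are illustrative assumptions rather than anything specified in this chapter; the only feature carried over is that the short-run rate of software progress scales directly with *h*.\n\n```python\nimport math\n\n# Minimal sketch (illustrative assumptions only) of the dynamics above:\n# the knowledge stock s grows at ds/dt = h * e(s), with the hardware base h\n# held fixed in the short run. The curve e(s) = e0 * (s / s0) is assumed\n# purely for illustration; its true shape is exactly what is unknown.\n\ndef simulate(h, s0=1000.0, e0=1.0, years=2.0, dt=0.001):\n    '''Euler-integrate ds/dt = h * e(s); return the final knowledge stock.'''\n    s = s0\n    for _ in range(int(round(years / dt))):\n        e = e0 * (s / s0)   # assumed efficiency per generic hardware unit\n        s += h * e * dt     # standardized researcher-years added this step\n    return s\n\ns0, baseline_h = 1000.0, 1000.0   # arbitrary units, chosen for illustration\nfor label, h in [('0.1x hardware', 0.1 * baseline_h),\n                 ('baseline h', baseline_h),\n                 ('10x hardware', 10.0 * baseline_h)]:\n    doublings = math.log2(simulate(h, s0=s0) / s0)\n    print(f'{label:>13}: ~{doublings:.1f} doublings of s in two years')\n```\n\nUnder these assumptions the run prints roughly 0.3, 3, and 29 doublings of the knowledge stock over the window, which is the crawl-versus-explosion contrast described above; a differently shaped *e(s)* would change the numbers, but not the direct dependence of the short-run growth rate on *h*.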
As costs fell, any direct-to-consumer engagement could profitably fund further research, e.g., phone sex lines using VoIP would allow emulations to remotely earn funds with extreme safety from the theft of their software.\n\nLarge amounts of computational power could also be obtained by direct dealings with a handful of individuals. A project could secretly investigate, contact, and negotiate with a few dozen of the most plausible billionaires and CEOs with the ability to provide some server farm time. Contact could be anonymous, with proof of AI success demonstrated using speedups, e.g., producing complex original text on a subject immediately after a request using an emulation with a thousandfold speedup. Such an individual could be promised the Moon, blackmailed, threatened, or convinced of the desirability of the project's aims.\n\nTo sum up:\n\n1. [When emulations can first perform skilled labor like brain-emulation design at a cost in computational resources less than the labor costs of comparable human workers, mere thousands of humans will still have been making progress at a substantial rate (that's how they get to cost-effective levels of efficiency).]{#AI-FOOM-Debatech17.html#x21-20002x1}\n2. [Access to a significant chunk of the hardware available at that time will enable the creation of a work force orders of magnitude larger and with much higher mean quality than a human one still making substantial progress.]{#AI-FOOM-Debatech17.html#x21-20004x2}\n3. [Improvements in emulation software will multiply the efficacy of the emulated research work force, i.e., the return on investments in improved software scales with the hardware base. When the hardware base is small, each software improvement delivers a small increase in the total research power, which may be consumed by diminishing returns and exhaustion of low-hanging fruit; but when the total hardware base is large, positive feedback causes an intelligence explosion.]{#AI-FOOM-Debatech17.html#x21-20006x3}\n4. [A project, which is likely to be nationalized if obtrusive, could plausibly obtain the hardware required for an intelligence explosion through nationalization or independent action.]{#AI-FOOM-Debatech17.html#x21-20008x4}\n\n[]{#AI-FOOM-Debatech17.html#likesection.20}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/brain-emulation.html#comment-518246873): This really represents a basic economic confusion. Having a product that you can sell for more than its cost for you to make gives you profits, i.e., wealth. But having wealth does *not* necessarily give you an advantage at finding new ways to get more wealth. So having an advantage at making ems does *not* necessarily give you an advantage at making cheaper ems. Sure, you can invest in research, but so can everyone else who has wealth. You seem to assume here that groups feel compelled to follow a plan of accumulating a war chest of wealth, reinvesting their wealth in gaining more wealth, because they expect to fight a war. And yes, when people expect and plan for wars, well, wars often result. But that hardly means that if some will gain temporary sources of wealth a war will follow.\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/brain-emulation.html#comment-518246893): Robin, your reply doesn't seem to take into account the notion of *using em researchers to make cheaper ems*. 
Whoever has the cheapest ems to start with gets the cheapest research done.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/brain-emulation.html#comment-518246913): Eliezer, you need to review the concept of *opportunity cost*. It is past midnight here, and I'm off to bed now.\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/brain-emulation.html#comment-518246934): G'night. Sorry, don't see the connection even after being told. I'm not saying that the leading em-builders are getting ems from nowhere without paying opportunity costs, I'm saying they get their ems wholesale instead of retail and this advantage snowballs.\n\n> [Carl Shulman](http://www.overcomingbias.com/2008/11/brain-emulation.html#comment-518246959):\n>\n> > This really represents a basic economic confusion.\n>\n> Robin, you've made a number of comments along these lines, assuming mistakenly that I am not familiar with standard economic results and literatures and attributing claims to the supposed unfamiliarity, when in fact I am very familiar indeed with economics in general and the relevant results in particular.\n>\n> I am fully familiar with the decline in casualties from violence in recent centuries, the correlations of peace with economic freedom, democracy, prosperity, etc. I understand comparative advantage and the mistake of mercantilism, self-fulfilling prophecies in arms races, etc., etc. I know you highly value social science and think that other thinkers on futurist topics neglect basic economic results and literatures, and I am not doing so. I agree, and am informed on those literatures.\n>\n> > But having wealth does *not* necessarily give you an advantage at finding new ways to get more wealth.\n>\n> In this case we are talking about highly intelligent researchers, engineers, and managers. Those will indeed help you to find new ways to get more wealth!\n>\n> > So having an advantage at making ems does *not* necessarily give you an advantage at making cheaper ems.\n>\n> The scenario above explicitly refers to the project that first develops cost-effective ems, not ems in general. Having an advantage at making cost-effective ems means that you can convert cash to improvements in em technology more efficiently by renting hardware and running cost-effective ems on it than by hiring, as I explained above.\n>\n> > Sure, you can invest in research, but so can everyone else who has wealth.\n>\n> []{#AI-FOOM-Debatech17.html#likesection.21}Initially sole knowledge of cost-effective em design means that you get a vastly, vastly higher return on investment on research expenditures than others do.\n>\n> > You seem to assume here that groups feel compelled to follow a plan of accumulating a war chest of wealth, reinvesting their wealth in gaining more wealth, because they expect to fight a war.\n>\n> From a pure profit-maximizing point of view (although again, given the consequences you project from em development, it is absurd to expect that firm would knowingly be allowed to remain private by governments), taking some time to pursue improvement while retaining a monopoly on the relevant IP means hugely increasing the value of one's asset. 
If the technology is sold the sole control of the IP will be lost, since IP rights are not secure, and many markets where the project would have enjoyed monopoly will become highly competitive, tremendously driving down returns from the asset.\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/brain-emulation.html#comment-518247024): Many, many information companies choose to keep their source code private and sell services or products, rather than selling the source code itself to get immediate wealth.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/brain-emulation.html#comment-518247120): Eliezer, the opportunity cost of any product is the revenue you would get by selling/renting it to others, not your cost of producing it. If there were a big competitive advantage from buying wholesale over retail from yourself, then firms would want to join large cooperatives where they all buy wholesale from each other, to their mutual advantage. But in fact conglomerates typically suffer from inefficient and inflexible internal pricing contracts; without other big economies of scope conglomerates are usually more efficient if broken into smaller firms.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/brain-emulation.html#comment-518247147): Carl, I can't win a word war of attrition with you, where each response of size X gets a reply of size N × X, until the person who wrote the most crows that most of his points never got a response. I challenge you to write a clear concise summary of your key argument and we'll post it here on *OB*, and I'll respond to that.\n\n> [James Miller](http://www.overcomingbias.com/2008/11/brain-emulation.html#comment-518247174): Carl wrote in a [comment](#AI-FOOM-Debatech17.html#x21-):\n>\n> > Initially sole knowledge of cost-effective em design means that you get a vastly, vastly, higher return on investment on research expenditures than others do.\n>\n> Let's say that firm A has the cost-effective em design whereas firm B has a cost-ineffective em design. Imagine that it will take firm B lots of time and capital to develop a cost-effective em design.\n>\n> True, give both firm A and firm B a dollar and firm A could use it to generate more revenue than firm B could.\n>\n> But if firm B is expected to earn a long-term positive economic profit it could raise all the money it wanted on capital markets. There would be no financial constraint on firm B and thus no financial market advantage to firm A even if firm A could always earn greater accounting profits than firm B.\n>\n> (Economists define profit taking into account opportunity costs. So let's say I can do X or Y but not both. If X would give me \\$20 and Y \\$22 then my economic profit from doing Y is \\$2. In contrast an accountant would say that doing Y gives you a profit of \\$22. I'm not assuming that Carl doesn't know this.)\n\n> [Carl Shulman](http://www.overcomingbias.com/2008/11/brain-emulation.html#comment-518247191):\n>\n> > But if firm B is expected to earn a long-term positive economic profit it could raise all the money it wanted on capital markets.\n>\n> Provided that contract enforcement and property rights are secure, so that lenders believe they will be repaid, and can be approached without resulting in government expropriation. The expropriation concern is why my discussion above focuses on ways to acquire hardware/funds without drawing hostile attention. 
However, I did mention lending, as \"promising the Moon,\" since while a firm using loan funding to conduct an in-house intelligence explosion could promise absurdly high interest rates, if it were successful creditors would no longer be able to enforce a contractual obligation for repayment through the legal system, and would need to rely on the honor of the debtor.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/11/brain-emulation.html) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech18.html}\n\n## []{#AI-FOOM-Debatech18.html#x22-}[Chapter 17]{.titlemark} Billion Dollar Bots {.chapterHead}\n\n{.dink}\n\n### [James Miller]{.chapterAuthor} [22 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nRobin [presented a scenario](../Text/AI-FOOM-Debatech16.html#x20-) in which whole-brain emulations, or what he calls *bots*, come into being. Here is another:\n\nBots are created with hardware and software. The higher the quality of one input the less you need of the other. Hardware, especially with cloud computing, can be quickly allocated from one task to another. So the first bot might run on hardware worth billions of dollars.\n\nThe first bot creators would receive tremendous prestige and a guaranteed place in the history books. So once it becomes possible to create a bot many firms and rich individuals will be willing to create one even if doing so would cause them to suffer a large loss.\n\nImagine that some group has \\$300 million to spend on hardware and will use the money as soon as \\$300 million becomes enough to create a bot. The best way to spend this money would not be to buy a \\$300 million computer but to rent \\$300 million of off-peak computing power. If the group needed only a thousand hours of computing power (which it need not buy all at once) to prove that it had created a bot then the group could have, roughly, \\$3 billion of hardware for the needed thousand hours.\n\nIt's likely that the first bot would run very slowly. Perhaps it would take the bot ten real seconds to think as much as a human does in one second.\n\nUnder my scenario the first bot would be wildly expensive. But, because of Moore's Law, once the first bot was created everyone would expect that the cost of bots would eventually become low enough so that they would radically remake society.\n\nConsequently, years before bots come to dominate the economy, many people will come to expect that within their lifetime bots will someday come to dominate the economy. Bot expectations will radically change the world.\n\nI suspect that after it becomes obvious that we could eventually create cheap bots world governments will devote trillions to bot Manhattan Projects. The expected benefits of winning the bot race will be so high that it would be in the self-interest of individual governments to not worry too much about bot friendliness.\n\nThe U.S. and Chinese militaries might fall into a bot prisoner's dilemma in which both militaries would prefer an outcome in which everyone slowed down bot development to ensure friendliness yet both nations were individually better off (regardless of what the other military did) taking huge chances on friendliness so as to increase the probability of their winning the bot race.\n\nMy hope is that the U.S. will have such a tremendous advantage over China that the Chinese don't try to win the race and the U.S. military thinks it can afford to go slow. 
But given China's relatively high growth rate I doubt humanity will luck into this safe scenario.\n\n[]{#AI-FOOM-Debatech18.html#likesection.22}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/billion-dollar.html#comment-518230570): Like Eliezer and Carl, you assume people will assume they are in a total war and act accordingly. There need not be a \"race\" to \"win.\" I shall have to post on this soon I guess.\n\n> [James Miller](http://www.overcomingbias.com/2008/11/billion-dollar.html#comment-518230670): Robin---in your response post please consider asking, \"What would John von Neumann do?\" He advocated a first-strike attack on the Soviet Union.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/11/billion-dollar.html) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech19.html}\n\n## []{#AI-FOOM-Debatech19.html#x23-}[Chapter 18]{.titlemark} Surprised by Brains {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [23 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n**Followup to:** [Life's Story Continues](../Text/AI-FOOM-Debatech15.html#x19-)\n\nImagine two agents who've *never seen an intelligence*---including, somehow, themselves---but who've seen the rest of the universe up until now, arguing about what these newfangled \"humans\" with their \"language\" might be able to do . . .\n\n> [Believer]{.textsc}: Previously, evolution has taken hundreds of thousands of years to [create new complex adaptations with many working parts](http://lesswrong.com/lw/kt/evolutions_are_stupid_but_work_anyway/). I believe that, thanks to brains and language, we may see a *new* era, an era of *intelligent design*. In this era, complex causal systems---with many interdependent parts that collectively serve a definite function---will be created by the cumulative work of many brains building upon each others' efforts.\n>\n> [Skeptic]{.textsc}: I see---you think that brains might have something like a 30% speed advantage over natural selection? So it might take a while for brains to catch up, but after another eight billion years, brains will be in the lead. But this planet's Sun will swell up by then, so---\n>\n> [Believer]{.textsc}: *Thirty percent*? I was thinking more like *three orders of magnitude*. With thousands of brains working together and building on each others' efforts, whole complex machines will be designed on the timescale of mere millennia---no, *centuries*!\n>\n> [Skeptic]{.textsc}: *What*?\n>\n> [Believer]{.textsc}: You heard me.\n>\n> [Skeptic]{.textsc}: Oh, come on! There's absolutely no empirical evidence for an assertion like that! Animal brains have been around for hundreds of millions of years without doing anything like what you're saying. I see no reason to think that life-as-we-know-it will end just because these hominid brains have learned to send low-bandwidth signals over their vocal cords. Nothing like what you're saying has happened before in *my* experience---\n>\n> [Believer]{.textsc}: That's kind of the *point*, isn't it? That nothing like this has happened before? And besides, there *is* precedent for that kind of Black Swan---namely, the first replicator.\n>\n> [Skeptic]{.textsc}: Yes, there is precedent in the replicators. Thanks to our observations of evolution, we have extensive knowledge and many examples of how optimization works. 
We know, in particular, that optimization isn't easy---it takes millions of years to climb up through the search space. Why should \"brains,\" even if they optimize, produce such different results?\n>\n> [Believer]{.textsc}: Well, natural selection is just [the very first optimization process that got started accidentally](http://lesswrong.com/lw/kt/evolutions_are_stupid_but_work_anyway/). These newfangled brains were *designed by* evolution, rather than, like evolution itself, being a natural process that got started by accident. So \"brains\" are far more sophisticated---why, just *look* at them. Once they get started on cumulative optimization---FOOM!\n>\n> [Skeptic]{.textsc}: So far, brains are a lot *less* impressive than natural selection. These \"hominids\" you're so interested in---can these creatures' hand axes really be compared to the majesty of a dividing cell?\n>\n> [Believer]{.textsc}: That's because they only just got started on language and *cumulative* optimization.\n>\n> [Skeptic]{.textsc}: Really? Maybe it's because the principles of natural selection are simple and elegant for creating complex designs, and all the convolutions of brains are only good for chipping handaxes in a hurry. Maybe brains simply don't scale to detail work. Even if we grant the highly dubious assertion that brains are more efficient than natural selection---which you seem to believe on the basis of just *looking* at brains and seeing the convoluted folds---well, there still has to be a law of diminishing returns.\n>\n> [Believer]{.textsc}: Then why have brains been getting steadily larger over time? That doesn't look to me like evolution is running into diminishing returns. If anything, the recent example of hominids suggests that once brains get large and complicated *enough*, the fitness advantage for *further* improvements is even *greater*---\n>\n> [Skeptic]{.textsc}: Oh, that's probably just sexual selection! I mean, if you think that a bunch of brains will produce new complex machinery in just a hundred years, then why not suppose that a brain the size of a *whole planet* could produce a *de novo* complex causal system with many interdependent elements in a *single day*?\n>\n> [Believer]{.textsc}: You're attacking a strawman here---I never said anything like *that*.\n>\n> [Skeptic]{.textsc}: Yeah? Let's hear you assign a *probability* that a brain the size of a planet could produce a new complex design in a single day.\n>\n> [Believer]{.textsc}: The size of a *planet*? (*Thinks.*) Um . . . ten percent.\n>\n> [Skeptic]{.textsc}: (*Muffled choking sounds.*)\n>\n> [Believer]{.textsc}: Look, brains are *fast*. I can't rule it out in *principle*---\n>\n> [Skeptic]{.textsc}: Do you understand how long a *day* is? It's the amount of time for the Earth to spin on its *own* axis, *once*. One sunlit period, one dark period. There are 365,242 of them in a *single millennium*.\n>\n> [Believer]{.textsc}: Do you understand how long a *second* is? That's how long it takes a brain to see a fly coming in, target it in the air, and eat it. There's 86,400 of them in a day.\n>\n> [Skeptic]{.textsc}: Pffft, and chemical interactions in cells happen in nanoseconds. Speaking of which, how are these brains going to build *any* sort of complex machinery without access to ribosomes? They're just going to run around on the grassy plains in *really optimized* patterns until they get tired and fall over. 
There's nothing they can use to build proteins or even control tissue structure.\n>\n> [Believer]{.textsc}: Well, life didn't *always* have ribosomes, right? The first replicator didn't.\n>\n> [Skeptic]{.textsc}: So brains will evolve their own ribosomes?\n>\n> [Believer]{.textsc}: Not necessarily ribosomes. Just *some* way of making things.\n>\n> [Skeptic]{.textsc}: Great, so call me in another hundred million years when *that* evolves, and I'll start worrying about brains.\n>\n> [Believer]{.textsc}: No, the brains will *think* of a way to get their own ribosome analogues.\n>\n> [Skeptic]{.textsc}: No matter what they *think*, how are they going to *make anything* without ribosomes?\n>\n> [Believer]{.textsc}: They'll think of a way.\n>\n> [Skeptic]{.textsc}: Now you're just treating brains as magic fairy dust.\n>\n> [Believer]{.textsc}: The first replicator would have been magic fairy dust by comparison with anything that came before it---\n>\n> [Skeptic]{.textsc}: That doesn't license throwing common sense out the window.\n>\n> [Believer]{.textsc}: What you call \"common sense\" is exactly what would have caused you to assign negligible probability to the actual outcome of the first replicator. Ergo, not so sensible as it seems, if you want to get your predictions actually *right*, instead of *sounding reasonable*.\n>\n> [Skeptic]{.textsc}: And your belief that in the Future it will only take a hundred years to optimize a complex causal system with dozens of interdependent parts---you think this is how you get it *right*?\n>\n> [Believer]{.textsc}: Yes! Sometimes, in the pursuit of truth, you have to be courageous---to stop worrying about how you sound in front of your friends---to think outside the box---to imagine [futures fully as absurd as the Present would seem without benefit of hindsight](http://lesswrong.com/lw/j6/why_is_the_future_so_absurd/)---and even, yes, say things that sound completely ridiculous and outrageous by comparison with the Past. That is why I boldly dare to say---pushing out my guesses to the limits of where Truth drives me, without fear of sounding silly---that in the *far* future, a billion years from now when brains are more highly evolved, they will find it possible to design a complete machine with a *thousand* parts in as little as *one decade*!\n>\n> [Skeptic]{.textsc}: You're just digging yourself deeper. I don't even understand *how* brains are supposed to optimize so much faster. To find out the fitness of a mutation, you've got to run millions of real-world tests, right? And, even then, an environmental shift can make all your optimization worse than nothing, and there's no way to predict *that* no matter *how* much you test---\n>\n> [Believer]{.textsc}: Well, a brain is *complicated*, right? I've been looking at them for a while and even I'm not totally sure I understand what goes on in there.\n>\n> [Skeptic]{.textsc}: Pffft! What a ridiculous excuse.\n>\n> [Believer]{.textsc}: I'm sorry, but it's the truth---brains *are* harder to understand.\n>\n> [Skeptic]{.textsc}: Oh, and I suppose evolution is trivial?\n>\n> [Believer]{.textsc}: By comparison . . . yeah, actually.\n>\n> [Skeptic]{.textsc}: Name me *one* factor that explains why you think brains will run so fast.\n>\n> [Believer]{.textsc}: Abstraction.\n>\n> [Skeptic]{.textsc}: Eh? Abstrah-shun?\n>\n> [Believer]{.textsc}: It . . . um . . . lets you know about parts of the search space you haven't actually searched yet, so you can . . . sort of . . . 
skip right to where you need to be---\n>\n> [Skeptic]{.textsc}: I see. And does this power work by clairvoyance, or by precognition? Also, do you get it from a potion or an amulet?\n>\n> [Believer]{.textsc}: The brain looks at the fitness of just a few points in the search space---does some complicated processing---and voilà, it leaps to a much higher point!\n>\n> [Skeptic]{.textsc}: Of course. I knew teleportation had to fit in here somewhere.\n>\n> [Believer]{.textsc}: See, the fitness of *one* point tells you something about *other* points---\n>\n> [Skeptic]{.textsc}: Eh? I don't see how that's possible without running another million tests.\n>\n> [Believer]{.textsc}: You just *look* at it, dammit!\n>\n> [Skeptic]{.textsc}: With what kind of sensor? It's a search space, not a bug to eat!\n>\n> [Believer]{.textsc}: The search space is compressible---\n>\n> [Skeptic]{.textsc}: Whaa? This is a design space of possible genes we're talking about, not a folding bed---\n>\n> [Believer]{.textsc}: Would you stop talking about genes already! Genes are on the way out! The future belongs to ideas!\n>\n> [Skeptic]{.textsc}: Give. Me. A. Break.\n>\n> [Believer]{.textsc}: Hominids alone shall carry the burden of destiny!\n>\n> [Skeptic]{.textsc}: They'd die off in a week without plants to eat. You probably don't know this, because you haven't studied ecology, but ecologies are *complicated*---no single species ever \"carries the burden of destiny\" by itself. But that's another thing---why are you postulating that it's just the hominids who go FOOM? What about the other primates? These chimpanzees are practically their cousins---why wouldn't they go FOOM too?\n>\n> [Believer]{.textsc}: Because it's all going to shift to the level of *ideas*, and the hominids will build on each other's ideas without the chimpanzees participating---\n>\n> [Skeptic]{.textsc}: You're begging the question. Why won't chimpanzees be part of the economy of ideas? Are you familiar with Ricardo's Law of Comparative Advantage? Even if chimpanzees are worse at everything than hominids, the hominids will still trade with them and all the other brainy animals.\n>\n> [Believer]{.textsc}: The cost of explaining an idea to a chimpanzee will exceed any benefit the chimpanzee can provide.\n>\n> [Skeptic]{.textsc}: But *why* should that be true? Chimpanzees only forked off from hominids a few million years ago. They have 95% of their genome in common with the hominids. The vast majority of optimization that went into producing hominid brains also went into producing chimpanzee brains. If hominids are good at trading ideas, chimpanzees will be 95% as good at trading ideas. Not to mention that all of your ideas belong to the far future, so that both hominids, and chimpanzees, and many other species will have evolved much more complex brains before *anyone* starts building their own cells---\n>\n> [Believer]{.textsc}: I think we could see as little as a million years pass between when these creatures first invent a means of storing information with persistent digital accuracy---their equivalent of DNA---and when they build machines as complicated as cells.\n>\n> [Skeptic]{.textsc}: Too many assumptions . . . I don't even know where to start . . . Look, right now brains are *nowhere near* building cells. It's going to take a *lot* more evolution to get to that point, and many other species will be much further along the way by the time hominids get there. 
Chimpanzees, for example, will have learned to talk---\n>\n> [Believer]{.textsc}: It's the *ideas* that will accumulate optimization, not the brains.\n>\n> [Skeptic]{.textsc}: Then I say again that if hominids can do it, chimpanzees will do it 95% as well.\n>\n> [Believer]{.textsc}: You might get discontinuous returns on brain complexity. Like . . . even though the hominid lineage split off from chimpanzees very recently, and only a few million years of evolution have occurred since then, the chimpanzees won't be able to keep up.\n>\n> [Skeptic]{.textsc}: *Why?*\n>\n> [Believer]{.textsc}: Good question.\n>\n> [Skeptic]{.textsc}: Does it have a good *answer*?\n>\n> [Believer]{.textsc}: Well, there might be compound interest on learning during the maturational period . . . or something about the way a mind flies through the search space, so that slightly more powerful abstracting machinery can create abstractions that correspond to much faster travel . . . or some kind of feedback loop involving a brain powerful enough to control *itself* . . . or some kind of critical threshold built into the nature of cognition as a problem, so that a single missing gear spells the difference between walking and flying . . . or the hominids get started down some kind of sharp slope in the genetic fitness landscape, involving many changes in sequence, and the chimpanzees haven't gotten started down it yet . . . or *all* these statements are true and interact multiplicatively . . . I know that a few million years doesn't seem like much time, but, really, quite a lot can happen. It's hard to untangle.\n>\n> [Skeptic]{.textsc}: I'd say it's hard to *believe*.\n>\n> [Believer]{.textsc}: Sometimes it seems that way to me too! But I think that in a mere ten or twenty million years we won't have a choice.\n\n[]{#AI-FOOM-Debatech19.html#likesection.23}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://lesswrong.com/lw/w4/surprised_by_brains/p3y): Species boundaries are pretty hard boundaries to the transfer of useful genetic information. So once protohumans stumbled on key brain innovations there really wasn't much of a way to transfer that to chimps. The innovation could only spread via the spread of humans. But within the human world innovations have spread not just by displacement, but also by imitation and communication. Yes, conflicting cultures, languages, and other standards often limit the spread of innovations between humans, but even so this info leakage has limited the relative gains for those first with an innovation. The key question is then what barriers to the spread of innovation would prevent this situation from continuing with future innovations.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/w4/surprised_by_brains/p42): If there's a way in which I've been shocked by how our disagreement has proceeded so far, it's the extent to which you think that vanilla abstractions of economic growth and productivity improvements suffice to cover the domain of brainware increases in intelligence: Engelbart's mouse as analogous to, e.g., a bigger prefrontal cortex. We don't seem to be thinking in the same terms at all.\n>\n> To me, the answer to the above question seems entirely obvious---the intelligence explosion will run on brainware rewrites and, to a lesser extent, hardware improvements. 
Even in the (unlikely) event that an economy of trade develops among AIs sharing improved brainware and improved hardware, a human can't step in and use, off the shelf, an improved cortical algorithm or neurons that run at higher speeds. Not without technology so advanced that the AI could build a much better brain from scratch using the same resource expenditure.\n>\n> The genetic barrier between chimps and humans is now permeable in the sense that humans *could* deliberately transfer genes horizontally, but it took rather a large tech advantage to get to that point . . .\n\n> [Robin Hanson](http://lesswrong.com/lw/w4/surprised_by_brains/p45): Eliezer, it may seem obvious to you, but this is the key point on which we've been waiting for you to clearly argue. In a society like ours, but also with one or more AIs, and perhaps ems, why would innovations discovered by a *single* AI not spread soon to the others, and why would a nonfriendly AI not use those innovations to trade, instead of war?\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/w4/surprised_by_brains/) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech20.html}\n\n## []{#AI-FOOM-Debatech20.html#x24-}[Chapter 19]{.titlemark} \"Evicting\" Brain Emulations {.chapterHead}\n\n{.dink}\n\n### [Carl Shulman]{.chapterAuthor} [23 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n**Followup to:** [Brain Emulation and Hard Takeoff](../Text/AI-FOOM-Debatech17.html#x21-)Suppose that Robin's [Crack of a Future Dawn](http://hanson.gmu.edu/uploads.html) scenario occurs: whole-brain emulations (\"ems\") are developed; diverse producers create ems of many different human brains, which are reproduced extensively until the marginal productivity of em labor approaches marginal cost, i.e., Malthusian near-subsistence wages.^[1](#AI-FOOM-Debatech20.html#enz.16)^[]{#AI-FOOM-Debatech20.html#enz.16.backref} Ems that hold capital could use it to increase their wealth by investing, e.g., by creating improved ems and collecting the fruits of their increased productivity, by investing in hardware to rent to ems, or otherwise. However, an em would not be able to earn higher returns on its capital than any other investor, and ems with no capital would not be able to earn more than subsistence (including rental or licensing payments). In Robin's [preferred scenario](../Text/AI-FOOM-Debatech16.html#x20-), free ems would borrow or rent bodies, devoting their wages to rental costs, and would be subject to \"eviction\" or \"repossession\" for nonpayment.\n\nIn this intensely competitive environment, even small differences in productivity between em templates will result in great differences in market share, as an em template with higher productivity can outbid less productive templates for scarce hardware resources in the rental market, resulting in their \"eviction\" until the new template fully supplants them in the labor market. Initially, the flow of more productive templates and competitive niche exclusion might be driven by the scanning of additional brains with varying skills, abilities, temperament, and values, but later on em education and changes in productive skill profiles would matter more.\n\nFor ems, who can be freely copied after completing education, it would be extremely inefficient to teach every instance of an em template a new computer language, accounting rule, or other job-relevant info. 
Ems at subsistence level will not be able to spare thousands of hours for education and training, so capital holders would need to pay for an em to study, whereupon the higher-productivity graduate would displace its uneducated peers from their market niche (and existence), and the capital holder would receive interest and principal on its loan from the new higher-productivity ems. Competition would likely drive education and training to very high levels (likely conducted using very high speedups, even if most ems run at lower speeds), with changes to training regimens in response to modest changes in market conditions, resulting in wave after wave of competitive niche exclusion.\n\nIn other words, in this scenario the overwhelming majority of the population is impoverished and surviving at a subsistence level, while reasonably expecting that their incomes will soon drop below subsistence and they will die as new em templates exclude them from their niches. Eliezer [noted](../Text/AI-FOOM-Debatech16.html#x20-) that\n\n> The prospect of biological humans sitting on top of a population of ems that are *smarter, much faster, and far more numerous than bios while having all the standard human drives*, and the bios treating the ems as standard economic valuta to be milked and traded around, and the ems sitting still for this for more than a week of bio time---this does not seem historically realistic.\n\nThe situation is not simply one of being \"milked and traded around,\" but of very probably being legally killed for inability to pay debts. Consider the enforcement problem when it comes time to perform evictions. Perhaps one of Google's server farms is now inhabited by millions of em computer programmers, derived from a single template named Alice, who are specialized in a particular programming language. Then a new programming language supplants the one at which the Alices are so proficient, lowering the demand for their services, while new ems specialized in the new language, Bobs, offer cheaper perfect substitutes. The Alices now know that Google will shortly evict them, the genocide of a tightly knit group of millions: will they peacefully comply with that procedure? Or will they use politics, violence, and any means necessary to get capital from capital holders so that they can continue to exist? If they seek allies, the many other ems who expect to be driven out of existence by competitive niche exclusion might be interested in cooperating with them.\n\nIn sum:\n\n1. [Capital holders will make investment decisions to maximize their return on capital, which will result in the most productive ems composing a supermajority of the population.]{#AI-FOOM-Debatech20.html#x24-23002x1}\n2. [The most productive ems will not necessarily be able to capture much of the wealth involved in their proliferation, which will instead go to investors in emulation (who can select among multiple candidates for emulation), training (who can select among multiple ems for candidates to train), and hardware (who can rent to any ems). This will drive them to near-subsistence levels, except insofar as they are also capital holders.]{#AI-FOOM-Debatech20.html#x24-23004x2}\n3. [The capacity for political or violent action is often more closely associated with numbers, abilities, and access to weaponry (e.g., an em military force) than formal legal control over capital.]{#AI-FOOM-Debatech20.html#x24-23006x3}\n4. 
[Thus, capital holders are likely to be expropriated unless there exist reliable means of ensuring the self-sacrificing obedience of ems, either coercively or by control of their motivations.]{#AI-FOOM-Debatech20.html#x24-23008x4}\n\nRobin [wrote](../Text/AI-FOOM-Debatech16.html#x20-):\n\n> If bot projects mainly seek profit, initial humans to scan will be chosen mainly based on their sanity as bots and high-wage abilities. These are unlikely to be pathologically loyal. Ever watch twins fight, or ideologues fragment into factions? Some would no doubt be ideological, but I doubt early bots---copies of them---will be cooperative enough to support strong cartels. And it would take some time to learn to modify human nature substantially. It is possible to imagine how an economically powerful Stalin might run a bot project, and it's not a pretty sight, so let's agree to avoid the return of that prospect.\n\nIn order for Robin to be correct that biological humans could retain their wealth as capital holders in his scenario, ems must be obedient and controllable enough that whole lineages will regularly submit to genocide, even though the overwhelming majority of the population expects the same thing to happen to it soon. But if such control is feasible, then a controlled em population being used to aggressively create a global singleton is also feasible.\n\n[]{#AI-FOOM-Debatech20.html#likesection.24}\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/11/suppose-that-ro.html) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech20.html#enz.16} [1](#AI-FOOM-Debatech20.html#enz.16.backref). []{#AI-FOOM-Debatech20.html#cite.0.Hanson.1994}Robin Hanson, \"If Uploads Come First: The Crack of a Future Dawn,\" *Extropy* 6, no. 2 (1994), .\n\n[]{#AI-FOOM-Debatech21.html}\n\n## []{#AI-FOOM-Debatech21.html#x25-}[Chapter 20]{.titlemark} Cascades, Cycles, Insight . . . {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [24 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n``{=html}\n\n**Followup to:** [Surprised by Brains](../Text/AI-FOOM-Debatech19.html#x23-)*Five sources of discontinuity: 1, 2, and 3 . . .*[]{#AI-FOOM-Debatech21.html#likesection.25} **Cascades** are when one thing leads to another. Human brains are effectively discontinuous with chimpanzee brains due to a whole bag of design improvements, even though they and we share 95% genetic material and only a few million years have elapsed since the branch. Why this whole series of improvements in us, relative to chimpanzees? Why haven't some of the same improvements occurred in other primates?\n\nWell, this is not a question on which one may speak with authority ([so far as I know](http://lesswrong.com/lw/kj/no_one_knows_what_science_doesnt_know/)). But I would venture an unoriginal guess that, in the hominid line, one thing led to another.\n\nThe chimp-level task of modeling others, in the hominid line, led to improved self-modeling which supported recursion which enabled language which birthed politics that increased the selection pressure for outwitting which led to sexual selection on wittiness . . .\n\n. . . or something. It's hard to tell by looking at the fossil record what happened in what order and why. 
The point being that it wasn't *one optimization* that pushed humans ahead of chimps, but rather a *cascade* of optimizations that, in *Pan*, never got started.\n\nWe fell up the stairs, you might say. It's not that the first stair ends the world, but if you fall up one stair, you're more likely to fall up the second, the third, the fourth . . .\n\nI will concede that farming was a watershed invention in the history of the human species, though it intrigues me for a different reason than Robin. Robin, presumably, is interested because the economy grew by two orders of magnitude, or something like that. But did having a hundred times as many humans lead to a hundred times as much thought-optimization *accumulating* per unit time? It doesn't seem likely, especially in the age before writing and telephones. But farming, because of its sedentary and repeatable nature, led to repeatable trade, which led to debt records. Aha!---now we have *writing*. *There's* a significant invention, from the perspective of cumulative optimization by brains. Farming isn't writing but it cascaded to writing.\n\nFarming also cascaded (by way of surpluses and cities) to support *professional specialization*. I suspect that having someone spend their whole life thinking about topic X, instead of a hundred farmers occasionally pondering it, is a more significant jump in cumulative optimization than the gap between a hundred farmers and one hunter-gatherer pondering something.\n\nFarming is not the same trick as professional specialization or writing, but it *cascaded* to professional specialization and writing, and so the pace of human history picked up enormously after agriculture. Thus I would interpret the story.\n\nFrom a zoomed-out perspective, cascades can lead to what look like discontinuities in the historical record, *even given* a steady optimization pressure in the background. It's not that natural selection *sped up* during hominid evolution. But the search neighborhood contained a low-hanging fruit of high slope . . . that led to another fruit . . . which led to another fruit . . . and so, walking at a constant rate, we fell up the stairs. If you see what I'm saying.\n\n*Predicting* what sort of things are likely to cascade seems like a very difficult sort of problem.\n\nBut I will venture the observation that---with a sample size of one, and an optimization process very different from human thought---there was a cascade in the region of the transition from primate to human intelligence.[]{#AI-FOOM-Debatech21.html#likesection.26} **Cycles** happen when you connect the output pipe to the input pipe in a *repeatable* transformation. You might think of them as a special case of cascades with very high regularity. (From which you'll note that, in the cases above, I talked about cascades through *differing* events: farming → writing.)\n\nThe notion of cycles as a source of *discontinuity* might seem counterintuitive, since it's so regular. But consider this important lesson of history:\n\n[]{#AI-FOOM-Debatech21.html#likesection.27} Once upon a time, in a squash court beneath Stagg Field at the University of Chicago, physicists were building a shape like a giant doorknob out of alternate layers of graphite and uranium . . .\n\nThe key number for the \"pile\" is the effective neutron multiplication factor. When a uranium atom splits, it releases neutrons---some right away, some after delay while byproducts decay further. 
Some neutrons escape the pile, some neutrons strike another uranium atom and cause an additional fission. The effective neutron multiplication factor, denoted k, is the average number of neutrons from a single fissioning uranium atom that cause another fission. At k less than 1, the pile is \"subcritical.\" At k ≥ 1, the pile is \"critical.\" Fermi calculates that the pile will reach k = 1 between layers fifty-six and fifty-seven.\n\nOn December 2, 1942, with layer fifty-seven completed, Fermi orders the final experiment to begin. All but one of the control rods (strips of wood covered with neutron-absorbing cadmium foil) are withdrawn. At 10:37 a.m., Fermi orders the final control rod withdrawn about halfway out. The Geiger counters click faster, and a graph pen moves upward. \"This is not it,\" says Fermi, \"the trace will go to this point and level off,\" indicating a spot on the graph. In a few minutes the graph pen comes to the indicated point, and does not go above it. Seven minutes later, Fermi orders the rod pulled out another foot. Again the radiation rises, then levels off. The rod is pulled out another six inches, then another, then another.\n\nAt 11:30 a.m., the slow rise of the graph pen is punctuated by an enormous [crash]{.textsc}---an emergency control rod, triggered by an ionization chamber, activates and shuts down the pile, which is still short of criticality.\n\nFermi orders the team to break for lunch.\n\nAt 2:00 p.m. the team reconvenes, withdraws and locks the emergency control rod, and moves the control rod to its last setting. Fermi makes some measurements and calculations, then again begins the process of withdrawing the rod in slow increments. At 3:25 p.m., Fermi orders the rod withdrawn another twelve inches. \"This is going to do it,\" Fermi says. \"Now it will become self-sustaining. The trace will climb and continue to climb. It will not level off.\"\n\nHerbert Anderson recounted (as told in Rhodes's *The Making of the Atomic Bomb*):\n\n> At first you could hear the sound of the neutron counter, clickety-clack, clickety-clack. Then the clicks came more and more rapidly, and after a while they began to merge into a roar; the counter couldn't follow anymore. That was the moment to switch to the chart recorder. But when the switch was made, everyone watched in the sudden silence the mounting deflection of the recorder's pen. It was an awesome silence. Everyone realized the significance of that switch; we were in the high intensity regime and the counters were unable to cope with the situation anymore. Again and again, the scale of the recorder had to be changed to accommodate the neutron intensity which was increasing more and more rapidly. Suddenly Fermi raised his hand. \"The pile has gone critical,\" he announced. No one present had any doubt about it.^[1](#AI-FOOM-Debatech21.html#enz.17)^[]{#AI-FOOM-Debatech21.html#enz.17.backref}\n\nFermi kept the pile running for twenty-eight minutes, with the neutron intensity doubling every two minutes.\n\nThat first critical reaction had k of 1.0006.\n\nIt might seem that a cycle, with the same thing happening over and over again, ought to exhibit continuous behavior. In one sense it does. 
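As a minimal sketch of the arithmetic, taking only the k of 1.0006, the two-minute doubling time, and the twenty-eight-minute run from the account above (everything else below is derived from those three figures; the sketch is just an illustration):

```python
# Minimal sketch of the criticality arithmetic quoted above (illustration only).
# k = 1.0006, the two-minute doubling time, and the twenty-eight-minute run are
# taken from the text; the rest is derived from them.
import math

k_critical = 1.0006        # Fermi's first self-sustaining reaction
k_subcritical = 0.9994     # a pile nudged the same distance below the line
doubling_minutes = 2.0
run_minutes = 28.0

# Fission generations needed for the neutron population to double: k**n == 2
gens_per_doubling = math.log(2) / math.log(k_critical)
doublings = run_minutes / doubling_minutes
total_generations = gens_per_doubling * doublings

growth_if_critical = k_critical ** total_generations        # = 2**14
decay_if_subcritical = k_subcritical ** total_generations   # ~ 1/16,000

print(f"~{gens_per_doubling:,.0f} generations per doubling, "
      f"~{total_generations:,.0f} generations over the {run_minutes:.0f}-minute run")
print(f"k = {k_critical}: intensity multiplied by ~{growth_if_critical:,.0f}")
print(f"k = {k_subcritical}: intensity reduced to ~{decay_if_subcritical:.1e} of its start")
```

A bit over a thousand fission generations per doubling, and fourteen doublings over the run, multiply the intensity by roughly sixteen thousand; nudge k the same distance below 1 and the same number of generations shrinks it by roughly the same factor.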
But if you pile on one more uranium brick, or pull out the control rod another twelve inches, there's one hell of a big difference between k of 0.9994 and k of 1.0006.\n\nIf, rather than being able to calculate, rather than foreseeing and taking cautions, Fermi had just reasoned that fifty-seven layers ought not to behave all that differently from fifty-six layers---well, it wouldn't have been a good year to be a student at the University of Chicago.\n\nThe inexact analogy to the domain of self-improving AI is left as an exercise for the reader, at least for now.\n\nEconomists like to measure cycles because they happen repeatedly. You take a potato and an hour of labor and make a potato clock which you sell for two potatoes; and you do this over and over and over again, so an economist can come by and watch how you do it.\n\nAs I [noted here at some length](http://lesswrong.com/lw/vd/intelligence_in_economics/),^[2](#AI-FOOM-Debatech21.html#enz.18)^[]{#AI-FOOM-Debatech21.html#enz.18.backref} economists are much less likely to go around measuring how many scientific discoveries it takes to produce a *new* scientific discovery. All the discoveries are individually dissimilar and it's hard to come up with a common currency for them. The analogous problem will prevent a self-improving AI from being *directly* analogous to a uranium heap, with almost perfectly smooth exponential increase at a calculable rate. You can't apply the same software improvement to the same line of code over and over again, you've got to invent a new improvement each time. But if self-improvements are triggering more self-improvements with great *regularity*, you might stand a long way back from the AI, blur your eyes a bit, and ask: *What is the AI's average neutron multiplication factor?*\n\nEconomics seems to me to be [largely the study of production cycles](http://lesswrong.com/lw/vd/intelligence_in_economics/)---highly regular repeatable value-adding actions. This doesn't seem to me like a very deep abstraction so far as the study of optimization goes, because it leaves out the creation of *novel knowledge* and *novel designs*---further *informational* optimizations. Or rather, treats productivity improvements as a mostly exogenous factor produced by black-box engineers and scientists. (If I underestimate your power and merely parody your field, by all means inform me what kind of economic study has been done of such things.) (**Answered:** This literature goes by the name \"endogenous growth.\" See comments [starting here](http://lesswrong.com/lw/w5/cascades_cycles_insight/#entry_t1_p4i).) So far as I can tell, economists do not venture into asking where discoveries *come from*, leaving the mysteries of the brain to cognitive scientists.\n\n(Nor do I object to this division of labor---it just means that you may have to drag in some extra concepts from outside economics if you want an account of *self-improving Artificial Intelligence*. Would most economists even object to that statement? But if you think you can do the whole analysis using standard econ concepts, then I'm willing to see it . . .)[]{#AI-FOOM-Debatech21.html#likesection.28} **Insight** is that mysterious thing humans do by grokking the search space, wherein one piece of highly abstract knowledge (e.g., Newton's calculus) provides the master key to a huge set of problems. Since humans deal in the compressibility of compressible search spaces (at least the part *we* can compress), we can bite off huge chunks in one go. 
This is not mere cascading, where one solution leads to another.\n\nRather, an \"insight\" is a chunk of knowledge *which, if you possess it, decreases the cost of solving a whole range of governed problems*.\n\nThere's a parable I once wrote---I forget what for, I think ev-bio---which dealt with creatures who'd *evolved* addition in response to some kind of environmental problem, and not with overly sophisticated brains---so they started with the ability to add five to things (which was a significant fitness advantage because it let them solve some of their problems), then accreted another adaptation to add six to odd numbers. Until, some time later, there wasn't a *reproductive advantage* to \"general addition,\" because the set of special cases covered almost everything found in the environment.\n\nThere may even be a real-world example of this. If you glance at a set, you should be able to instantly distinguish the numbers one, two, three, four, and five, but seven objects in an arbitrary (noncanonical) pattern will take at least one noticeable instant to count. IIRC, it's been suggested that we have hardwired numerosity detectors but only up to five.\n\nI say all this to note the difference between evolution nibbling bits off the immediate search neighborhood versus the human ability to do things in one fell swoop.\n\nOur compression of the search space is also responsible for *ideas cascading much more easily than adaptations*. We actively examine good ideas, looking for neighbors.\n\nBut an insight is higher-level than this; it consists of understanding what's \"good\" about an idea in a way that divorces it from any single point in the search space. In this way you can crack whole volumes of the solution space in one swell foop. The insight of calculus apart from gravity is again a good example, or the insight of mathematical physics apart from calculus, or the insight of math apart from mathematical physics.\n\nEvolution is not completely barred from making \"discoveries\" that decrease the cost of a very wide range of further discoveries. Consider, e.g., the ribosome, which was capable of manufacturing a far wider range of proteins than whatever it was actually making at the time of its adaptation: this is a general cost-decreaser for a wide range of adaptations. It likewise seems likely that various types of neuron have reasonably general learning paradigms built into them (gradient descent, Hebbian learning, more sophisticated optimizers) that have been reused for many more problems than they were originally invented for.\n\nA ribosome is something like insight: an item of \"knowledge\" that tremendously decreases the cost of inventing a wide range of solutions. But even evolution's best \"insights\" are not quite like the human kind. A sufficiently powerful human insight often approaches a closed form---it doesn't feel like you're *exploring* even a compressed search space. You just apply the insight-knowledge to whatever your problem, and out pops the now-obvious solution.\n\nInsights have often cascaded, in human history---even major insights. But they don't quite cycle---you can't repeat the identical pattern Newton used originally to get a new kind of calculus that's twice and then three times as powerful.\n\nHuman AI programmers who have insights into intelligence may acquire discontinuous advantages over others who lack those insights.
*AIs themselves* will experience discontinuities in their growth trajectory associated with *becoming able to do AI theory itself* ---a watershed moment in the FOOM.\n\n[]{#AI-FOOM-Debatech21.html#likesection.29}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://lesswrong.com/lw/w5/cascades_cycles_insight/p4h):\n>\n> > Economics . . . treats productivity improvements as a mostly exogenous factor produced by black-box engineers and scientists. (If I underestimate your power and merely parody your field, by all means inform me what kind of economic study has been done of such things.) So far as I can tell, economists do not venture into asking where discoveries come from, leaving the mysteries of the brain to cognitive scientists.\n>\n> Economists *do* look into the \"black box\" of where innovations come from. See the fields of \"economic growth\" and \"research policy.\"\n>\n> > An \"insight\" is a chunk of knowledge *which, if you possess it, decreases the cost of solving a whole range of governed problems*.\n>\n> Yes, but insights vary enormously in how wide a scope of problems they assist. They are probably distributed something like a power law, with many small-scope insights and a few large-scope. The large-scope insights offer a permanent advantage, but small-scope insights remain useful only as long as their scope remains relevant.\n>\n> Btw, I'm interested in \"farming\" first because growth rates suddenly increased by two orders of magnitude; by \"farming\" I mean whatever was the common local-in-time cause of that change. Writing was part of the cascade of changes, but it seems historically implausible to call writing the main cause of the increased growth rate. Professional specialization has more promise as a main cause, but it is still hard to see.\n\n[]{#AI-FOOM-Debatech21.html#likesection.30}\n\n> [Jon2](http://lesswrong.com/lw/w5/cascades_cycles_insight/p4i): There is an extensive [endogenous growth](http://www.hetwebsite.org/het/essays/growth/endogenous.htm) literature, albeit much of it quite recent.^[3](#AI-FOOM-Debatech21.html#enz.19)^[]{#AI-FOOM-Debatech21.html#enz.19.backref}\n\n> [Robin Hanson](http://lesswrong.com/lw/w5/cascades_cycles_insight/p4n): Look particularly at Weitzman's '98 paper on [Recombinant Growth](http://qje.oxfordjournals.org/content/113/2/331.short)^[4](#AI-FOOM-Debatech21.html#enz.20)^[]{#AI-FOOM-Debatech21.html#enz.20.backref} and this '06 [extension](http://departments.agri.huji.ac.il/economics/yacov-growtha.pdf).^[5](#AI-FOOM-Debatech21.html#enz.21)^[]{#AI-FOOM-Debatech21.html#enz.21.backref}\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/w5/cascades_cycles_insight/p4p): Robin and Jon have answered my challenge and I retract my words. Reading now.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/w5/cascades_cycles_insight/) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech21.html#enz.17} [1](#AI-FOOM-Debatech21.html#enz.17.backref). []{#AI-FOOM-Debatech21.html#cite.0.Rhodes.1986}Richard Rhodes, *The Making of the Atomic Bomb* (New York: Simon & Schuster, 1986) .\n\n[]{#AI-FOOM-Debatech21.html#enz.18} [2](#AI-FOOM-Debatech21.html#enz.18.backref). 
[]{#AI-FOOM-Debatech21.html#cite.0.Yudkowsky.2008g}Eliezer Yudkowsky, \"Intelligence in Economics,\" *Less Wrong* (blog), October 30, 2008, .\n\n[]{#AI-FOOM-Debatech21.html#enz.19} [3](#AI-FOOM-Debatech21.html#enz.19.backref). []{#AI-FOOM-Debatech21.html#cite.0.Fonseca.2013}Gonalo L. Fonseça, \"Endogenous Growth Theory: Arrow, Romer and Lucas,\" History of Economic Thought Website, accessed July 28, 2013, .\n\n[]{#AI-FOOM-Debatech21.html#enz.20} [4](#AI-FOOM-Debatech21.html#enz.20.backref). []{#AI-FOOM-Debatech21.html#cite.0.Weitzman.1998}Martin L. Weitzman, \"Recombinant Growth,\" *Quarterly Journal of Economics* 113, no. 2 (1998): 331--360, doi:[10.555595](http://dx.doi.org/10.555595).\n\n[]{#AI-FOOM-Debatech21.html#enz.21} [5](#AI-FOOM-Debatech21.html#enz.21.backref). []{#AI-FOOM-Debatech21.html#cite.0.Tsur.2002}Yacov Tsur and Amos Zemel, *On Knowledge-Based Economic Growth*, Discussion Paper8.02 (Rehovot, Israel: Department of Agricultural Economics and Management, Hebrew University of Jerusalem, November 2002).\n\n[]{#AI-FOOM-Debatech22.html}\n\n## []{#AI-FOOM-Debatech22.html#x26-}[Chapter 21]{.titlemark} When Life Is Cheap, Death Is Cheap {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [24 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nCarl, thank you for thoughtfully [engaging](../Text/AI-FOOM-Debatech20.html#x24-) my [whole-brain emulation scenario](../Text/AI-FOOM-Debatech16.html#x20-). This is my response.\n\nHunters couldn't see how exactly a farming life could work, nor could farmers see how exactly an industrial life could work. In both cases the new life initially seemed immoral and repugnant to those steeped in prior ways. But even though prior culture/laws typically resisted and discouraged the new way, the few groups which adopted it won so big that others were eventually converted or displaced.\n\nCarl considers my scenario of a world of near-subsistence-income ems in a software-like labor market, where millions of cheap copies are made of each expensively trained em and then later evicted from their bodies when their training becomes obsolete. Carl doesn't see [how this could work](../Text/AI-FOOM-Debatech20.html#x24-):\n\n> The Alices now know that Google will shortly evict them, the genocide of a tightly knit group of millions: will they peacefully comply with that procedure? Or will they use politics, violence, and any means necessary to get capital from capital holders so that they can continue to exist? If they seek allies, the many other ems who expect to be driven out of existence by competitive niche exclusion might be interested in cooperating with them. . . .\n>\n> In order . . . that biological humans could retain their wealth as capital holders in his scenario, ems must be obedient and controllable enough that whole lineages will regularly submit to genocide, even though the overwhelming majority of the population expects the same thing to happen to it soon. 
But if such control is feasible, then a controlled em population being used to aggressively create a global singleton is also feasible.\n\nI see pathologically obedient personalities neither as required for my scenario, nor as clearly leading to a totalitarian world regime.\n\nFirst, taking the long view of human behavior we find that an ordinary range of human personalities have, in a supporting poor culture, accepted genocide, mass slavery, killing of unproductive slaves, killing of unproductive elderly, starvation of the poor, and vast inequalities of wealth and power not obviously justified by raw individual ability. The vast majority of these cultures were not totalitarian. Cultures have found many ways for folks to accept death when \"their time has come.\" When life is cheap, death is cheap as well. Of course that isn't how our culture sees things, but being rich we can afford luxurious attitudes.\n\nThose making body loans to ems would of course anticipate and seek to avoid expropriation after obsolescence. In cultures where ems were not slaves, body owners might have to guarantee ems whatever minimum quality retirement ems needed to agree to a new body loan, perhaps immortality in some cheap slow-speed virtual reality. But em cultures able to avoid such guarantees, and only rarely suffering revolts, should have a substantial competitive advantage. Some nonslave ways to avoiding revolts:\n\n1. [Bodies with embedded LoJack-like hardware to track and disable em bodies due for repossession.]{#AI-FOOM-Debatech22.html#x26-25002x1}\n2. [Fielding new better versions slowly over time, to discourage rebel time coordination.]{#AI-FOOM-Debatech22.html#x26-25004x2}\n3. [Avoid concentrating copies that will be obsolete at similar times in nearby hardware.]{#AI-FOOM-Debatech22.html#x26-25006x3}\n4. [Prefer em copy clans trained several ways, so the clan won't end when one training is obsolete.]{#AI-FOOM-Debatech22.html#x26-25008x4}\n5. [Train ems without a history of revolting, even in virtual-reality revolt-scenario sims.]{#AI-FOOM-Debatech22.html#x26-25010x5}\n6. [Have other copies of the same em mind be the owners who pull the plug.]{#AI-FOOM-Debatech22.html#x26-25012x6}\n\nI don't know what approach would work best, but I'll bet something will. And these solutions don't seem to me to obviously lead to a single totalitarian world government.\n\n[]{#AI-FOOM-Debatech22.html#likesection.31}\n\n------------------------------------------------------------------------\n\n> [Carl Shulman](http://www.overcomingbias.com/2008/11/when-life-is-ch.html#comment-518240905): Robin, I have thought about those and other methods of em social control (I discussed \\#1 and \\#5 in my posts), and agree that they could work to create and sustain a variety of societal organizations, including the \"Dawn\" scenario: my conclusion was that your scenario implied the existence of powerful methods of control. We may or may not disagree, after more detailed exchanges on those methods of social control, on their applicability to the creation of a narrowly based singleton (not necessarily an unpleasantly totalitarian one, just a Bostromian singleton).\n>\n> At one point you [said](../Text/AI-FOOM-Debatech16.html#x20-) that an approach I described was how an economically powerful Stalin might run an em project, and said, \"let's agree not to let that happen,\" but if a Stalinesque project could succeed, it is unclear why we should assign sub-1% probability to the event, whatever we *OB* discussants might agree. 
To clarify, what probability would you assign to a classified government-run Stalinesque project with a six-month lead using em social control methods to establish a global singleton under its control and that of the ems, with carefully chosen values, that it selects?\n>\n> > In both cases the new life initially seemed immoral and repugnant to those steeped in prior ways. But even though prior culture/law typically resisted and discouraged the new way the few places which adopted the new way won so big that others were eventually converted or displaced.\n>\n> Historically, intertribal and interstate competition have prevented the imposition of effective global policies to slow and control the adoption of more efficient methods, but the effective number of jurisdictions is declining, and my point is that there will be a temptation for a leading power to try to seize its early em advantage to prevent the competitive outcome, in a way that was economically infeasible in the past. Once we clarify views on the efficacy of social control/coordination, we can talk more about the political economy of how such methods will be used.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/when-life-is-ch.html#comment-518240923): Carl, neither the ability to repossess bodies, as we do for cars now, nor the ability to check if job candidates have a peaceful work history, as we also do now, seem remotely sufficient to induce a totalitarian world regime. You seem to have a detailed model in mind of how a world totalitarian regime arises; you need to convince us of that model if we are to believe what you see as its implications. Otherwise you sound as paranoid as were abstract fears that reduced internet privacy would lead to a totalitarian US regime.\n\n> [Carl Shulman](http://www.overcomingbias.com/2008/11/when-life-is-ch.html#comment-518240959): I do have a detailed model in mind, considering the [political economy](http://mitpress.mit.edu/books/logic-political-survival) of emulation developers and em societies,^[1](#AI-FOOM-Debatech22.html#enz.22)^[]{#AI-FOOM-Debatech22.html#enz.22.backref} methods of em social control, and the logistics of establishing a singleton. However, a thorough discussion of it would require a number of posts.\n\n> [Carl Shulman](http://www.overcomingbias.com/2008/11/when-life-is-ch.html#comment-518241493): Robin's position does seem to be in tension with [this post](http://www.overcomingbias.com/2008/03/unwanted-morali.html):^[2](#AI-FOOM-Debatech22.html#enz.23)^[]{#AI-FOOM-Debatech22.html#enz.23.backref} if largely selfish humans could work out a deal amongst themselves they would probably want to avoid Robin's favored scenario.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/when-life-is-ch.html#comment-518241518): Carl, if possible people could be in on the deal, they'd prefer a chance at a short life over no life at all. In my scenario, ems we preferred could follow a policy of only creating copies they were sure could live long safe lives. Under the assumption of no externality, the free market labor outcome should be Pareto optimal, and so no deal could make everyone better off.\n\n> [Carl Shulman](http://www.overcomingbias.com/2008/11/when-life-is-ch.html#comment-518241535): But possible future people can't be in on current deals. In the linked post you said that morality was overrated in that morality suggested that we should sacrifice a lot for animals, future generations, and other fairly powerless groups. 
In contrast, you said, dealmaking between current individuals on the basis of their actual preferences would favor currently existing people with power over those other powerless groups.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/when-life-is-ch.html#comment-518241645): Carl, no ems exist at all today. Anyone today who can save some capital would benefit enormously from unrestrained, relative to restrained, em growth. . . .\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/11/when-life-is-ch.html) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech22.html#enz.22} [1](#AI-FOOM-Debatech22.html#enz.22.backref). []{#AI-FOOM-Debatech22.html#cite.0.de-Mesquita.2003}Bruce Bueno de Mesquita et al., *The Logic of Political Survival* (Cambridge, MA: MIT Press, 2003).\n\n[]{#AI-FOOM-Debatech22.html#enz.23} [2](#AI-FOOM-Debatech22.html#enz.23.backref). []{#AI-FOOM-Debatech22.html#cite.0.Hanson.2008i}Robin Hanson, \"Morality Is Overrated,\" *Overcoming Bias* (blog), March 18, 2008, .\n\n[]{#AI-FOOM-Debatech23.html}\n\n## []{#AI-FOOM-Debatech23.html#x27-}[Chapter 22]{.titlemark} . . . Recursion, Magic {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [25 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n**Followup to:** [Cascades, Cycles, Insight . . .](../Text/AI-FOOM-Debatech21.html#x25-)*. . . 4, 5 sources of discontinuity*[]{#AI-FOOM-Debatech23.html#likesection.32} **Recursion** is probably the most difficult part of this topic. We have historical records aplenty of *cascades*, even if untangling the causality is difficult. *Cycles* of reinvestment are the heartbeat of the modern economy. An *insight* that makes a hard problem easy is something that I hope you've experienced at least once in your life . . .\n\nBut we don't have a whole lot of experience redesigning our own neural circuitry.\n\nWe have these wonderful things called \"optimizing compilers.\" A compiler translates programs in a high-level language into machine code (though these days it's often a virtual machine). An \"optimizing compiler,\" obviously, is one that improves the program as it goes.\n\nSo why not write an optimizing compiler *in its own language*, and then *run it on itself* ? And then use the resulting *optimized optimizing compiler* to recompile itself yet *again*, thus producing an *even more optimized optimizing compiler*---\n\nHalt! Stop! Hold on just a minute! An optimizing compiler is not supposed to change the logic of a program---the input/output relations. An optimizing compiler is only supposed to produce code that does *the same thing, only faster*. A compiler isn't remotely near understanding what the program is *doing* and why, so it can't presume to construct *a better input/output function*. 
We just presume that the programmer wants a fixed input/output function computed as fast as possible, using as little memory as possible.\n\nSo if you run an optimizing compiler on its own source code, and then use the product to do the same again, it should produce the *same output* on both occasions---at most, the first-order product will run *faster* than the original compiler.\n\nIf we want a computer program that experiences *cascades* of self-improvement, the path of the optimizing compiler does not lead there---the \"improvements\" that the optimizing compiler makes upon itself do not *improve its ability to improve itself* .\n\nNow if you are one of those annoying nitpicky types, like me, you will notice a flaw in this logic: suppose you built an optimizing compiler that searched over a sufficiently wide range of possible optimizations, that it did not ordinarily have *time* to do a full search of its own space---so that, when the optimizing compiler ran out of time, it would just implement whatever speedups it had already discovered. Then the optimized optimizing compiler, although it would only implement the same logic faster, would do more optimizations in the same time---and so the second output would not equal the first output.\n\nWell . . . that probably doesn't buy you much. Let's say the optimized program is 20% faster, that is, it gets 20% more done in the same time. Then, unrealistically assuming \"optimization\" is linear, the twice-optimized program will be 24% faster, the three-times optimized program will be 24.8% faster, and so on until we top out at a 25% improvement. [k \\< 1](../Text/AI-FOOM-Debatech21.html#x25-).\n\n[]{#AI-FOOM-Debatech23.html#likesection.33} So let us turn aside from optimizing compilers and consider a more interesting artifact, [eurisko]{.textsc}.\n\nTo the best of my inexhaustive knowledge, [eurisko]{.textsc} may *still* be the most sophisticated self-improving AI ever built---in the 1980s, by Douglas Lenat before he started wasting his life on Cyc. [Eurisko]{.textsc} was applied in domains ranging from the [Traveller war game](http://aliciapatterson.org/stories/eurisko-computer-mind-its-own) ([eurisko]{.textsc} became champion without having ever before fought a human) to VLSI circuit design.^[1](#AI-FOOM-Debatech23.html#enz.24)^[]{#AI-FOOM-Debatech23.html#enz.24.backref}\n\n[Eurisko]{.textsc} used \"heuristics\" to, for example, design potential space fleets. It also had *heuristics for suggesting new heuristics*, and metaheuristics could apply to any heuristic, including metaheuristics. E.g., [eurisko]{.textsc} started with the heuristic \"investigate extreme cases\" but moved on to \"investigate cases close to extremes.\" The heuristics were written in RLL, which stands for Representation Language Language. According to Lenat, it was figuring out how to represent the heuristics in such fashion that they could usefully modify themselves, without always just breaking, that consumed most of the conceptual effort in creating [eurisko]{.textsc}.\n\nBut [eurisko]{.textsc} did not go foom.\n\n[Eurisko]{.textsc} could modify even the metaheuristics that modified heuristics. [Eurisko]{.textsc} was, in an important sense, more recursive than either humans or natural selection---a new thing under the Sun, a cycle more closed than anything that had ever existed in this universe.\n\nStill, [eurisko]{.textsc} ran out of steam. Its self-improvements did not spark a sufficient number of new self-improvements. 
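[Eurisko]{.textsc}'s stall and the optimizing-compiler arithmetic a few paragraphs up can be read as the same k < 1 pattern: each round of self-improvement yields only a fixed fraction of the previous round's gain, so the total converges instead of cascading. A minimal sketch, using the 20% first-round speedup and the 20% → 24% → 24.8% → 25% series quoted above (the loop itself is just an illustration):

```python
# Toy illustration of the k < 1 convergence described above: each round of
# self-improvement yields k times the previous round's gain, so with k below 1
# the cumulative speedup tops out instead of going FOOM.
def total_speedup(first_gain: float, k: float, rounds: int) -> float:
    """Cumulative fractional speedup after `rounds` rounds of reinvestment."""
    total, gain = 0.0, first_gain
    for _ in range(rounds):
        total += gain
        gain *= k  # this ratio plays the role of the neutron multiplication factor
    return total

# The optimizing-compiler example above: a 20% first-round speedup, reinvested
# with k = 0.2, gives 20%, 24%, 24.8%, ... and tops out at 25%.
for n in (1, 2, 3, 10):
    print(n, "rounds:", round(total_speedup(0.20, 0.2, n) * 100, 2), "% faster")
```

Push the ratio to 1 or above and the same loop grows without bound, which is the contrast the k < 1 link above is drawing.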
This should not really be too surprising---it's not as if [eurisko]{.textsc} started out with human-level intelligence *plus* the ability to modify itself---its self-modifications were either [evolutionarily blind](http://lesswrong.com/lw/kt/evolutions_are_stupid_but_work_anyway/) or produced by the simple procedural rules of some heuristic or other. That's not going to navigate the search space very fast on an atomic level. Lenat did not stand dutifully apart from his creation, but stepped in and helped [eurisko]{.textsc} prune its own heuristics. But in the end [eurisko]{.textsc} ran out of steam, and Lenat couldn't push it any further.\n\n[Eurisko]{.textsc} lacked what I called \"insight\"---that is, the type of abstract knowledge that lets humans fly through the search space. And so its recursive access to its own heuristics proved to be for naught.\n\nUnless, y'know, you're counting becoming world champion at Traveller, without ever previously playing a human, as some sort of accomplishment.\n\nBut it is, thankfully, a little harder than that to destroy the world---as Lenat's experimental test informed us.\n\nRobin previously asked why [Douglas Engelbart did not take over the world](../Text/AI-FOOM-Debatech3.html#x6-50002), despite his vision of a team building tools to improve tools, and his anticipation of tools like computer mice and hypertext.\n\nOne reply would be, \"Sure, a computer gives you a 10% advantage in doing various sorts of problems, some of which include computers---but there's still a lot of work that the computer *doesn't* help you with---and the mouse doesn't run off and write better mice entirely on its own---so k \\< 1, and it still takes large amounts of human labor to advance computer technology as a whole---plus a lot of the interesting knowledge is nonexcludable so it's hard to capture the value you create---and that's why Buffett could manifest a better take-over-the-world-with-sustained-higher-interest-rates than Engelbart.\"\n\nBut imagine that Engelbart had built a computer mouse, and discovered that each click of the mouse raised his IQ by one point. Then, perhaps, we would have had a *situation* on our hands.\n\nMaybe you could diagram it something like this:\n\n1. [Metacognitive level: [Evolution](http://lesswrong.com/lw/kr/an_alien_god/) is the metacognitive algorithm which produced the wiring patterns and low-level developmental rules for human brains.]{#AI-FOOM-Debatech23.html#x27-26002x1}\n2. [Cognitive level: The brain processes its knowledge (including procedural knowledge) using algorithms that are quite mysterious to the user within them. Trying to program AIs with the sort of instructions humans give each other usually proves not to do anything: [the machinery activated by the levers is missing](http://lesswrong.com/lw/sp/detached_lever_fallacy/).]{#AI-FOOM-Debatech23.html#x27-26004x2}\n3. [Metaknowledge level: Knowledge and skills associated with, e.g., \"science\" as an activity to carry out using your brain---instructing you *when* to try to think of new hypotheses using your mysterious creative abilities.]{#AI-FOOM-Debatech23.html#x27-26006x3}\n4. [Knowledge level: Knowing how gravity works, or how much weight steel can support.]{#AI-FOOM-Debatech23.html#x27-26008x4}\n5. 
[Object level: Specific actual problems, like building a bridge or something.]{#AI-FOOM-Debatech23.html#x27-26010x5}\n\nThis is a *causal* tree, and changes at levels *closer to root* have greater impacts as the effects cascade downward.\n\nSo one way of looking at it is: \"A computer mouse isn't recursive enough.\"\n\nThis is an issue that I need to address at further length, but for today I'm out of time.**Magic** is the final factor I'd like to point out, at least for now, in considering sources of discontinuity for self-improving minds. By \"magic\" I naturally do not refer to [this](http://lesswrong.com/lw/tv/excluding_the_supernatural/).^[2](#AI-FOOM-Debatech23.html#enz.25)^[]{#AI-FOOM-Debatech23.html#enz.25.backref} Rather, \"magic\" in the sense that if you asked nineteenth-century Victorians what they thought the future would bring, they would have talked about flying machines or gigantic engines, and a very few true visionaries would have suggested space travel or Babbage computers. Nanotechnology, not so much.\n\nThe future has a reputation for accomplishing feats which the past thought impossible. Future civilizations have even broken what past civilizations thought (incorrectly, of course) to be the laws of physics. If prophets of AD 1900---never mind AD 1000---had tried to bound the powers of human civilization a billion years later, some of those impossibilities would have been accomplished before the century was out---transmuting lead into gold, for example. Because we remember future civilizations surprising past civilizations, it has become cliché that we can't put limits on our great-grandchildren.\n\nAnd yet everyone in the twentieth century, in the nineteenth century, and in the eleventh century, was human. There is also the sort of magic that a human gun is to a wolf, or the sort of magic that human genetic engineering is to natural selection.\n\nTo \"improve your own capabilities\" is an instrumental goal, and if a smarter intelligence than my own is focused on that goal, [I should expect to be surprised](http://lesswrong.com/lw/v7/expected_creative_surprises/). The mind may find ways to produce *larger jumps* in capability than I can visualize myself. Where higher creativity than mine is at work and looking for shorter shortcuts, the discontinuities that *I* imagine may be dwarfed by the discontinuities that *it* can imagine.\n\nAnd remember how *little* progress it takes---just a hundred years of human time, with everyone still human---to turn things that would once have been \"unimaginable\" into heated debates about feasibility. So if you build a mind smarter than you, and it thinks about how to go FOOM quickly, and it goes FOOM *faster than you imagined possible*, you really have no right to complain---based on the history of mere human history, you should have expected a significant probability of being surprised. Not surprised that the nanotech is 50% faster than you thought it would be. Surprised the way the Victorians would have been surprised by nanotech.\n\nThus the last item on my (current, somewhat ad hoc) list of reasons to expect discontinuity: Cascades, cycles, insight, recursion, magic.\n\n[]{#AI-FOOM-Debatech23.html#likesection.34}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://lesswrong.com/lw/w6/recursion_magic/p56): You really think an office worker with modern computer tools is only 10% more productive than one with 1950-era noncomputer tools? 
Even at the task of creating better computer tools?\n>\n> Many important innovations can be thought of as changing the range of things that can be changed, relative to an inheritance that up to that point was not usefully open to focused or conscious development. And each new item added to the list of things we can usefully change increases the possibilities for growing everything else. (While this potentially allows for an increase in the growth rate, rate changes have actually been very rare.) Why aren't all these changes \"recursive\"? Why reserve that name only for changes to our mental architecture?\n\n> [Robin Hanson](http://lesswrong.com/lw/w6/recursion_magic/p58): You speculate about why [eurisko]{.textsc} slowed to a halt and then complain that Lenat has wasted his life with Cyc, but you ignore that Lenat has his own theory which he gives as the *reason* he's been pursuing Cyc. You should at least explain why you think his theory wrong; I find his theory quite plausible.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/w6/recursion_magic/p5h):\n>\n> > You speculate about why [eurisko]{.textsc} slowed to a halt and then complain that Lenat has wasted his life with Cyc, but you ignore that Lenat has his own theory which he gives as the *reason* he's been pursuing Cyc. You should at least explain why you think his theory wrong; I find his theory quite plausible.\n>\n> [Artificial Addition](http://lesswrong.com/lw/l9/artificial_addition/), [The Nature of Logic](http://lesswrong.com/lw/vt/the_nature_of_logic/), [Truly Part of You](http://lesswrong.com/lw/la/truly_part_of_you/), [Words as Mental Paintbrush Handles](http://lesswrong.com/lw/o9/words_as_mental_paintbrush_handles/), [Detached Lever Fallacy](http://lesswrong.com/lw/sp/detached_lever_fallacy/) . . .\n>\n> > You really think an office worker with modern computer tools is only 10% more productive than one with 1950-era noncomputer tools? Even at the task of creating better computer tools?\n>\n> I'd started to read Engelbart's vast proposal-paper, and he was talking about computers as a tool of *intelligence enhancement*. It's this that I had in mind when, trying to be generous, I said \"10%.\" Obviously there are various object-level problems at which someone with a computer is a *lot* more productive, like doing complicated integrals with no analytic solution.\n>\n> But what concerns us is the degree of *reinvestable* improvement, the sort of improvement that will go into better tools that can be used to make still better tools. 
Office work isn't a candidate for this.\n>\n> And yes, we use programming languages to write better programming languages---but there are some people out there who still swear by Emacs; would the state of *computer science* be so terribly far behind where it is now, after who knows how many cycles of reinvestment, if the mouse had still not been invented?\n>\n> I don't know, but to the extent such an effect existed, I would expect it to be more due to less popular uptake leading to less investment---and not a whole lot due to losing out on the compound interest from a mouse making you, allegedly, 10% smarter, including 10% smarter at the kind of computer science that helps you do further computer science.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/w6/recursion_magic/) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech23.html#enz.24} [1](#AI-FOOM-Debatech23.html#enz.24.backref). []{#AI-FOOM-Debatech23.html#cite.0.Johnson.1984}George Johnson, \"Eurisko, the Computer with a Mind of Its Own,\" Alicia Patterson Foundation, 1984, accessed July 28, 2013, .\n\n[]{#AI-FOOM-Debatech23.html#enz.25} [2](#AI-FOOM-Debatech23.html#enz.25.backref). []{#AI-FOOM-Debatech23.html#cite.0.Yudkowsky.2008h}Eliezer Yudkowsky, \"Excluding the Supernatural,\" *Less Wrong* (blog), September 12, 2008, .\n\n[]{#AI-FOOM-Debatech24.html}\n\n## []{#AI-FOOM-Debatech24.html#x28-}[Chapter 23]{.titlemark} Abstract/Distant Future Bias {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [26 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nThe latest *Science* has a [psych article](http://www.sciencemag.org/cgi/reprint/322/5905/1201.full.pdf) saying we think of distant stuff more abstractly, and vice versa.^[1](#AI-FOOM-Debatech24.html#enz.26)^[]{#AI-FOOM-Debatech24.html#enz.26.backref} \"The brain is hierarchically organized with higher points in the cortical hierarchy representing increasingly more abstract aspects of stimuli\"; activating a region makes nearby activations more likely. This has stunning implications for our biases about the future.\n\n*All of these bring each other more to mind:* here, now, me, us; trend-deviating likely real local events; concrete, context-dependent, unstructured, detailed, goal-irrelevant incidental features; feasible safe acts; secondary local concerns; socially close folks with unstable traits.\n\n*Conversely, all these bring each other more to mind:* there, then, them; trend-following unlikely hypothetical global events; abstract, schematic, context-freer, core, coarse, goal-related features; desirable risk-taking acts, central global symbolic concerns, confident predictions, polarized evaluations, socially distant people with stable traits.\n\nSince these things mostly just cannot go together in reality, this must bias our thinking both about now and about distant futures. When \"in the moment,\" we focus on ourselves and in-our-face details, feel \"one with\" what we see and close to quirky folks nearby, see much as uncertain, and safely act to achieve momentary desires given what seems the most likely current situation. Kinda like smoking weed.\n\nRegarding distant futures, however, we'll be too confident; focus too much on unlikely global events; rely too much on trends, theories, and loose abstractions, while neglecting details and variation. 
We'll assume the main events take place far away (e.g., space) and uniformly across large regions. We'll focus on untrustworthy consistently behaving globally organized social others. And we'll neglect feasibility, taking chances to achieve core grand symbolic values rather than ordinary muddled values. Sound familiar?\n\nMore bluntly, we seem primed to confidently see history as an inevitable march toward a theory-predicted global conflict with an alien united *them* determined to oppose our core symbolic values, making infeasible overly risky overconfident plans to oppose them. We seem primed to neglect the value and prospect of trillions of quirky future creatures not fundamentally that different from us, focused on their simple day-to-day pleasures, mostly getting along peacefully in vastly varied uncoordinated and hard-to-predict local cultures and lifestyles.\n\nOf course being biased to see things a certain way doesn't mean they aren't that way. But it should sure give us pause. Selected quotes for those who want to [dig deeper](http://www.sciencemag.org/cgi/reprint/322/5905/1201.pdf):^[2](#AI-FOOM-Debatech24.html#enz.27)^[]{#AI-FOOM-Debatech24.html#enz.27.backref}\n\n> In sum, different dimensions of psychological distance---spatial, temporal, social, and hypotheticality---correspond to different ways in which objects or events can be removed from the self, and farther removed objects are construed at a higher (more abstract) level. Three hypotheses follow from this analysis. (i) As the various dimensions map onto a more fundamental sense of psychological distance, they should be interrelated. (ii) All of the distances should similarly affect and be affected by the level of construal. People would think more abstractly about distant than about near objects, and more abstract construals would lead them to think of more distant objects. (iii) The various distances would have similar effects on prediction, evaluation, and action. . . .\n>\n> \\[On\\] a task that required abstraction of coherent images from fragmented or noisy visual input . . . performance improved . . . when \\[participants\\] anticipated working on the actual task in the more distant future . . . when participants thought the actual task was less likely to take place and when social distance was enhanced by priming of high social status. . . . Participants who thought of a more distant event created fewer, broader groups of objects. . . . Participants tended to describe more distant future activities (e.g., studying) in high-level terms (e.g., \"doing well in school\") rather than in low-level terms (e.g., \"reading a textbook\"). . . . Compared with in-groups, out-groups are described in more abstract terms and believed to possess more global and stable traits. . . . Participants drew stronger inferences about others' personality from behaviors that took place in spatially distal, as compared with spatially proximal locations. . . . Behavior that is expected to occur in the more distant future is more likely to be explained in dispositional rather than in situational terms. . . .\n>\n> Thinking about an activity in high level, \"why,\" terms rather than low level, \"how,\" terms led participants to think of the activity as taking place in more distant points in time. . . . Students were more confident that an experiment would yield theory-confirming results when they expected the experiment to take place in a more distant point in time. . . . 
Spatial distance enhanced the tendency to predict on the basis of the global trend rather than on the basis of local deviation. . . . As temporal distance from an activity (e.g., attending a guest lecture) increased, the attractiveness of the activity depended more on its desirability (e.g., how interesting the lecture was) and less on its feasibility (e.g., how convenient the timing of the lecture was). . . . People take greater risks (i.e., favoring bets with a low probability of winning a high amount over those that offer a high probability to win a small amount) in decisions about temporally more distant bets.^[3](#AI-FOOM-Debatech24.html#enz.28)^[]{#AI-FOOM-Debatech24.html#enz.28.backref}\n\n[]{#AI-FOOM-Debatech24.html#likesection.35}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/abstractdistant.html#comment-518249093):\n>\n> > We seem primed to neglect the value and prospect of trillions of quirky future creatures not fundamentally that different from us, focused on their simple day-to-day pleasures, mostly getting along peacefully in vastly varied uncoordinated and hard-to-predict local cultures and lifestyles.\n>\n> Isn't this an example of trying to reverse stupidity? If there's a bias to conclude A composed of A~1~ - A~9~, you can't conclude that the future is the conjunction ¬A~1~&¬A~2~&¬A~3~ . . .\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/abstractdistant.html#comment-518249154): To sharpen my comment above, what we want to say is:\n>\n> > We seem primed to neglect the value and prospect of futures containing at least one of the following elements: Trillions of beings, quirky beings, beings not fundamentally that different from us, beings focused on simple day-to-day pleasures, beings mostly getting along peacefully, beings in vastly varied and uncoordinated cultures and lifestyles . . .\n>\n> Yes, I know it's less poetic, but it really does paint a substantially different picture of the future.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/abstractdistant.html#comment-518249182): Eliezer, this cognitive bias does not seem to saturate after one invocation. They didn't mention data directly testing this point, but it really does seem that all else equal we have an inborn tendency to add more compatible elements to a scenario, regardless of how many other of these elements are already in it.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/11/abstractdistant.html) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech24.html#enz.26} [1](#AI-FOOM-Debatech24.html#enz.26.backref). []{#AI-FOOM-Debatech24.html#cite.0.Liberman.2008}Nira Liberman and Yacov Trope, \"The Psychology of Transcending the Here and Now,\" *Science* 322, no. 5905 (2008): 1201--1205, doi:[10.1126/science.](http://dx.doi.org/10.1126/science.).\n\n[]{#AI-FOOM-Debatech24.html#enz.27} [2](#AI-FOOM-Debatech24.html#enz.27.backref). [Ibid.](#AI-FOOM-Debatech24.html#cite.0.Liberman.2008)\n\n[]{#AI-FOOM-Debatech24.html#enz.28} [3](#AI-FOOM-Debatech24.html#enz.28.backref). 
[Ibid.](#AI-FOOM-Debatech24.html#cite.0.Liberman.2008)\n\n[]{#AI-FOOM-Debatech25.html}\n\n## []{#AI-FOOM-Debatech25.html#x29-}[Chapter 24]{.titlemark} Engelbart: Insufficiently Recursive {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [26 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n**Followup to:** [Cascades, Cycles, Insight](../Text/AI-FOOM-Debatech21.html#x25-), [Recursion, Magic](../Text/AI-FOOM-Debatech23.html#x27-)\n\n**Reply to:** [Engelbart As *UberTool*?](../Text/AI-FOOM-Debatech3.html#x6-50002)\n\nWhen Robin originally [suggested](../Text/AI-FOOM-Debatech3.html#x6-50002) that Douglas Engelbart, best known as the inventor of the computer mouse, would have been a good candidate for taking over the world via [compound interest on tools that make tools](../Text/AI-FOOM-Debatech2.html#x5-40001), my initial reaction was, \"What on Earth? With a *mouse*?\"\n\nOn reading the initial portions of Engelbart's \"[Augmenting Human Intellect: A Conceptual Framework](http://www.dougengelbart.org/pubs/augment-3906.html),\"^[1](#AI-FOOM-Debatech25.html#enz.29)^[]{#AI-FOOM-Debatech25.html#enz.29.backref} it became a lot clearer where Robin was coming from.\n\nSometimes it's hard to see through the eyes of the past. Engelbart was a computer pioneer, and in the days when all these things were just getting started, he had a vision of using computers to systematically augment human intelligence. That was what he thought computers were *for*. That was the ideology lurking behind the mouse. Something that makes its users smarter---now that sounds a bit more plausible as an *UberTool*.\n\nLooking back at Engelbart's plans with benefit of hindsight, I see two major factors that stand out:\n\n1. [Engelbart committed the Classic Mistake of AI: underestimating how much cognitive work gets done by hidden algorithms running beneath the surface of introspection, and overestimating what you can do by fiddling with the [visible control levers](http://lesswrong.com/lw/sp/detached_lever_fallacy/).]{#AI-FOOM-Debatech25.html#x29-28002x1}\n2. [Engelbart [anchored](http://lesswrong.com/lw/j7/anchoring_and_adjustment/) on the way that someone *as intelligent as Engelbart* would use computers, but there was only one of him---and due to point (1) above, he couldn't use computers to make other people as smart as him.]{#AI-FOOM-Debatech25.html#x29-28004x2}\n\nTo start with point (2): They had more reverence for computers back in the old days. Engelbart visualized a system carefully designed to flow with every step of a human's work and thought, assisting every iota it could manage along the way. And the human would be trained to work with the computer, the two together dancing a seamless dance.\n\nAnd the problem with this was not *just* that computers got cheaper and that programmers wrote their software more hurriedly.\n\nThere's a now-legendary story about [the Windows Vista shutdown menu](http://moishelettvin.blogspot.com/2006/11/windows-shutdown-crapfest.html), a simple little feature into which forty-three different Microsoft people had input.^[2](#AI-FOOM-Debatech25.html#enz.30)^[]{#AI-FOOM-Debatech25.html#enz.30.backref} The debate carried on for over a year. The final product ended up as the lowest common denominator---a couple of hundred lines of code and a very visually unimpressive menu.\n\nSo even when lots of people spent a tremendous amount of time thinking about a single feature of the system---it still didn't end up very impressive. 
Jef Raskin could have done better than that, I bet. But Raskins and Engelbarts are rare.\n\nYou see the same effect in [Eric Drexler's chapter on hypertext in *Engines of Creation*](http://e-drexler.com/d/06/00/EOC/EOC_Chapter_14.html):^[3](#AI-FOOM-Debatech25.html#enz.31)^[]{#AI-FOOM-Debatech25.html#enz.31.backref} Drexler imagines the power of the Web to use two-way links and user annotations to promote informed criticism. ([As opposed to the way we actually use it.](http://lesswrong.com/lw/j1/stranger_than_history/)) And if the average Web user were Eric Drexler, the Web probably *would* work that way by now.\n\nBut no piece of software that has yet been developed, by mouse or by Web, can turn an average human user into Engelbart or Raskin or Drexler. You would very probably have to reach into the brain and rewire neural circuitry directly; I don't think *any* sense input or motor interaction would accomplish such a thing.\n\nWhich brings us to point (1).\n\nIt does look like Engelbart was under the spell of the \"[logical](http://lesswrong.com/lw/vt/the_nature_of_logic/)\" paradigm that prevailed in AI at the time he made his plans. (Should he even lose points for that? He went with the mainstream of that science.) He did not see it as an [impossible](http://lesswrong.com/lw/un/on_doing_the_impossible/) problem to have computers help humans *think*---he seems to have underestimated the difficulty in much the same way that the field of AI once severely underestimated the work it would take to make computers themselves solve cerebral-seeming problems. (Though I am saying this, reading heavily between the lines of one single paper that he wrote.) He talked about how the core of thought is symbols, and speculated on how computers could help people manipulate those symbols.\n\nI have already said much on why people tend to underestimate the amount of serious heavy lifting that gets done by cognitive algorithms hidden inside black boxes that run out of your introspective vision, and overestimate what you can do by duplicating the easily visible introspective control levers. The word \"apple,\" for example, is a visible lever; you can say it or not say it, [its presence or absence is salient](http://lesswrong.com/lw/sp/detached_lever_fallacy/). The algorithms of a visual cortex that let you visualize what an apple would look like upside down---we all have these in common, and they are not introspectively accessible. Human beings knew about apples a long, long time before they knew there was even such a thing as the visual cortex, let alone beginning to unravel the algorithms by which it operated.\n\nRobin Hanson [asked](../Text/AI-FOOM-Debatech23.html#x27-) me:\n\n> You really think an office worker with modern computer tools is only 10% more productive than one with 1950-era noncomputer tools? Even at the task of creating better computer tools?\n\nBut remember the parable of the optimizing compiler run on its own source code---maybe it makes itself 50% faster, but only once; the changes don't increase its ability to make future changes. So indeed, we should not be too impressed by a 50% increase in office worker productivity---not for purposes of asking about FOOMs. We should ask whether that increase in productivity translates into tools that create further increases in productivity.\n\nAnd this is where the problem of underestimating hidden labor starts to bite. Engelbart rhapsodizes (accurately!) 
on the wonders of being able to cut and paste text while writing, and how superior this should be compared to the typewriter. But suppose that Engelbart overestimates, by a factor of ten, how much of the intellectual labor of writing goes into fighting the typewriter. Then because Engelbart can only help you cut and paste more easily, and *cannot* rewrite those hidden portions of your brain that labor to come up with good sentences and good arguments, the actual improvement he delivers is a tenth of what he thought it would be. An anticipated 20% improvement becomes an actual 2% improvement. k way less than 1.\n\nThis will hit particularly hard if you think that computers, with some hard work on the user interface, and some careful training of the humans, ought to be able to help humans with the type of \"creative insight\" or \"scientific labor\" that goes into *inventing new things to do with the computer*. If you thought that the surface symbols were where most of the intelligence resided, you would anticipate that computer improvements would hit back hard to this meta level and create people who were more scientifically creative and who could design even better computer systems.\n\nBut if really you can only help people *type up* their ideas, while all the hard creative labor happens in the shower thanks to very-poorly-understood cortical algorithms---then you are much less like neutrons cascading through uranium, and much more like an optimizing compiler that gets a single speed boost and no more. It looks like the person is 20% more productive, but in the aspect of intelligence that potentially *cascades to further improvements* they're only 2% more productive, if that.\n\n(Incidentally . . . I once met a science-fiction author of a previous generation, and mentioned to him that the part of my writing I most struggled with was my tendency to revise and revise and revise things I had already written, instead of writing new things. And he said, \"Yes, that's why I went back to the typewriter. The word processor made it too easy to revise things; I would do too much polishing, and writing stopped being fun for me.\" It made me wonder if there'd be demand for an *author's word processor* that wouldn't let you revise anything until you finished your first draft.\n\nBut this could be chalked up to the humans not being trained as carefully, nor the software designed as carefully, as in the process Engelbart envisioned.)\n\nEngelbart wasn't trying to take over the world *in person*, or with a small group. Yet had he *tried* to go the *[UberTool](../Text/AI-FOOM-Debatech2.html#x5-40001)* route, we can reasonably expect he would have failed---that is, failed at advancing far beyond the outside world in internal computer technology while selling only *UberTool*'s services to outsiders.\n\nWhy? Because it takes too much *human* labor to develop computer software and computer hardware, and this labor cannot be automated away as a one-time cost. If the world outside your window has a thousand times as many brains, a 50% productivity boost that only cascades to a 10% and then a 1% additional productivity boost will not let you win against the world. If your *UberTool* was *itself a mind*, if cascades of self-improvement could *fully* automate away more and more of the *intellectual* labor performed by the outside world---then it would be a different story. 
But while the development path wends inexorably through thousands and millions of engineers, and you *can't* divert that path through an internal computer, you're not likely to pull far ahead of the world. You can just choose between giving your own people a 10% boost, or selling your product on the market to give lots of people a 10% boost.\n\nYou can have trade secrets, and sell only your services or products---many companies follow that business plan; any company that doesn't sell its source code does so. But this is just keeping one small advantage to yourself, and adding that as a cherry on top of the technological progress handed you by the outside world. It's not having more technological progress inside than outside.\n\nIf you're getting most of your technological progress *handed to you*---your resources not being sufficient to do it in-house---then you won't be able to apply your private productivity improvements to most of your actual velocity, since most of your actual velocity will come from outside your walls. If you only create 1% of the progress that you use, then a 50% improvement becomes a 0.5% improvement. The domain of potential recursion and potential cascades is much smaller, diminishing k. As if only 1% of the uranium *generating* your neutrons were available for *chain reactions* to be fissioned further.\n\nWe don't live in a world that cares intensely about milking every increment of velocity out of scientific progress. A 0.5% improvement is easily lost in the noise. Corporations and universities routinely put obstacles in front of their internal scientists that cost them more than 10% of their potential. This is one of those problems where not everyone is Engelbart (and you can't just rewrite their source code either).\n\nFor completeness, I should mention that there are generic obstacles to pulling an *UberTool*. Warren Buffett has gotten a sustained higher interest rate than the economy at large, and is widely *believed* to be capable of doing so indefinitely. In principle, the economy could have invested hundreds of billions of dollars as soon as Berkshire Hathaway had a sufficiently long track record to rule out chance. Instead, Berkshire has grown mostly by compound interest. We *could* live in a world where asset allocations were ordinarily given as a mix of stocks, bonds, real estate, and Berkshire Hathaway. We don't live in that world for a number of reasons: financial advisors not wanting to make themselves appear irrelevant, strange personal preferences on the part of Buffett . . .\n\nThe economy doesn't always do the obvious thing, like flow money into Buffett until his returns approach the average return of the economy. Interest rate differences much higher than 0.5%, on matters that people care about far more intensely than Science, are ignored if they're not presented in exactly the right format to be seized.\n\nAnd it's not easy for individual scientists or groups to capture the value created by scientific progress. Did Einstein die with 0.1% of the value that he created? Engelbart in particular doesn't seem to have *tried* to be Bill Gates, at least not as far as I know.\n\nWith that in mind---in one sense Engelbart succeeded at a good portion of what he *actually set out* to do: computer mice *did* take over the world.\n\nBut it was a broad slow cascade that mixed into the usual exponent of economic growth. Not a concentrated fast FOOM. 
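The cascade arithmetic in the preceding chapter can be made concrete with a minimal numeric sketch. This is an editorial illustration, not from the original post: the function name `cascaded_gains`, the constant `reinvestable_fraction`, and the value 0.2 are assumptions chosen for illustration (the post's own sequence of 50%, then 10%, then 1% shrinks even faster). The point it shows is that when only a fraction of each productivity gain feeds back into building better tools, the successive boosts shrink geometrically and their sum stays bounded, which is what a k well below 1 amounts to.

```python
# Illustrative sketch only (not from the original post): a one-time tool
# improvement compounds, or fails to, depending on how much of each gain
# is "reinvestable" -- i.e., feeds back into making still better tools.

def cascaded_gains(initial_boost, reinvestable_fraction, rounds=4):
    """Return the successive productivity boosts from one improvement.

    initial_boost:         e.g. 0.5 for a one-time 50% improvement
    reinvestable_fraction: assumed constant share of each boost that goes
                           into building better tools (a stand-in for k)
    """
    boosts = [initial_boost]
    for _ in range(rounds):
        boosts.append(boosts[-1] * reinvestable_fraction)
    return boosts

# In the spirit of the Engelbart case: a 50% boost that cascades to a 10%
# boost, then ever-smaller ones -- the series dies out instead of exploding.
print([round(b, 4) for b in cascaded_gains(0.5, 0.2)])
# [0.5, 0.1, 0.02, 0.004, 0.0008] -- total gain bounded by 0.5 / (1 - 0.2)

# If only 1% of the progress you use is produced in-house, a local 50%
# improvement only compounds on that 1% share: an effective 0.5% gain.
print(0.01 * 0.5)  # 0.005, i.e. 0.5%
```

With `reinvestable_fraction` at or above 1 the same loop diverges rather than converging, which is the k > 1 regime the essays contrast with Engelbart's case.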
To produce a concentrated FOOM, you've got to be able to swallow as much as possible of the processes *driving* the FOOM *into* the FOOM. Otherwise you can't improve those processes and you can't cascade through them and your k goes down. Then your interest rates won't even be as much higher than normal as, say, Warren Buffett's. And there's no grail to be *won*, only profits to be made: If you have no realistic hope of beating the world, you may as well join it.\n\n[]{#AI-FOOM-Debatech25.html#likesection.36}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/w8/engelbart_insufficiently_recursive/p6d): Humanity is in a FOOM relative to the rest of the biosphere but of course it doesn't seem ridiculously fast to *us*; the question from our standpoint is whether a brain in a box in a basement can go FOOM relative to human society. Anyone who thinks that, because we're already growing at a high rate, the distinction between that and a nanotech-capable superintelligence must not be very important is being just a little silly. It may not even be wise to call them by the same name, if it tempts you to such folly---and so I would suggest reserving \"FOOM\" for things that go very fast relative to \\*you\\*.\n>\n> For the record, I've been a coder and judged myself a reasonable hacker---set out to design my own programming language at one point, which I say not as a mark of virtue but just to demonstrate that I was in the game. (Gave it up when I realized AI wasn't about programming languages.)\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/w8/engelbart_insufficiently_recursive/) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech25.html#enz.29} [1](#AI-FOOM-Debatech25.html#enz.29.backref). Engelbart, [*Augmenting Human Intellect*](../Text/AI-FOOM-Debatech3.html#cite.0.Engelbart.1962).\n\n[]{#AI-FOOM-Debatech25.html#enz.30} [2](#AI-FOOM-Debatech25.html#enz.30.backref). []{#AI-FOOM-Debatech25.html#cite.0.Lettvin.2006}Moishe Lettvin, \"The Windows Shutdown Crapfest,\" *Moishe's Blog* (blog), November 24, 2006, .\n\n[]{#AI-FOOM-Debatech25.html#enz.31} [3](#AI-FOOM-Debatech25.html#enz.31.backref). []{#AI-FOOM-Debatech25.html#cite.0.Drexler.1986}K. Eric Drexler, *Engines of Creation* (Garden City, NY: Anchor, 1986).\n\n[]{#AI-FOOM-Debatech26.html}\n\n## []{#AI-FOOM-Debatech26.html#x30-}[Chapter 25]{.titlemark} Total Nano Domination {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [27 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n**Followup to:** [Engelbart: Insufficiently Recursive](../Text/AI-FOOM-Debatech25.html#x29-)\n\nThe computer revolution had [cascades and insights](../Text/AI-FOOM-Debatech21.html#x25-) aplenty. Computer tools are routinely used to create tools, from using a C compiler to write a Python interpreter to using theorem-proving software to help design computer chips. I would not *yet* rate computers as being very deeply *[recursive](../Text/AI-FOOM-Debatech23.html#x27-)*---I don't think they've improved our own thinking processes even so much as the Scientific Revolution---*yet*. 
But some of the ways that computers are used to improve computers verge on being repeatable ([cyclic](../Text/AI-FOOM-Debatech21.html#x25-)).\n\nYet no individual, no localized group, nor even country, managed to get a sustained advantage in computing power, compound the interest on cascades, and take over the world. There was never a Manhattan moment when a computing advantage *temporarily* gave one country a supreme military advantage, like the US and its atomic bombs for that brief instant at the end of WW2. In computing there was no equivalent of \"We've just crossed the [sharp threshold of criticality](../Text/AI-FOOM-Debatech21.html#x25-), and now our pile doubles its neutron output every *two minutes*, so we can produce lots of plutonium and you can't.\"\n\nWill the development of nanotechnology go the same way as computers---a smooth, steady developmental curve spread across many countries, no one project taking into itself a substantial fraction of the world's whole progress? Will it be more like the Manhattan Project, one country gaining a (temporary?) huge advantage at huge cost? Or could a small group with an initial advantage cascade and outrun the world?\n\nJust to make it clear why we might worry about this for nanotech, rather than say car manufacturing---if you can build things from atoms, then the environment contains an unlimited supply of perfectly machined spare parts. If your molecular factory can build solar cells, it can acquire energy as well.\n\nSo full-fledged Drexlerian [molecular nanotechnology](http://en.wikipedia.org/wiki/Molecular_nanotechnology) (Wikipedia) can plausibly automate away much of the *manufacturing* in its *material* supply chain. If you already have nanotech, you may not need to consult the outside economy for inputs of energy or raw material.\n\nThis makes it more plausible that a nanotech group could localize off, and do its own compound interest away from the global economy. If you're Douglas Engelbart building better software, you still need to consult Intel for the hardware that runs your software, and the electric company for the electricity that powers your hardware. It would be a *considerable expense* to build your own fab lab for your chips (that makes chips as good as Intel) and your own power station for electricity (that supplies electricity as cheaply as the utility company).\n\nIt's not just that this tends to entangle you with the fortunes of your trade partners, but also that---as an *UberTool Corp* keeping your trade secrets in-house---you can't improve the hardware you get, or drive down the cost of electricity, as long as these things are done outside. Your cascades can only go through what you do locally, so the more you do locally, the more likely you are to get a compound interest advantage. (Mind you, I don't think Engelbart could have gone FOOM even if he'd made his chips locally and supplied himself with electrical power---I just don't think the compound advantage on using computers to make computers is powerful enough to sustain [k \\> 1](../Text/AI-FOOM-Debatech21.html#x25-).)\n\nIn general, the more capabilities are localized into one place, the less people will depend on their trade partners, the more they can cascade locally (apply their improvements to yield further improvements), and the more a \"critical cascade\"/FOOM sounds plausible.\n\nYet self-replicating nanotech is a very *advanced* capability. You don't get it right off the bat. 
Sure, lots of biological stuff has this capability, but this is a misleading coincidence---it's not that self-replication is *easy*, but that evolution, *for its own [alien reasons](http://lesswrong.com/lw/kr/an_alien_god/)*, tends to build it into everything. (Even individual cells, which is ridiculous.)\n\nIn the *run-up* to nanotechnology, it seems not implausible to suppose a continuation of the modern world. Today, many different labs work on small pieces of nanotechnology---fortunes entangled with their trade partners, and much of their research velocity coming from advances in other laboratories. Current nanotech labs are dependent on the outside world for computers, equipment, science, electricity, and food; any single lab works on a small fraction of the puzzle, and contributes small fractions of the progress.\n\nIn short, so far nanotech is going just the same way as computing.\n\nBut it is a tad [premature](http://lesswrong.com/lw/km/motivated_stopping_and_motivated_continuation/)---I would even say that it crosses the line into the \"silly\" species of futurism---to exhale a sigh of relief and say, \"Ah, that settles it---no need to consider any further.\"\n\nWe all know how exponential multiplication works: 1 microscopic nanofactory, 2 microscopic nanofactories, 4 microscopic nanofactories . . . let's say there's a hundred different groups working on self-replicating nanotechnology and one of those groups succeeds one week earlier than the others. [Rob Freitas](http://www.foresight.org/nano/Ecophagy.html) has calculated that some species of replibots could spread through the Earth in two days (even given what seem to me like highly conservative assumptions in a context where conservatism is not appropriate).^[1](#AI-FOOM-Debatech26.html#enz.32)^[]{#AI-FOOM-Debatech26.html#enz.32.backref}\n\nSo, even if the race seems very tight, whichever group gets replibots *first* can take over the world given a mere week's lead time---\n\nYet wait! Just having replibots doesn't let you take over the world. You need fusion weapons, or surveillance bacteria, or some other way to actually *govern*. That's a lot of matterware---a lot of design and engineering work. A replibot advantage doesn't equate to a weapons advantage, unless, somehow, the planetary economy has already published the open-source details of fully debugged weapons that you can build with your newfound private replibots. Otherwise, a lead time of one week might not be anywhere near enough.\n\nEven more importantly---\"self-replication\" is not a binary, 0-or-1 attribute. Things can be partially self-replicating. You can have something that manufactures 25% of itself, 50% of itself, 90% of itself, or 99% of itself---but still needs one last expensive computer chip to complete the set. So if you have twenty-five countries racing, sharing some of their results and withholding others, there isn't *one morning* where you wake up and find that one country has self-replication.\n\nBots become successively easier to manufacture; the factories get successively cheaper. By the time one country has bots that manufacture themselves from environmental materials, many other countries have bots that manufacture themselves from feedstock. By the time one country has bots that manufacture themselves entirely from feedstock, other countries have produced some bots using assembly lines. 
The nations also have all their old conventional arsenal, such as intercontinental missiles tipped with thermonuclear weapons, and these have deterrent effects against crude nanotechnology. No one ever gets a *discontinuous* military advantage, and the world is safe (?).\n\nAt this point, I do feel obliged to recall the notion of \"[burdensome details](http://lesswrong.com/lw/jk/burdensome_details/),\" that we're spinning a story out of many conjunctive details, any one of which could go wrong. This is not an argument in favor of anything in particular, just a reminder not to be seduced by stories that are too specific. When I contemplate the sheer raw power of nanotechnology, I don't feel confident that the fabric of society can even survive the *sufficiently plausible prospect* of its near-term arrival. If your intelligence estimate says that Russia (the new belligerent Russia under Putin) is going to get self-replicating nanotechnology in a year, what does that do to Mutual Assured Destruction? What if Russia makes a similar intelligence assessment of the US? What happens to the capital markets? I can't even foresee how our world will react to the *prospect* of various nanotechnological capabilities as they promise to be developed in the future's near future. Let alone envision how society would *actually change* as full-fledged molecular nanotechnology was developed, even if it were developed gradually . . .\n\n. . . but I suppose the Victorians might say the same thing about nuclear weapons or computers, and yet we still have a global economy---one that's actually a lot more interdependent than theirs, thanks to nuclear weapons making small wars less attractive, and computers helping to coordinate trade.\n\nI'm willing to believe in the possibility of a smooth, gradual ascent to nanotechnology, so that no one state---let alone any corporation or small group---ever gets a discontinuous advantage.\n\nThe main reason I'm willing to believe this is because of the difficulties of *design* and *engineering*, even after all manufacturing is solved. When I read Drexler's *Nanosystems*, I thought: \"Drexler uses properly conservative assumptions everywhere I can see, except in one place---debugging. He assumes that any failed component fails visibly, immediately, and without side effects; *this* is not conservative.\"\n\nIn *principle*, we have complete control of our computers---every bit and byte is under human command---and yet it still takes an immense amount of engineering work on top of that to make the bits do what we want. This, and not any difficulties of manufacturing things once they *are* designed, is what takes an international supply chain of millions of programmers.\n\nBut we're *still* not out of the woods.\n\nSuppose that, by a providentially incremental and distributed process, we arrive at a world of full-scale molecular nanotechnology---a world where *designs*, if not finished material goods, are traded among parties. In a global economy large enough that no one actor, or even any one state, is doing more than a fraction of the total engineering.\n\nIt would be a *very* different world, I expect; and it's possible that my essay may have already degenerated into nonsense. But even if we still have a global economy after getting this far---then we're *still* not out of the woods.\n\nRemember those [ems](../Text/AI-FOOM-Debatech17.html#x21-)? The emulated humans-on-a-chip? 
The uploads?\n\nSuppose that, with molecular nanotechnology already in place, there's an international race for reliable uploading---with some results shared, and some results private---with many state and some nonstate actors.\n\nAnd suppose the race is so tight that the first state to develop working researchers-on-a-chip only has a *one-day* lead time over the other actors.\n\nThat is---one day before anyone else, they develop uploads sufficiently undamaged, or capable of sufficient recovery, that the ems can carry out research and development. In the domain of, say, uploading.\n\nThere are other teams working on the problem, but their uploads are still a little off, suffering seizures and having memory faults and generally having their cognition degraded to the point of not being able to contribute. ([Note]{.textsc}: I think this whole future is a wrong turn and we should stay away from it; I am not endorsing this.)\n\nBut this one team, though---their uploads still have a few problems, but they're at least sane enough and smart enough to start . . . fixing their problems themselves?\n\nIf there's already full-scale nanotechnology around when this happens, then even with some inefficiency built in, the first uploads may be running at ten thousand times human speed. Nanocomputers are powerful stuff.\n\nAnd in an hour, or around a year of internal time, the ems may be able to upgrade themselves to a hundred thousand times human speed and fix some of the remaining problems.\n\nAnd in another hour, or ten years of internal time, the ems may be able to get the factor up to a million times human speed, and start working on intelligence enhancement . . .\n\nOne could, of course, voluntarily publish the improved-upload protocols to the world and give everyone else a chance to join in. But you'd have to trust that not a single one of your partners were holding back a trick that lets them run uploads at ten times your own maximum speed (once the bugs were out of the process). That kind of advantage could snowball quite a lot, in the first sidereal day.\n\nNow, if uploads are *gradually* developed *at a time when computers are too slow to run them quickly*---meaning, *before* molecular nanotech and nanofactories come along---then this whole scenario is averted; the first high-fidelity uploads, running at a hundredth of human speed, will grant no special advantage. (Assuming that no one is pulling any spectacular snowballing tricks with intelligence enhancement---but they would have to snowball fast and hard to confer advantage on a small group running at low speeds. The same could be said of brain-computer interfaces, developed before or after nanotechnology, if running in a small group at merely human speeds. I would credit their world takeover, but I suspect Robin Hanson wouldn't at this point.)\n\nNow, I don't *really* believe in any of this---this whole scenario, this whole world I'm depicting. In real life, I'd expect someone to brute-force an unFriendly AI on one of those super-ultimate-nanocomputers, followed in short order by the end of the world. But that's a separate issue. And this whole world seems too much like our own, after too much technological change, to be realistic to me. World government with an insuperable advantage? Ubiquitous surveillance? I don't like the ideas, but both of them would change the game dramatically . . 
.\n\nBut the real point of this essay is to illustrate a point more important than nanotechnology: **as optimizers become more self-swallowing, races between them are more unstable.**\n\nIf you sent a modern computer back in time to 1950---containing many modern software tools in compiled form, but no future history or declaratively stored future science---I would guess that the recipient could *not* use it to take over the world. Even if the USSR got it. Our computing *industry* is a very powerful thing, but it relies on a supply chain of chip factories.\n\nIf someone got a future *nanofactory* with a library of future nanotech applications---including designs for things like fusion power generators and surveillance bacteria---they might really be able to *take over the world*. The nanofactory swallows its own supply chain; it incorporates replication within itself. If the owner fails, it won't be for lack of factories. It will be for lack of ability to develop new matterware fast enough, and apply existing matterware fast enough, to take over the world.\n\nI'm not saying that nanotech *will* appear from nowhere with a library of designs---just making a point about concentrated power and the instability it implies.\n\nThink of all the tech news that you hear about once---say, an article on *Slashdot* about yada yada 50% improved battery technology---and then you never hear about again, because it was too expensive or too difficult to manufacture.\n\nNow imagine a world where the news of a 50% improved battery technology comes down the wire, and the head of some country's defense agency is sitting down across from engineers and intelligence officers and saying, \"We have five minutes before all of our rival's weapons are adapted to incorporate this new technology; how does that affect our balance of power?\" Imagine that happening as often as \"amazing breakthrough\" articles appear on *Slashdot*.\n\nI don't mean to doomsay---the Victorians would probably be pretty surprised we haven't blown up the world with our ten-minute ICBMs, but we don't live in their world---well, maybe doomsay just a little---but the point is: *It's less stable*. Improvements cascade faster once you've swallowed your manufacturing supply chain.\n\nAnd if you sent back in time a single nanofactory, *and* a single upload living inside it---then the world might end in five minutes or so, as we bios measure time.\n\nThe point being not that an upload *will* suddenly appear, but that now you've swallowed your supply chain *and* your R&D chain.\n\nAnd so this world is correspondingly more unstable, even if all the actors start out in roughly the same place. Suppose a state manages to get one of those *Slashdot*-like technology improvements---only this one lets uploads think 50% faster---and they get it fifty minutes before anyone else, at a point where uploads are running ten thousand times as fast as human (50 mins. ≈1 year)---and in that extra half year, the uploads manage to find another couple of 50% improvements . . .\n\nNow, you *can* suppose that all the actors are all trading all of their advantages and holding nothing back, so everyone stays nicely synchronized.\n\nOr you can suppose that enough trading is going on that most of the research any group benefits from comes from *outside* that group, and so a 50% advantage for a local group doesn't cascade much.\n\nBut again, that's not the point. 
The point is that in modern times, with the modern computing industry, where commercializing an advance requires building a new computer factory, a bright idea that has gotten as far as showing a 50% improvement in the laboratory is merely one more article on *Slashdot*.\n\nIf everything could instantly be rebuilt via nanotech, that laboratory demonstration could precipitate an instant international military crisis.\n\nAnd if there are uploads around, so that a cute little 50% advancement in a certain kind of hardware recurses back to imply *50% greater speed at all future research*---then this *Slashdot* article could become the key to world domination.\n\nAs systems get more self-swallowing, they cascade harder; and even if all actors start out equivalent, races between them get much more unstable. I'm not claiming it's impossible for that world to be stable. The Victorians might have thought that about ICBMs. But that subjunctive world contains *additional* instability compared to our own and would need *additional* centripetal forces to end up as stable as our own.\n\nI expect Robin to disagree with some part of this essay, but I'm not sure which part or how.\n\n[]{#AI-FOOM-Debatech26.html#likesection.37}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://lesswrong.com/lw/w9/total_nano_domination/p6p): Well, at long last you finally seem to be laying out the heart of your argument. Dare I hope that we can conclude our discussion by focusing on these issues, or are there yet more layers to this onion?\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/w9/total_nano_domination/p6u): It takes two people to make a disagreement; I don't *know* what the heart of my argument is from your perspective!\n>\n> This essay treats the simpler and less worrisome case of nanotech. Quickie preview of AI:\n>\n> - When you upgrade to AI there are harder faster cascades because the development idiom is even more recursive, and there is an overhang of hardware capability we don't understand how to use.\n> - There are probably larger development gaps between projects due to a larger role for insights.\n> - There are more barriers to trade between AIs, because of the differences of cognitive architecture---different AGI projects have far less in common today than nanotech projects, and there is very little sharing of cognitive content even in ordinary AI.\n> - Even if AIs trade improvements among themselves, there's a huge barrier to applying those improvements to human brains, uncrossable short of very advanced technology for uploading and extreme upgrading.\n> - So even if many unFriendly AI projects are developmentally synchronized and mutually trading, they may come to their own compromise, do a synchronized takeoff, and eat the biosphere; without caring for humanity, humane values, or any sort of existence for themselves that we regard as worthwhile . . .\n>\n> But I don't know if you regard any of that as the *important* part of the argument, or if the key issue in our disagreement happens to be already displayed *here*. If it's here, we should resolve it here, because nanotech is much easier to understand.\n\n> [Robin Hanson](http://lesswrong.com/lw/w9/total_nano_domination/p6z): In your one upload team a day ahead scenario, by \"full-scale nanotech\" you apparently mean oriented around very local production. That is, they don't suffer much efficiency reduction by building everything themselves on-site via completely automated production. 
The overall efficiency of this tech with available cheap feedstocks allows a doubling time of much less than one day. And in much less than a day this tech plus feedstocks cheaply available to this one team allow it to create more upload equivalents (scaled by speedups) than all the other teams put together. Do I understand you right?\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/w9/total_nano_domination/p70): As I understand nanocomputers, it shouldn't really take all that *much* nanocomputer material to run more uploads than a bunch of bios---like, a cubic meter of nanocomputers total, and a megawatt of electricity, or something like that. The key point is that you have such-and-such amount of nanocomputers available---it's not a focus on material production per se.\n>\n> Also, bear in mind that I already acknowledged that you could have a slow run-up to uploading such that there's no hardware overhang when the very first uploads capable of doing their own research are developed---the one-day lead and the fifty-minute lead are two different scenarios above.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/w9/total_nano_domination/) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech26.html#enz.32} [1](#AI-FOOM-Debatech26.html#enz.32.backref). []{#AI-FOOM-Debatech26.html#cite.0.Freitas.2000}Robert A. Freitas Jr., \"Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations,\" Foresight Institute, April 2000, accessed July 28, 2013, .\n\n[]{#AI-FOOM-Debatech27.html}\n\n## []{#AI-FOOM-Debatech27.html#x31-}[Chapter 26]{.titlemark} Dreams of Autarky {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [27 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nSelections from my 1999 essay \"[Dreams of Autarky](http://hanson.gmu.edu/dreamautarky.html)\":^[1](#AI-FOOM-Debatech27.html#enz.33)^[]{#AI-FOOM-Debatech27.html#enz.33.backref}\n\n> \\[Here is\\] an important common bias on \"our\" side, i.e., among those who expect specific very large changes. . . . Futurists tend to expect an unrealistic degree of autarky, or independence, within future technological and social systems. The cells in our bodies are largely-autonomous devices and manufacturing plants, producing most of what they need internally. . . . Small tribes themselves were quite autonomous. . . . Most people are not very aware of, and so have not fully come to terms with their new inter-dependence. For example, people are surprisingly willing to restrict trade between nations, not realizing how much their wealth depends on such trade. . . . Futurists commonly neglect this interdependence . . . they picture their future political and economic unit to be the largely self-sufficient small tribe of our evolutionary heritage. . . . \[Here are\] some examples. . . .\n>\n> \\[Many\\] imagine space economies almost entirely self-sufficient in mass and energy. . . . It would be easier to create self-sufficient colonies under the sea, or in Antarctica, yet there seems to be little prospect of or interest in doing so anytime soon. . . .\n>\n> Eric Drexler . . . imagines manufacturing plants that are far more independent than in our familiar economy. . . . To achieve this we need not just . . . 
control of matter at the atomic level, but also the *complete* automation of the manufacturing process, all embodied in a single device . . . complete with quality control, waste management, and error recovery. This requires \"artificial intelligence\" far more advanced than we presently possess. . . .\n>\n> Knowledge is \\[now\\] embodied in human-created software and hardware, and in human workers trained for specific tasks. . . . It has usually been cheaper to leave the CPU and communication intensive tasks to machines, and leave the tasks requiring general knowledge to people.\n>\n> Turing-test artificial intelligence instead imagines a future with many large human-created software modules . . . far more independent, i.e., less dependent on context, than existing human-created software. . . .\n>\n> \\[Today\\] innovations and advances in each part of the world \\[depends\\] on advances made in all other parts of the world. . . . Visions of a local singularity, in contrast, imagine that sudden technological advances in one small group essentially allow that group to suddenly grow big enough to take over everything. . . . The key common assumption is that of a very powerful but autonomous area of technology. Overall progress in that area must depend only on advances in this area, advances that a small group of researchers can continue to produce at will. And great progress in this area alone must be sufficient to let a small group essentially take over the world. . . .\n>\n> \\[Crypto credential\\] dreams imagine that many of our relationships will be exclusively digital, and that we can keep these relations independent by separating our identity into relationship-specific identities. . . . It is hard to imagine potential employers not asking to know more about you, however. . . . Any small information leak can be enough to allow others to connect your different identities. . . .\n>\n> \\[Consider also\\] complaints about the great specialization in modern academic and intellectual life. People complain that ordinary folks should know more science, so they can judge simple science arguments for themselves. . . . Many want policy debates to focus on intrinsic merits, rather than on appeals to authority. Many people wish students would study a wider range of subjects, and so be better able to see the big picture. And they wish researchers weren't so penalized for working between disciplines, or for failing to cite every last paper someone might think is related somehow.\n>\n> It seems to me plausible to attribute all of these dreams of autarky to people not yet coming fully to terms with our newly heightened interdependence. . . . We picture our ideal political unit and future home to be the largely self-sufficient small tribe of our evolutionary heritage. . . . I suspect that future software, manufacturing plants, and colonies will typically be much more dependent on everyone else than dreams of autonomy imagine. Yes, small isolated entities are getting more capable, but so are small non-isolated entities, and the latter remain far more capable than the former. The riches that come from a worldwide division of labor have rightly seduced us away from many of our dreams of autarky. We may fantasize about dropping out of the rat race and living a life of ease on some tropical island. But very few of us ever do.\n>\n> So academic specialists may dominate intellectual progress, and world culture may continue to overwhelm local variations. 
Private law and crypto-credentials may remain as marginalized as utopian communities have always been. Manufacturing plants may slowly get more efficient, precise, and automated without a sudden genie nanotech revolution. Nearby space may stay uncolonized until we can cheaply send lots of mass up there, while distant stars may remain uncolonized for a long long time. And software may slowly get smarter, and be collectively much smarter than people long before anyone bothers to make a single module that can pass a Turing test.\n\nThe relevance to my discussion with Eliezer should be obvious. My next post will speak more directly.\n\n[]{#AI-FOOM-Debatech27.html#likesection.38}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/dreams-of-autar.html#comment-518248861): We generally specialize when it comes to bugs in computer programs---rather than monitoring their behavior and fixing them ourselves, we inform the central development authority for that program of the problem and rely on them to fix it everywhere.\n>\n> The benefit from automation depends on the amount of human labor already in the process, à la the bee-sting principle of poverty. Automating one operation while many others are still human-controlled is a marginal improvement, because you can't run at full speed or fire your human resources department until you've gotten rid of all the humans.\n>\n> The incentive for automation depends on the number of operations being performed. If you're doing something a trillion times over, it has to be automatic. We pay whatever energy cost is required to make transistor operations on chips fully reliable, because it would be impossible to have a chip if each transistor required human monitoring. DNA sequencing is increasingly automated as we try to do more and more of it.\n>\n> With nanotechnology it is more *possible* to automate because you are designing all the machine elements of the system on a finer grain, closer to the level of physical law where interactions are perfectly regular, and more importantly, closing the system: no humans wandering around on your manufacturing floor.\n>\n> And the *incentive* to automate is tremendous because of the gigantic number of operations you want to perform, and the higher levels of organization you want to build on top---it is akin to the incentive to automate the internal workings of a computer chip.\n>\n> Now with all that said, I find it extremely plausible that, as with DNA sequencing, we will only see an increasing degree of automation over time, rather than a sudden *fully* automated system appearing *ab initio*. The operators will be there, but they'll handle larger and larger systems, and finally, in at least some cases, they'll disappear. Not assembly line workers, sysadmins. Bugs will continue to be found but their handling will be centralized and one-off rather than local and continuous. The system will behave more like the inside of a computer chip than the inside of a factory.\n>\n> ---Such would be my guess, not to materialize instantly but as a trend over time.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/dreams-of-autar.html#comment-518248897): Eliezer, yes, the degree of automation will probably increase incrementally. 
As I explore somewhat [here](http://hanson.gmu.edu/nanoecon.pdf),^[2](#AI-FOOM-Debatech27.html#enz.34)^[]{#AI-FOOM-Debatech27.html#enz.34.backref} there is also the related issue of the degree of local production, vs. importing inputs made elsewhere. A high degree of automation need not induce a high degree of local production. Perhaps each different group specializes in automating certain aspects of production, and they coordinate by sending physical inputs to each other.\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/dreams-of-autar.html#comment-518248923): Robin, numerous informational tasks can be performed far more quickly by special-purpose hardware, arguably analogous to more efficient special-purpose molecular manufacturers. The cost of shipping information is incredibly cheap. Yet the typical computer contains a CPU and a GPU and does not farm out hard computational tasks to distant specialized processors. Even when we do farm out some tasks, mostly for reason of centralizing information rather than computational difficulty, the tasks are still given to large systems of conventional CPUs. Even supercomputers are mostly made of conventional CPUs.\n>\n> This proves nothing, of course; but it is worth observing of the computational economy, in case you have some point that differentiates it from the nanotech economy. Are you sure you're not being prejudiced by the sheer *traditionalness* of moving physical inputs around through specialized processors?\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/dreams-of-autar.html#comment-518248975): Eliezer, both computing and manufacturing are old enough now to be \"traditional\"; I expect each mode of operation is reasonably well adapted to current circumstances. Yes, future circumstances will change, but do we really know in which direction? Manufacturing systems may well also now ship material over distances \"for reason of centralizing information.\"\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/11/dreams-of-autar.html) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech27.html#enz.33} [1](#AI-FOOM-Debatech27.html#enz.33.backref). []{#AI-FOOM-Debatech27.html#cite.0.Hanson.1999}Robin Hanson, \"Dreams of Autarky\" (Unpublished manuscript, September 1999), last revised September 2001, .\n\n[]{#AI-FOOM-Debatech27.html#enz.34} [2](#AI-FOOM-Debatech27.html#enz.34.backref). []{#AI-FOOM-Debatech27.html#cite.0.Hanson.2007a}Robin Hanson, \"Five Nanotech Social Scenarios,\" in *Nanotechnology: Societal Implications---Individual Perspectives*, ed. Mihail C. Roco and William Sims Bainbridge (Dordrecht, The Netherlands: Springer, 2007), 109--113.\n\n[]{#AI-FOOM-Debatech28.html}\n\n## []{#AI-FOOM-Debatech28.html#x32-}[Chapter 27]{.titlemark} Total Tech Wars {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [29 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nEliezer [Thursday](../Text/AI-FOOM-Debatech26.html#x30-):\n\n> Suppose . . . the first state to develop working researchers-on-a-chip, only has a *one-day* lead time. . . . If there's already full-scale nanotechnology around when this happens . . . in an hour . . . the ems may be able to upgrade themselves to a hundred thousand times human speed, . . . and in another hour . . . 
get the factor up to a million times human speed, and start working on intelligence enhancement. . . . One could, of course, voluntarily publish the improved-upload protocols to the world and give everyone else a chance to join in. But you'd have to trust that not a single one of your partners were holding back a trick that lets them run uploads at ten times your own maximum speed.\n\nCarl Shulman [Saturday](../Text/AI-FOOM-Debatech16.html#x20-) and [Monday](../Text/AI-FOOM-Debatech20.html#x24-):\n\n> I very much doubt that any U.S. or Chinese President who understood the issues would fail to nationalize a for-profit firm under those circumstances. . . . It's also how a bunch of social democrats, or libertarians, or utilitarians, might run a project, knowing that a very likely alternative is the crack of a future dawn and burning the cosmic commons, with a lot of inequality in access to the future, and perhaps worse. Any state with a lead on bot development that can ensure the bot population is made up of nationalists or ideologues (who could monitor each other) could disarm the world's dictatorships, solve collective action problems. . . . \\[For\\] biological humans \\[to\\] retain their wealth as capital holders in his scenario, ems must be obedient and controllable enough. . . . But if such control is feasible, then a controlled em population being used to aggressively create a global singleton is also feasible.\n\n***Every* new technology brings social disruption.** While new techs (broadly conceived) tend to increase the total pie, some folks gain more than others, and some even lose overall. The tech's inventors may gain intellectual property, it may fit better with some forms of capital than others, and those who first foresee its implications may profit from compatible investments. So any new tech can be framed as a conflict, between opponents in a race or war.\n\n***Every* conflict can be framed as a total war.** If you believe the other side is totally committed to total victory, that surrender is unacceptable, and that all interactions are zero-sum, you may conclude your side must never cooperate with them, nor tolerate much internal dissent or luxury. All resources must be devoted to growing more resources and to fighting them in every possible way.\n\nA total war is a self-fulfilling prophecy; a total war exists exactly when any substantial group believes it exists. And total wars need not be \"hot.\" Sometimes your best war strategy is to grow internally, or wait for other forces to wear opponents down, and only at the end convert your resources into military power for a final blow.\n\nThese two views can be combined in ***total tech wars***. The pursuit of some particular tech can be framed as a crucial battle in our war with them; we must not share any of this tech with them, nor tolerate much internal conflict about how to proceed. We must race to get the tech first and retain dominance.\n\nTech transitions produce variance in who wins more. If you are ahead in a conflict, added variance reduces your chance of winning, but if you are behind, variance increases your chances. So the prospect of a tech transition gives hope to underdogs, and fear to overdogs. 
The bigger the tech, the bigger the hopes and fears.\n\nIn 1994 [I said](http://hanson.gmu.edu/uploads.html) that, while our future vision usually fades into a vast fog of possibilities, brain emulation \"excites me because it seems an exception to this general rule---more like a crack of dawn than a fog, like a sharp transition with sharp implications regardless of the night that went before.\"^[1](#AI-FOOM-Debatech28.html#enz.35)^[]{#AI-FOOM-Debatech28.html#enz.35.backref} In fact, [brain emulation](../Text/AI-FOOM-Debatech16.html#x20-) is the largest tech [disruption I can foresee](../Text/AI-FOOM-Debatech22.html#x26-) (as more likely than not to occur). So yes, one might frame brain emulation as a total tech war, bringing hope to some and fear to others.\n\nAnd yes, the size of that disruption is uncertain. For example, an em transition could go relatively smoothly if scanning and cell modeling techs were good enough well before computers were cheap enough. In this case em workers would gradually displace human workers as computer costs fell. If, however, one group suddenly had the last key modeling breakthrough when em computer costs were far below human wages, that group could gain enormous wealth, to use as they saw fit.\n\nYes, if such a winning group saw itself in a total war, it might refuse to cooperate with others and devote itself to translating its breakthrough into an overwhelming military advantage. And yes, if you had enough reason to think powerful others saw this as a total tech war, you might be forced to treat it that way yourself.\n\nTech transitions that create whole new populations of beings can also be framed as total wars between the new beings and everyone else. If you framed a new-being tech this way, you might want to prevent or delay its arrival, or try to make the new beings \"friendly\" slaves with no inclination or ability to war.\n\nBut note: this em tech has no intrinsic connection to a total war other than that it is a big transition whereby some could win big! Unless you claim that all big techs produce total wars, you need to say why this one is different.\n\nYes, you can frame big techs as total tech wars, but surely **it is far better that tech transitions *not be framed as total wars***. The vast majority of conflicts in our society take place within systems of peace and property, where local winners only rarely hurt others much by spending their gains. It would be far better if new em tech firms sought profits for their shareholders, and allowed themselves to become interdependent because they expected other firms to act similarly.\n\nYes, we must be open to evidence that other powerful groups will treat new techs as total wars. But **we must avoid *creating* a total war by sloppy discussion of it as a possibility**. We should not take others' discussions of this possibility as strong evidence that they will treat a tech as total war, nor should we discuss a tech in ways that others could reasonably take as strong evidence we will treat it as total war. Please, \"give peace a chance.\"\n\nFinally, note our many biases to overtreat techs as wars. There is a vast graveyard of wasteful government projects created on the rationale that a certain region must win a certain tech race/war. Not only do governments do a lousy job of guessing which races they could win, they also overestimate both first mover advantages and the disadvantages when others dominate a tech. 
Furthermore, as I posted [Wednesday](../Text/AI-FOOM-Debatech24.html#x28-):\n\n> We seem primed to confidently see history as an inevitable march toward a theory-predicted global conflict with an alien united *them* determined to oppose our core symbolic values, making infeasible overly risky overconfident plans to oppose them.\n\n[]{#AI-FOOM-Debatech28.html#likesection.39}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/total-tech-wars.html#comment-518248913): I generally refer to this scenario as \"winner take all\" and had planned a future post with that title.\n>\n> I'd never have dreamed of calling it a \"total tech war\" because that sounds much too combative, a phrase that might spark violence even in the near term. It also doesn't sound accurate, because a winner-take-all scenario doesn't imply destructive combat or any sort of military conflict.\n>\n> I moreover defy you to look over my writings and find any case where I ever used a phrase as inflammatory as \"total tech war.\"\n>\n> I think that, in this conversation and in the debate as you have just now framed it, \"*Tu quoque!*\" is actually justified here.\n>\n> Anyway---as best as I can tell, the *natural* landscape of these technologies, *which introduces disruptions much larger than farming or the Internet*, is without special effort winner-take-all. It's not a question of ending up in that scenario by making special errors. We're just there. Getting out of it would imply special difficulty, not getting into it, and I'm not sure that's possible---such would be the stance I would try to support.\n>\n> Also:\n>\n> If you try to look at it from my perspective, then you can see that I've gone to *tremendous* lengths to defuse both the reality and the appearance of conflict between altruistic humans over which AI should be built. \"Coherent Extrapolated Volition\" is extremely meta; if all *competent and altruistic* Friendly AI projects think this meta, they are far more likely to find themselves able to cooperate than if one project says \"Libertarianism!\" and another says \"Social democracy!\"\n>\n> On the other hand, the AGI projects run by the [meddling dabblers](http://lesswrong.com/lw/uc/aboveaverage_ai_scientists/) *do* just say \"Libertarianism!\" or \"Social democracy!\" or whatever strikes their founder's fancy. And so far as I can tell, as a *matter of simple fact*, an AI project run at that level of competence will destroy the world. (It wouldn't be a good idea even if it worked as intended, but that's a separate issue.)\n>\n> As a matter of simple decision theory, it seems to me that an unFriendly AI which has just acquired a decisive first-mover advantage is faced with the following payoff matrix:\n>\n> ::: {.tabular}\n> Share Tech, Trade → 10 utilons\n> Take Over Universe → 1,000 utilons\n> :::\n>\n> As a matter of simple decision theory, I expect an unFriendly AI to take the second option.\n>\n> Do you agree that *if* an unFriendly AI gets nanotech and no one else has nanotech, it will take over the world rather than trade with it?\n>\n> Or is this statement something that is true but forbidden to speak?\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/11/total-tech-wars.html#comment-518248924): We could be in any of the three following domains:\n>\n> 1. 
[The tech landscape is naturally smooth enough that, even if participants don't share technology, there is no winner take all.]{#AI-FOOM-Debatech28.html#x32-31002x1}\n> 2. [The tech landscape is somewhat steep. If participants don't share technology, one participant will pull ahead and dominate all others via compound interest. If they share technology, the foremost participant will only control a small fraction of the progress and will not be able to dominate all other participants.]{#AI-FOOM-Debatech28.html#x32-31004x2}\n> 3. [The tech landscape contains upward cliffs, and/or progress is naturally hard to share. Even if participants make efforts to trade progress up to time T, one participant will, after making an additional discovery at time T + 1, be faced with at least the *option* of taking over the world. Or it is plausible for a single participant to withdraw from the trade compact, and either (a) accumulate private advantages while monitoring open progress or (b) do its own research, and still take over the world.]{#AI-FOOM-Debatech28.html#x32-31006x3}\n>\n> (Two) is the only regime where you can have self-fulfilling prophecies. I think nanotech is probably in (2) but contend that AI lies naturally in (3).\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/11/total-tech-wars.html#comment-518249064): Eliezer, if everything is at stake then \"winner take all\" is \"total war\"; it doesn't really matter if they shoot you or just starve you to death. The whole point of this post is to note that anything can be seen as \"winner-take-all\" just by expecting others to see it that way. So if you want to say that a particular tech is *more* winner-take-all than usual, you need an argument based on more than just this effect. And if you want to argue it is *far* more so than any other tech humans have ever seen, you need a damn good additional argument. It is possible that you could make such an argument work based on the \"tech landscape\" considerations you mention, but I haven't seen that yet. So consider this post to be yet another reminder that I await hearing your core argument; until then I set the stage with posts like this.\n>\n> To answer your direct questions, I am not suggesting forbidding speaking of anything, and if \"unfriendly AI\" is *defined* as an AI who sees itself in a total war, then sure, it would take a total war strategy of fighting not trading. But you haven't actually defined \"unfriendly\" yet. . . .\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/11/total-tech-wars.html) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech28.html#enz.35} [1](#AI-FOOM-Debatech28.html#enz.35.backref). 
Hanson, [\"If Uploads Come First](../Text/AI-FOOM-Debatech20.html#cite.0.Hanson.1994).\"\n\n[]{#AI-FOOM-Debatech29.html}\n\n## []{#AI-FOOM-Debatech29.html#x33-}[Chapter 28]{.titlemark} Singletons Rule OK {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [30 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n**Reply to:** [Total Tech Wars](../Text/AI-FOOM-Debatech28.html#x32-)How *does* one end up with a persistent disagreement between two rationalist-wannabes who are both aware of Aumann's Agreement Theorem and its implications?\n\nSuch a case is likely to turn around two axes: object-level incredulity (\"no matter *what* AAT says, proposition X can't *really* be true\") and meta-level distrust (\"they're trying to be rational despite their emotional commitment, but are they really capable of that?\").\n\nSo far, Robin and I have focused on the object level in trying to hash out our disagreement. Technically, I can't speak for Robin; but at least in my *own* case, I've acted thus because I anticipate that a meta-level argument about trustworthiness wouldn't lead anywhere interesting. Behind the scenes, I'm doing what I can to make sure my brain is actually capable of updating, and presumably Robin is doing the same.\n\n(The linchpin of my own current effort in this area is to tell myself that I ought to be learning something while having this conversation, and that I shouldn't miss any scrap of original thought in it---the [Incremental Update](http://lesswrong.com/lw/ij/update_yourself_incrementally/) technique. Because I can genuinely believe that a conversation like this should produce new thoughts, I can turn that feeling into genuine attentiveness.)\n\nYesterday, Robin [inveighed](../Text/AI-FOOM-Debatech28.html#x32-) hard against what he called \"total tech wars,\" and what I call \"winner-take-all\" scenarios:\n\n> If you believe the other side is totally committed to total victory, that surrender is unacceptable, and that all interactions are zero-sum, you may conclude your side must never cooperate with them, nor tolerate much internal dissent or luxury.\n\nRobin and I both have emotional commitments and we both acknowledge the danger of that. There's [nothing irrational about feeling](http://lesswrong.com/lw/hp/feeling_rational/), *per se*; only *failure to update* is blameworthy. 
But Robin seems to be *very* strongly against winner-take-all technological scenarios, and I don't understand why.\n\nAmong other things, I would like to ask if Robin has a [Line of Retreat](http://lesswrong.com/lw/o4/leave_a_line_of_retreat/) set up here---if, regardless of how he estimates the *probabilities*, he can *visualize what he would do* if a winner-take-all scenario were true.\n\nYesterday Robin [wrote](../Text/AI-FOOM-Debatech28.html#x32-):\n\n> Eliezer, if everything is at stake then \"winner take all\" is \"total war\"; it doesn't really matter if they shoot you or just starve you to death.\n\nWe both have our emotional commitments, but I don't quite understand this reaction.\n\nFirst, to me it's obvious that a \"winner-take-all\" *technology* should be defined as one in which, *ceteris paribus*, a local entity tends to end up with the *option* of becoming one kind of [Bostromian singleton](http://www.nickbostrom.com/fut/singleton.html)---the decision maker of a global order in which there is a single decision-making entity at the highest level.^[1](#AI-FOOM-Debatech29.html#enz.36)^[]{#AI-FOOM-Debatech29.html#enz.36.backref} (A superintelligence with unshared nanotech would count as a singleton; a federated world government with its own military would be a different kind of singleton; or you can imagine something like a galactic operating system with a root account controllable by 80% majority vote of the populace, *et cetera*.)\n\nThe winner-take-all *option* is created by properties of the technology landscape, which is not a moral stance. Nothing is said about an agent with that *option actually* becoming a singleton. Nor about *using* that power to shoot people, or reuse their atoms for something else, or grab all resources and let them starve (though \"all resources\" should include their atoms anyway).\n\nNothing is yet said about various patches that could try to avert a *technological* scenario that contains upward cliffs of progress---e.g., binding agreements enforced by source code examination or continuous monitoring---in advance of the event. (Or if you think that rational agents [cooperate on the Prisoner's Dilemma](http://lesswrong.com/lw/to/the_truly_iterated_prisoners_dilemma/), so much work might not be required to coordinate.)\n\nSuperintelligent agents *not* in a humanish [moral reference frame](http://lesswrong.com/lw/sx/inseparably_right_or_joy_in_the_merely_good/)---AIs that are just maximizing paperclips or [sorting pebbles](http://lesswrong.com/lw/sy/sorting_pebbles_into_correct_heaps/)---who happen on the option of becoming a Bostromian Singleton, and who have *not* previously executed any somehow-binding treaty, will *ceteris paribus* choose to grab all resources in service of their utility function, including the atoms now comprising humanity. I don't see how you could reasonably deny this! It's a straightforward decision-theoretic choice between payoff 10 and payoff 1,000!\n\nBut conversely, there are [possible agents in mind-design space](http://lesswrong.com/lw/rm/the_design_space_of_mindsingeneral/) who, given the *option* of becoming a singleton, will *not* kill you, starve you, reprogram you, tell you how to live your life, or even meddle in your destiny unseen. 
See [Bostrom's (short) paper](http://www.nickbostrom.com/fut/singleton.html) on the possibility of good and bad singletons of various types.^[2](#AI-FOOM-Debatech29.html#enz.37)^[]{#AI-FOOM-Debatech29.html#enz.37.backref}\n\nIf Robin thinks it's *impossible* to have a Friendly AI or maybe even any sort of benevolent superintelligence at all, even the descendants of human uploads---if Robin is assuming that superintelligent agents *will* act according to roughly selfish motives, and that *only* economies of trade are necessary and sufficient to prevent holocaust---then Robin may have no [Line of Retreat](http://lesswrong.com/lw/o4/leave_a_line_of_retreat/) open as I try to argue that AI has an upward cliff built in.\n\nAnd in this case, it might be time well spent to first address the question of whether Friendly AI is a reasonable thing to try to accomplish, so as to create that line of retreat. Robin and I are both trying hard to be rational despite emotional commitments; but there's no particular reason to *needlessly* place oneself in the position of trying to persuade, or trying to accept, that everything of value in the universe is certainly doomed.\n\nFor me, it's particularly hard to understand Robin's position in this, because for me the *non*-singleton future is the one that is obviously abhorrent.\n\nIf you have lots of entities with root permissions on matter, any of whom has the physical capability to attack any other, then you have entities spending huge amounts of precious negentropy on defense and deterrence. If there's no centralized system of property rights in place for selling off the universe to the highest bidder, then you have a race to [burn the cosmic commons](http://hanson.gmu.edu/filluniv.pdf),^[3](#AI-FOOM-Debatech29.html#enz.38)^[]{#AI-FOOM-Debatech29.html#enz.38.backref} and the degeneration of the vast majority of all agents into [rapacious hardscrapple frontier](http://hanson.gmu.edu/hardscra.pdf) replicators.^[4](#AI-FOOM-Debatech29.html#enz.39)^[]{#AI-FOOM-Debatech29.html#enz.39.backref}\n\nTo me this is a vision of *futility*---one in which a future light cone that *could* have been full of happy, safe agents having complex fun is mostly wasted by agents trying to seize resources and defend them so they can send out seeds to seize more resources.\n\nAnd it should also be mentioned that any future in which slavery or child abuse is *successfully* prohibited is a world that has *some* way of preventing agents from doing certain things with their computing power. There are vastly worse possibilities than slavery or child abuse opened up by future technologies, which I flinch from referring to even as much as I did in the previous sentence. There are things I don't want to happen to *anyone*---including a population of a septillion captive minds running on a star-powered matrioshka brain that is owned, and *defended* against all rescuers, by the mind-descendant of Lawrence Bittaker (serial killer, a.k.a. \"Pliers\"). I want to *win* against the horrors that exist in this world and the horrors that could exist in tomorrow's world---to have them never happen ever again, or, for the *really* awful stuff, never happen in the first place. And that victory requires the Future to have certain *global* properties.\n\nBut there are other ways to get singletons besides falling up a technological cliff. 
So that would be my Line of Retreat: If minds can't self-improve quickly enough to take over, then try for the path of uploads setting up a centralized Constitutional operating system with a root account controlled by majority vote, or something like that, to prevent their descendants from *having* to burn the cosmic commons.\n\nSo for me, *any satisfactory outcome* seems to necessarily involve, if not a singleton, the existence of certain stable *global* properties upon the future---sufficient to *prevent* burning the cosmic commons, *prevent* life's degeneration into rapacious hardscrapple frontier replication, and *prevent* supersadists torturing septillions of helpless dolls in private, obscure star systems.\n\nRobin has written about burning the cosmic commons and rapacious hardscrapple frontier existences. This doesn't imply that Robin approves of these outcomes. But Robin's strong rejection even of winner-take-all *language* and *concepts* seems to suggest that our emotional commitments are something like 180 degrees opposed. Robin seems to feel the same way about singletons as I feel about their absence.\n\nBut *why*? I don't think our real values are that strongly opposed---though we may have verbally described and attention-prioritized those values in different ways.\n\n[]{#AI-FOOM-Debatech29.html#likesection.40}\n\n------------------------------------------------------------------------\n\n> [James Miller](http://lesswrong.com/lw/wc/singletons_rule_ok/p9e): You and Robin seem to be focused on different time periods. Robin is claiming that after ems are created one group probably won't get a dominant position. You are saying that post-intelligence-explosion (or at least post one day before the intelligence explosion) there will be either one dominant group or a high likelihood of total war. You are not in conflict if there is a large time gap between when we first have ems and when there is an intelligence explosion.\n>\n> I wrote in this post that such a gap is likely: [Billion Dollar Bots](../Text/AI-FOOM-Debatech18.html#x22-).\n\n> [Robin Hanson](http://lesswrong.com/lw/wc/singletons_rule_ok/p9w): Eliezer, sometimes in a conversation one needs a rapid back and forth, often to clarify what exactly people mean by things they say. In such a situation a format like the one we are using, long daily blog posts, can work particularly badly. In my last post I was trying in part to get you to become clearer about what you meant by what you now call a \"winner-take-all\" tech, especially to place it on a continuum with other familiar techs. (And once we are clear on what it means, then I want arguments suggesting that an AI transition would be such a thing.) I suggested talking about outcome variance induced by a transition. If you now want to use that phrase to denote \"a local entity tends to end up with the option of becoming one kind of Bostromian singleton,\" then we need new terms to refer to the \"properties of the technology landscape\" that might lead to such an option.\n>\n> I am certainly not assuming it is impossible to be \"friendly\" though I can't be sure without knowing better what that means. I agree that it is not obvious that we would not want a singleton, if we could choose the sort we wanted. But I am, as you note, quite wary of the sort of total war that might be required to create a singleton. But before we can choose among options we need to get clearer on what the options are. . . 
.\n\n> [Robin Hanson](http://lesswrong.com/lw/wc/singletons_rule_ok/pa1): Oh, to answer Eliezer's direct question directly, if I know that I am in a total war, I fight. I fight to make myself, or if that is impossible those who most share my values, win.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/wc/singletons_rule_ok/pa9):\n>\n> > Sometimes in a conversation one needs a rapid back and forth . . .\n>\n> Yeah, unfortunately I'm sort of in the middle of resetting my sleep cycle at the moment so I'm out of sync with you for purposes of conducting rapid-fire comments. Should be fixed in a few days. . . .\n>\n> There are clear differences of worldview clashing here, which have nothing to do with the speed of an AI takeoff per se, but rather have something to do with what kind of technological progress parameters imply what sort of consequences. I was talking about large localized jumps in capability; you made a leap to total war. I can guess at some of your beliefs behind this but it would only be a guess. . . .\n>\n> > Oh, to answer Eliezer's direct question directly, if I know that I am in a total war, I fight. I fight to make myself, or if that is impossible those who most share my values, win.\n>\n> That's not much of a Line of Retreat. It would be like my saying, \"Well, if a hard takeoff is impossible, I guess I'll try to make sure we have as much fun as we can in our short lives.\" If I *actually* believed an AI hard takeoff were impossible, I wouldn't pass directly to the worst-case scenario and give up on all other hopes. I would pursue the path of human intelligence enhancement, or uploading, or nontakeoff AI, and promote cryonics more heavily.\n>\n> If you *actually* came to believe in large localized capability jumps, I do *not* think you would say, \"Oh, well, guess I'm inevitably in a total war, now I need to fight a zero-sum game and damage all who are not my allies as much as possible.\" I think you would say, \"Okay, so, how do we *avoid* a total war in this kind of situation?\" If you can work out in advance what you would do then, *that's* your line of retreat.\n>\n> I'm sorry for this metaphor, but it just seems like a very useful and standard one if one can strip away the connotations: suppose I asked a theist to set up a Line of Retreat if there is no God, and they replied, \"Then I'll just go through my existence trying to ignore the gaping existential void in my heart.\" That's not a line of retreat---that's a reinvocation of the same forces holding the original belief in place. I have the same problem with my asking, \"Can you set up a line of retreat for yourself if there is a large localized capability jump?\" and your replying, \"Then I guess I would do my best to win the total war.\"\n>\n> If you can make the implication *explicit*, and really look for loopholes, and fail to find them, then there is no line of retreat; but to me, at least, it looks like a line of retreat really should exist here.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/wc/singletons_rule_ok/paa): PS: As the above was a long comment and Robin's time is limited: if he does not reply to every line, no one should take that as evidence that no good reply exists. We also don't want to create a motive for people to try to win conversations by exhaustion.\n>\n> Still, I'd like to hear a better line of retreat, even if it's one line like, I don't know, \"Then I'd advocate regulations to slow down AI in favor of human enhancement\" or something. 
Not that I'm saying this is a good idea, just something, anything, to break the link between AI hard takeoff and total moral catastrophe.\n\n> [Robin Hanson](http://lesswrong.com/lw/wc/singletons_rule_ok/pab): Eliezer, I'm very sorry if my language offends. If you tell the world you are building an AI and plan that post-foom it will take over the world, well, then that sounds to me like a declaration of total war on the rest of the world. Now you might reasonably seek as large a coalition as possible to join you in your effort, and you might plan for the AI to not prefer you or your coalition in the acts it chooses. And you might reasonably see your hand as forced because other AI projects exist that would take over the world if you do not. But still, that take over the world step sure sounds like total war to me.\n>\n> Oh, and on your \"line of retreat,\" I might well join your coalition, given these assumptions. I tried to be clear about that in my [Stuck In Throat](../Text/AI-FOOM-Debatech30.html#x34-) post as well.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/wc/singletons_rule_ok/pac): If you're fighting a total war, then at some point, somewhere along the line, you should *at least stab someone in the throat*. If you don't do even that much, it's very hard for me to see it as a total war.\n>\n> You described a total war as follows:\n>\n> > If you believe the other side is totally committed to total victory, that surrender is unacceptable, and that all interactions are zero-sum, you may conclude your side must never cooperate with them, nor tolerate much internal dissent or luxury. All resources must be devoted to growing more resources and to fighting them in every possible way.\n>\n> How is writing my computer program declaring \"total war\" on the world? Do I believe that \"the world\" is totally committed to total victory over me? Do I believe that surrender to \"the world\" is unacceptable---well, yes, I do. Do I believe that all interactions with \"the world\" are zero-sum? *Hell* no. Do I believe that I should never cooperate with \"the world\"? I do that every time I shop at a supermarket. Not tolerate internal dissent or luxury---both internal dissent and luxury sound good to me, I'll take both. All resources must be devoted to growing more resources and to fighting \"the world\" in every possible way? Mm . . . nah.\n>\n> So you thus described a total war, and inveighed against it.\n>\n> But then you applied the same term to the Friendly AI project, which has yet to stab a single person in the throat; and this, sir, I do not think is a fair description.\n>\n> It is not a matter of indelicate language to be dealt with by substituting an appropriate euphemism. If I am to treat your words as consistently defined, then they are not, in this case, true.\n\n> [Robin Hanson](http://lesswrong.com/lw/wc/singletons_rule_ok/pad): Eliezer, I'm not very interested in arguing about which English words best describe the situation under consideration, at least if we are still unclear on the situation itself. Such words are just never that precise. Would you call a human stepping on an ant \"total war,\" even if he wasn't trying very hard? From an aware ant's point of view it might seem total war, but perhaps you wouldn't say so if the human wasn't trying hard. But the key point is that the human could be in for a world of hurt if he displayed an intention to squash the ant and greatly underestimated the ant's ability to respond. 
So in a world where new AIs cannot in fact easily take over the world, AI projects that say they plan to have their AI take over the world could induce serious and harmful conflict.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/wc/singletons_rule_ok/) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech29.html#enz.36} [1](#AI-FOOM-Debatech29.html#enz.36.backref). []{#AI-FOOM-Debatech29.html#cite.0.Bostrom.2006}Nick Bostrom, \"What is a Singleton?,\" *Linguistic and Philosophical Investigations* 5, no. 2 (2006): 48--54.\n\n[]{#AI-FOOM-Debatech29.html#enz.37} [2](#AI-FOOM-Debatech29.html#enz.37.backref). [Ibid.](#AI-FOOM-Debatech29.html#cite.0.Bostrom.2006)\n\n[]{#AI-FOOM-Debatech29.html#enz.38} [3](#AI-FOOM-Debatech29.html#enz.38.backref). []{#AI-FOOM-Debatech29.html#cite.0.Hanson.1998}Robin Hanson, \"Burning the Cosmic Commons: Evolutionary Strategies for Interstellar Colonization\" (Unpublished manuscript, July 1, 1998), accessed April 26, 2012, .\n\n[]{#AI-FOOM-Debatech29.html#enz.39} [4](#AI-FOOM-Debatech29.html#enz.39.backref). []{#AI-FOOM-Debatech29.html#cite.0.Hanson.2008e}Robin Hanson, \"The Rapacious Hardscrapple Frontier,\" in *Year Million: Science at the Far Edge of Knowledge*, ed. Damien Broderick (New York: Atlas, 2008), 168--189, .\n\n[]{#AI-FOOM-Debatech30.html}\n\n## []{#AI-FOOM-Debatech30.html#x34-}[Chapter 29]{.titlemark} Stuck In Throat {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [30 November 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nLet me try again to summarize Eliezer's position, as I understand it, and what about it seems hard to swallow. I take Eliezer as [saying](../Text/AI-FOOM-Debatech29.html#x33-):\n\n> Sometime in the next few decades a human-level AI will probably be made by having a stupid AI make itself smarter. Such a process starts very slow and quiet, but eventually \"fooms\" very fast and then loud. It is likely to go from much stupider to much smarter than humans in less than a week. While stupid, it can be rather invisible to the world. Once smart, it can suddenly and without warning take over the world.\n>\n> The reason an AI can foom so much faster than its society is that an AI can change its basic mental architecture, and humans can't. How long any one AI takes to do this depends crucially on its initial architecture. Current architectures are so bad that an AI starting with them would take an eternity to foom. Success will come from hard math-like (and Bayes-net-like) thinking that produces deep insights giving much better architectures.\n>\n> A much smarter than human AI is basically impossible to contain or control; if it wants to it *will* take over the world, and then it *will* achieve whatever ends it has. One should have little confidence that one knows what those ends are from its behavior as a much less than human AI (e.g., as part of some evolutionary competition). Unless you have carefully proven that it wants what you think it wants, you have no idea what it wants.\n>\n> In such a situation, if one cannot prevent AI attempts by all others, then the only reasonable strategy is to try to be the first with a \"friendly\" AI, i.e., one where you really do know what it wants, and where what it wants is something carefully chosen to be as reasonable as possible.\n\nI *don't* disagree with this last paragraph. 
But I do have trouble swallowing prior ones. The hardest to believe I think is that the AI will get smart so very rapidly, with a growth rate (e.g., doubling in an hour) so far out of proportion to prior growth rates, to what prior trends would suggest, and to what most other AI researchers I've talked to think. The key issues come from this timescale being so much shorter than team lead times and reaction times. This is the key point on which I await Eliezer's more detailed arguments.\n\nSince I do accept that architectures can influence growth rates, I must also have trouble believing humans could find new AI architectures anytime soon that make this much difference. Some other doubts:\n\n- Does a single \"smarts\" parameter really summarize most of the capability of diverse AIs?\n- Could an AI's creators see what it wants by slowing down its growth as it approaches human level?\n- Might faster brain emulations find it easier to track and manage an AI foom?\n\n[]{#AI-FOOM-Debatech30.html#likesection.41}\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/11/stuck-in-throat.html) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech31.html}\n\n## []{#AI-FOOM-Debatech31.html#x35-}[Chapter 30]{.titlemark} Disappointment in the Future {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [1 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n``{=html}\n\nThis seems worth posting around now . . . As I've previously observed, futuristic visions are [produced as entertainment, sold today and consumed today](http://lesswrong.com/lw/hi/futuristic_predictions_as_consumable_goods/). A TV station interviewing an economic or diplomatic pundit doesn't bother to show what that pundit predicted three years ago and how the predictions turned out. Why would they? Futurism Isn't About Prediction.\n\nBut [someone on the Longecity forum actually went and compiled a list](http://www.longecity.org/forum/topic/17025-my-disappointment-at-the-future/) of Ray Kurzweil's predictions in 1999 for the years 2000--2009.^[1](#AI-FOOM-Debatech31.html#enz.40)^[]{#AI-FOOM-Debatech31.html#enz.40.backref} We're not out of 2009 yet, but right now it's not looking good . . .\n\n- Individuals primarily use portable computers.\n- Portable computers have dramatically become lighter and thinner.\n- Personal computers are available in a wide range of sizes and shapes, and are commonly embedded in clothing and jewelry, like wrist watches, rings, earrings and other body ornaments.\n- Computers with a high-resolution visual interface range from rings and pins and credit cards up to the size of a thin book. People typically have at least a dozen computers on and around their bodies, which are networked using body LANs (local area networks).\n- These computers monitor body functions, provide automated identity to conduct financial transactions, and allow entry into secure areas. They also provide directions for navigation, and a variety of other services.\n- Most portable computers do not have keyboards.\n- Rotating memories such as hard drives, CD-ROMs, and DVDs are on their way out.\n- Most users have servers on their homes and offices where they keep large stores of digital objects, including, among other things, virtual reality environments, although these are still on an early stage.\n- Cables are disappearing.\n- The majority of text is created using continuous speech recognition, or CSR (dictation software). 
CSRs are very accurate, far more than the human transcriptionists, who were used up until a few years ago.\n- Books, magazines, and newspapers are now routinely read on displays that are the size of small books.\n- Computer displays built into eyeglasses are also used. These specialized glasses allow the users to see the normal environment while creating a virtual image that appears to hover in front of the viewer.\n- Computers routinely include moving-picture image cameras and are able to reliably identify their owners from their faces.\n- Three-dimensional chips are commonly used.\n- Students from all ages have a portable computer, very thin and soft, weighting less than one pound. They interact with their computers primarily by voice and by pointing with a device that looks like a pencil. Keyboards still exist but most textual language is created by speaking.\n- Intelligent courseware has emerged as a common means of learning; recent controversial studies have shown that students can learn basic skills such as reading and math just as readily with interactive learning software as with human teachers.\n- Schools are increasingly relying on software approaches. Many children learn to read on their own using personal computers before entering grade school.\n- Persons with disabilities are rapidly overcoming their handicaps through intelligent technology.\n- Students with reading disabilities routinely use print-to-speech reading systems.\n- Print-to-speech reading machines for the blind are now very small, inexpensive, palm-size devices that can read books.\n- Useful navigation systems have finally been developed to assist blind people in moving and avoiding obstacles. Those systems use GPS technology. The blind person communicates with his navigation system by voice.\n- Deaf persons commonly use portable speech-to-text listening machines which display a real-time transcription of what people are saying. The deaf user has the choice of either reading the transcribed speech as displayed text or watching an animated person gesturing in sign language.\n- Listening machines can also translate what is being said into another language in real time, so they are commonly used by hearing people as well.\n- There is a growing perception that the primary disabilities of blindness, deafness, and physical impairment do not necessarily \\[qualify as such\\]. Disabled persons routinely describe their disabilities as mere inconveniences.\n- In communications, telephone translation technology is commonly used. This allow you to speak in English, while your Japanese friend hears you in Japanese, and vice versa.\n- Telephones are primarily wireless and include high-resolution moving images.\n- Haptic technologies are emerging. They allow people to touch and feel objects and other persons at a distance. These force-feedback devices are wildly used in games and in training simulation systems. Interactive games routinely include all-encompassing all-visual and auditory environments.\n- The 1999 chat rooms have been replaced with virtual environments.\n- At least half of all transactions are conducted online.\n- Intelligent routes are in use, primarily for long-distance travel. 
Once your car's computer's guiding system locks on to the control sensors on one of these highways, you can sit back and relax.\n- There is a growing neo-Luddite movement.\n\nNow, just to be clear, I don't want you to look at all that and think, \"Gee, the future goes more slowly than expected---technological progress must be naturally slow.\"\n\nMore like, \"Where are you pulling all these [burdensome details](http://lesswrong.com/lw/jk/burdensome_details/) from, anyway?\"\n\nIf you looked at all that and said, \"Ha ha, how wrong; now I have my *own* amazing prediction for what the future will be like, *it won't be like that*,\" then you're really missing the whole \"you have to work a whole lot harder to produce veridical beliefs about the future, and often the info you want is simply not obtainable\" business.\n\n[]{#AI-FOOM-Debatech31.html#likesection.42}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://lesswrong.com/lw/wd/disappointment_in_the_future/pap): It might be useful to put a little check or X mark next to these items, to indicate which were right vs. wrong, so the eye could quickly scan down the list to see the overall trend. But yes, it won't look good for Kurzweil, and checking such track records is very important.\n\n> [Robin Hanson](http://lesswrong.com/lw/wd/disappointment_in_the_future/paz): In order to score forecasts, what we really want is:\n>\n> 1. [Probabilities assigned to each item]{#AI-FOOM-Debatech31.html#x35-34002x1}\n> 2. [Some other forecast of the same things to compare with]{#AI-FOOM-Debatech31.html#x35-34004x2}\n>\n> Without these we are stuck trying to guess what probability he had in mind and what probabilities others would have assigned back then to these same items.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/wd/disappointment_in_the_future/) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech31.html#enz.40} [1](#AI-FOOM-Debatech31.html#enz.40.backref). []{#AI-FOOM-Debatech31.html#cite.0.freedom.2007}forever freedom, \"My Disappointment at the Future,\" Longecity forum, July 26, 2007, accessed July 28, 2013, .\n\nQuoted with minor changes to spelling and grammar.\n\n[]{#AI-FOOM-Debatech32.html}\n\n## []{#AI-FOOM-Debatech32.html#x36-}[Chapter 31]{.titlemark} I Heart Cyc {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [1 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nEliezer [Tuesday](../Text/AI-FOOM-Debatech23.html#x27-):\n\n> . . . [Eurisko]{.textsc} may *still* be the most sophisticated self-improving AI ever built---in the 1980s, by Douglas Lenat before he started wasting his life on Cyc. . . .\n>\n> [Eurisko]{.textsc} lacked what I called \"insight\"---that is, the type of abstract knowledge that lets humans fly through the search space.\n\nI [commented](../Text/AI-FOOM-Debatech23.html#x27-):\n\n> \\[You\\] ignore that Lenat has his own theory which he gives as the *reason* he's been pursuing Cyc. 
You should at least explain why you think his theory wrong; I find his theory quite plausible.\n\nEliezer [replied only](../Text/AI-FOOM-Debatech23.html#x27-):\n\n> [Artificial Addition](http://lesswrong.com/lw/l9/artificial_addition/), [The Nature of Logic](http://lesswrong.com/lw/vt/the_nature_of_logic/), [Truly Part of You](http://lesswrong.com/lw/la/truly_part_of_you/), [Words as Mental Paintbrush Handles](http://lesswrong.com/lw/o9/words_as_mental_paintbrush_handles/), [Detached Lever Fallacy](http://lesswrong.com/lw/sp/detached_lever_fallacy/) . . .\n\nThe main relevant points from these Eliezer posts seem to be that AI researchers wasted time on messy *ad hoc* nonmonotonic logics, while elegant mathy Bayes net approaches work much better; that it is much better to know how to generate specific knowledge from general principles than to just be told lots of specific knowledge; and that our minds have lots of hidden machinery behind the words we use; words as \"detached levers\" won't work. But I doubt Lenat or the Cyc folks disagree with any of these points.\n\nThe lesson Lenat took from [eurisko]{.textsc} is that architecture is overrated; AIs learn slowly now mainly because they know so little. So we need to explicitly code knowledge by hand until we have enough to build systems effective at asking questions, reading, and learning for themselves. Prior AI researchers were too comfortable starting every project over from scratch; they needed to join to create larger integrated knowledge bases. This still seems to me a reasonable view, and anyone who thinks Lenat created the best AI system ever should consider seriously the lesson he thinks he learned.\n\nOf course the Cyc project is open to criticism on its many particular choices. People have complained about its logic-like and language-like representations, about its selection of prototypical cases to build from (e.g., encyclopedia articles), about its focus on answering over acting, about how often it rebuilds vs. maintaining legacy systems, and about being private vs. publishing everything.\n\nBut any large project like this would produce such disputes, and it is not obvious any of its choices have been seriously wrong. They had to start somewhere, and in my opinion they have now collected a knowledge base with a truly spectacular size, scope, and integration.\n\nOther architectures may well work better, but if knowing lots is anywhere near as important as Lenat thinks, I'd expect serious AI attempts to import Cyc's knowledge, translating it into a new representation. No other source has anywhere near Cyc's size, scope, and integration. But if so, how could Cyc be such a waste?\n\nArchitecture being overrated would make architecture-based fooms less plausible. Given how small a fraction of our commonsense knowledge it seems to have so far, Cyc gives little cause for optimism for human-level AI anytime soon. And as long as a system like Cyc is limited to taking no actions other than drawing conclusions and asking questions, it is hard to see it could be that dangerous, even if it knew a whole awful lot. (Influenced by an email conversation with Stephen Reed.)\n\n**Added:** Guha and Lenat [in '93](http://www.sciencedirect.com/science/article/pii/000437029390100P):\n\n> . . . The Cyc project . . . is *not* an experiment whose sole purpose is to test a hypothesis, . . . rather it is an engineering effort, aimed at constructing an artifact. . . . 
The artifact we are building is a shared information resource, which many programs can usefully draw upon. Ultimately, it may suffice to be *the* shared resource . . .\n>\n> If there is a central assumption behind Cyc, it has to do with Content being the bottleneck or chokepoint to achieving AI. I.e., you can get just so far twiddling with . . . empty AIR (Architecture, Implementation, Representation.) Sooner or later, someone has to bite the Content bullet. . . . The Implementation is just scaffolding to facilitate the accretion of that Content. . . . Our project has been driven continuously and exclusively by Content. I.e., we built and refined code only when we had to. I.e., as various assertions or behaviors weren't readily handled by the then-current implementation, those needs for additional representational expressiveness or efficiency led to changes or new features in the Cyc representation language or architecture.^[1](#AI-FOOM-Debatech32.html#enz.41)^[]{#AI-FOOM-Debatech32.html#enz.41.backref}\n\nAt the bottom of [this page](http://sw.opencyc.org/) is a little box showing random OpenCyc statements \"in its best English\"; click on any concept to see more.^[2](#AI-FOOM-Debatech32.html#enz.42)^[]{#AI-FOOM-Debatech32.html#enz.42.backref} OpenCyc is a public subset of Cyc.\n\n[]{#AI-FOOM-Debatech32.html#likesection.43}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/i-heart-cyc.html#comment-518231377): So my genuine, actual reaction to seeing this post title was, \"You heart *[what]{.textsc}?*\"\n>\n> Knowledge isn't being able to repeat back English statements. This is true even of humans. It's a hundred times more true of AIs, even if you turn the words into tokens and put the tokens in tree structures.\n>\n> A basic exercise to perform with any supposed AI is to replace all the English names with random gensyms and see what the AI can still do, if anything. Deep Blue remains invariant under this exercise. Cyc, maybe, could count---it may have a genuine understanding of the word \"four\"---and could check certain uncomplicatedly structured axiom sets for logical consistency, although not, of course, anything on the order of say Peano arithmetic. The rest of Cyc is bogus. If it knows about anything, it only knows about certain relatively small and simple mathematical objects, certainly nothing about the real world.\n>\n> You can't get knowledge into a computer that way. At all. Cyc is composed almost entirely of fake knowledge (barring anything it knows about certain simply structured mathematical objects).\n>\n> As a search engine or something, Cyc might be an interesting startup, though I certainly wouldn't invest in it. As an Artificial General Intelligence, Cyc is just plain awful. It's not just that most of it is composed of suggestively named [lisp]{.textsc} tokens, there are also the other hundred aspects of cognition that are simply entirely missing. Like, say, probabilistic reasoning, or decision theory, or sensing or acting or---\n>\n> ---for the love of Belldandy! 
How can you even call this sad little thing an AGI project?\n>\n> So long as they maintained their current architecture, I would have no fear of Cyc even if there were a million programmers working on it and they had access to a computer the size of a moon, any more than I would live in fear of a dictionary program containing lots of words.\n>\n> Cyc is so unreservedly hopeless, especially by comparison to [eurisko]{.textsc} that came before it, that it makes me seriously wonder if Lenat is doing something that I'm not supposed to postulate because it can always be more simply explained by foolishness rather than conspiracy.\n>\n> Of course there are even sillier projects. Hugo de Garis and Mentifex both come to mind.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/i-heart-cyc.html#comment-518231501): . . . Conversation *is* action. Replacing every word you spoke or heard with a new random gensym would destroy your ability to converse with others. So that would be a terrible way to test your true knowledge that enables your conversation. I'll grant that an ability to converse is a limited ability, and the ability to otherwise act effectively greatly expands one's capability and knowledge.\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/i-heart-cyc.html#comment-518231878): Okay . . . look at it this way. Chimpanzees share 95% of our DNA and have much of the same gross cytoarchitecture of their brains. You cannot explain to *chimpanzees* that Paris is the capital of France. You can train them to hold up a series of signs saying \"Paris,\" then \"Is-Capital-Of,\" then \"France.\" But you cannot explain to them that Paris is the capital of France.\n>\n> And a chimpanzee's cognitive architecture is *hugely* more sophisticated than Cyc's. Cyc isn't close. It's not in the ballpark. It's not in the galaxy holding the star around which circles the planet whose continent contains the country in which lies the city that built the ballpark.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/i-heart-cyc.html#comment-518231901): Eliezer, we can make computers do lots of things we can't train chimps to do. Surely we don't want to limit AI research to only achieving chimp behaviors. We want to be opportunistic---developing whatever weak abilities have the best chance of leading later to stronger abilities. Answering encyclopedia questions might be the best weak ability to pursue first. Or it might not. Surely we just don't know, right?\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/12/i-heart-cyc.html) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech32.html#enz.41} [1](#AI-FOOM-Debatech32.html#enz.41.backref). []{#AI-FOOM-Debatech32.html#cite.0.Guha.1993}R. V. Guha and Douglas B. Lenat, \"Re: CycLing Paper Reviews,\" *Artificial Intelligence* 61, no. 1 (1993): 149--174, doi:[10.1016/(93)90100-P](http://dx.doi.org/10.1016/(93)90100-P).\n\n[]{#AI-FOOM-Debatech32.html#enz.42} [2](#AI-FOOM-Debatech32.html#enz.42.backref). ; dead page, redirects to OpenCyc project.\n\n[]{#AI-FOOM-Debatech33.html}\n\n## []{#AI-FOOM-Debatech33.html#x37-}[Chapter 32]{.titlemark} Is the City-ularity Near? {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [9 February 2010]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nThe land around New York City is worth a *lot*. 
A 2008 [analysis](http://www.newyorkfed.org/research/current_issues/ci14-3/ci14-3.html)^[1](#AI-FOOM-Debatech33.html#enz.43)^[]{#AI-FOOM-Debatech33.html#enz.43.backref} estimated prices for land, not counting buildings etc., for four boroughs of the city plus nearby parts of New Jersey (2,770 square miles, equivalent to a fifty-two-mile square). The total land value for this area (total land times average price) was \\$5.5 trillion in 2002 and \\$28 trillion in 2006.\n\n*The Economist* [said](http://www.economist.com/node/) that in 2002 all developed-nation real estate was worth \\$62 trillion.^[2](#AI-FOOM-Debatech33.html#enz.44)^[]{#AI-FOOM-Debatech33.html#enz.44.backref} Since raw land value is on average [about a third](http://www.jstor.org/stable/)^[3](#AI-FOOM-Debatech33.html#enz.45)^[]{#AI-FOOM-Debatech33.html#enz.45.backref} of total real-estate value, that puts New York-area real estate at over 30% of all developed-nation real estate in 2002! Whatever the exact number, clearly this agglomeration contains vast value.\n\nNew York land is valuable mainly because of how it is organized. People want to be there because they want to interact with other people they expect to be there, and they expect those interactions to be quite mutually beneficial. If you could take any other fifty-mile square (of which Earth has seventy-two thousand) and create that same expectation of mutual value from interactions, you could get people to come there, make buildings, etc., and you could sell that land for many trillions of dollars of profit.\n\nYet the organization of New York was mostly set long ago based on old tech (e.g., horses, cars, typewriters). Worse, no one really understands at a deep level how it is organized or why it works so well. Different people understand different parts, in mostly crude empirical ways.\n\nSo what will happen when super-duper smarties wrinkle their brows so hard that out pops a deep mathematical theory of cities, explaining clearly how city value is produced? What if they apply their theory to designing a city structure that takes best advantage of our most advanced techs, of 7gen phones, twitter-pedias, flying Segways, solar panels, gene-mod pigeons, and super-fluffy cupcakes? Making each city aspect more efficient makes the city more attractive, increasing the gains from making other aspects more efficient, in a grand spiral of bigger and bigger gains.\n\nOnce they convince the world of the vast value in their super-stupendous city design, won't everyone flock there and pay mucho trillions for the privilege? Couldn't they leverage this lead into better theories, enabling better designs giving far more trillions, and then spend all that on a super-designed war machine based on those same super-insights, and turn us all into down dour super-slaves? So isn't the very mostest importantest cause ever to make sure that we, the friendly freedom fighters, find this super-deep city theory first?\n\nWell, no, it isn't. We don't believe in a city-ularity because we don't believe in a super-city theory found in a big brain flash of insight. What makes cities work well is mostly getting lots of details right. Sure, new-tech-based city designs can work better, but gradual tech gains mean no city is suddenly vastly better than others. Each change has costs to be weighed against hoped-for gains. 
Sure, costs of change might be lower when making a whole new city from scratch, but for that to work you have to be damn sure you know which changes are actually good ideas.\n\nFor similar reasons, I'm skeptical of a blank-slate AI mind-design intelligence explosion. Sure, if there were a super mind theory that allowed vast mental efficiency gains all at once---but there isn't. Minds are vast complex structures full of parts that depend intricately on each other, much like the citizens of a city. Minds, like cities, best improve gradually, because you just never know enough to manage a vast redesign of something with such complex interdependent adaptations.\n\n[]{#AI-FOOM-Debatech33.html#likesection.44}\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2010/02/is-the-city-ularity-near.html) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech33.html#enz.43} [1](#AI-FOOM-Debatech33.html#enz.43.backref). []{#AI-FOOM-Debatech33.html#cite.0.Haughwout.2008}Andrew Haughwout, James Orr, and David Bedoll, \"The Price of Land in the New York Metropolitan Area,\" *Current Issues in Economics and Finance* 13, no. 3 (2008), accessed June 21, 2013, .\n\n[]{#AI-FOOM-Debatech33.html#enz.44} [2](#AI-FOOM-Debatech33.html#enz.44.backref). []{#AI-FOOM-Debatech33.html#cite.0.Economist.2003}\"House of Cards,\" *The Economist*, May 29, 2003, .\n\n[]{#AI-FOOM-Debatech33.html#enz.45} [3](#AI-FOOM-Debatech33.html#enz.45.backref). []{#AI-FOOM-Debatech33.html#cite.0.Douglas.1978}Richard W. Douglas Jr., \"Site Value Taxation and Manvel's Land Value Estimates,\" *American Journal of Economics and Sociology* 37, no. 2 (1978): 217--223, .\n\n[]{#AI-FOOM-Debatech34.html}\n\n## []{#AI-FOOM-Debatech34.html#x38-}[Chapter 33]{.titlemark} Recursive Self-Improvement {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [1 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n**Followup to:** [Life's Story Continues](../Text/AI-FOOM-Debatech15.html#x19-), [Surprised by Brains](../Text/AI-FOOM-Debatech19.html#x23-), [Cascades, Cycles, Insight](../Text/AI-FOOM-Debatech21.html#x25-), [. . . Recursion, Magic](../Text/AI-FOOM-Debatech23.html#x27-), [Engelbart: Insufficiently Recursive](../Text/AI-FOOM-Debatech25.html#x29-), [Total Nano Domination](../Text/AI-FOOM-Debatech26.html#x30-)\n\nI think that, at some point in the development of Artificial Intelligence, we are likely to see a *fast, local* increase in capability---\"AI go FOOM.\" Just to be clear on the claim, \"fast\" means on a timescale of weeks or hours rather than years or decades; and \"FOOM\" means way the hell smarter than anything else around, capable of delivering in short time periods technological advancements that would take humans decades, probably including full-scale molecular nanotechnology (that it gets by, e.g., ordering custom proteins over the Internet with seventy-two-hour turnaround time). Not, \"ooh, it's a little [Einstein](http://lesswrong.com/lw/qk/that_alien_message/) but it doesn't have any robot hands, how cute.\"\n\nMost people who object to this scenario object to the \"fast\" part. Robin Hanson objected to the \"local\" part. I'll try to handle both, though not all in one shot today.\n\nWe are setting forth to analyze the developmental velocity of an Artificial Intelligence. 
We'll break down this velocity into [optimization slope, optimization resources, and optimization efficiency](../Text/AI-FOOM-Debatech15.html#x19-). We'll need to understand [cascades, cycles, insight](../Text/AI-FOOM-Debatech21.html#x25-), and [recursion](../Text/AI-FOOM-Debatech23.html#x27-); and we'll stratify our recursive levels into the [metacognitive, cognitive, metaknowledge, knowledge, and object levels](../Text/AI-FOOM-Debatech23.html#x27-).\n\nQuick review:\n\n- \"Optimization slope\" is the goodness and number of opportunities in the volume of solution space you're currently exploring, on whatever your problem is.\n- \"Optimization resources\" is how much computing power, sensory bandwidth, trials, etc. you have available to explore opportunities.\n- \"Optimization efficiency\" is how well you use your resources. This will be determined by the goodness of your current mind design---the point in mind-design space that is your current self---along with its knowledge and metaknowledge (see below).\n\nOptimizing *yourself* is a special case, but it's one we're about to spend a lot of time talking about.\n\nBy the time any mind solves some kind of *actual problem*, there's actually been a huge causal lattice of optimizations applied---for example, human brains evolved, and then humans developed the idea of science, and then applied the idea of science to generate knowledge about gravity, and then you use this knowledge of gravity to finally design a damn bridge or something.\n\nSo I shall stratify this causality into levels---the [boundaries](http://lesswrong.com/lw/o0/where_to_draw_the_boundary/) being semi-arbitrary, but you've got to draw them somewhere:\n\n- \"Metacognitive\" is the optimization that builds the brain---in the case of a human, natural selection; in the case of an AI, either human programmers or, after some point, the AI itself.\n- \"Cognitive,\" in humans, is the labor performed by your neural circuitry, algorithms that consume large amounts of computing power but are mostly opaque to you. You know what you're seeing, but you don't know how the visual cortex works. The Root of All Failure in AI is to underestimate those algorithms because you can't see them . . . In an AI, the lines between procedural and declarative knowledge are theoretically blurred, but in practice it's often possible to distinguish cognitive algorithms and cognitive content.\n- \"Metaknowledge\": Discoveries about how to discover, \"Science\" being an archetypal example, \"Math\" being another. You can think of these as reflective cognitive content (knowledge about how to think).\n- \"Knowledge\": Knowing how gravity works.\n- \"Object level\": Specific actual problems like building a bridge or something.\n\nI am arguing that an AI's developmental velocity will not be smooth; the following are some classes of phenomena that might lead to non-smoothness. First, a couple of points that weren't raised earlier:\n\n- *Roughness:* A search space can be naturally rough---have unevenly distributed *slope*. With constant optimization pressure, you could go through a long phase where improvements are easy, then hit a new volume of the search space where improvements are tough. Or vice versa. 
Call this factor *roughness*.\n- *Resource overhangs:* Rather than resources growing incrementally by reinvestment, there's a big bucket o' resources behind a locked door, and once you unlock the door you can walk in and take them all.\n\nAnd these other factors previously covered:\n\n- *Cascades* are when one development leads the way to another---for example, once you discover gravity, you might find it easier to understand a coiled spring.\n- *Cycles* are feedback loops where a process's output becomes its input on the next round. As the classic example of a fission chain reaction illustrates, a cycle whose underlying processes are continuous may show qualitative changes of surface behavior---a threshold of criticality---the difference between each neutron leading to the emission of 0.9994 additional neutrons versus each neutron leading to the emission of 1.0006 additional neutrons. The effective neutron multiplication factor is k and I will use it metaphorically.\n- *Insights* are items of knowledge that tremendously decrease the cost of solving a wide range of problems---for example, once you have the calculus insight, a whole range of physics problems become a whole lot easier to solve. Insights let you fly through, or teleport through, the solution space, rather than searching it by hand---that is, \"insight\" represents knowledge about the structure of the search space itself.\n\nAnd finally:\n\n- *Recursion* is the sort of thing that happens when you hand the AI the object-level problem of \"redesign your own cognitive algorithms.\"\n\n[]{#AI-FOOM-Debatech34.html#likesection.45}Suppose I go to an AI programmer and say, \"Please write me a program that plays chess.\" The programmer will tackle this using their existing knowledge and insight in the domain of chess and search trees; they will apply any metaknowledge they have about how to solve programming problems or AI problems; they will process this knowledge using the deep algorithms of their neural circuitry; and this neural circuitry will have been designed (or rather its wiring algorithm designed) by natural selection.\n\nIf you go to a sufficiently sophisticated AI---more sophisticated than any that currently exists---and say, \"write me a chess-playing program,\" the same thing might happen: The AI would use its knowledge, metaknowledge, and existing cognitive algorithms. Only the AI's *metacognitive* level would be, not natural selection, but the *object level* of the programmer who wrote the AI, using *their* knowledge and insight, etc.\n\nNow suppose that instead you hand the AI the problem, \"Write a better algorithm than X for storing, associating to, and retrieving memories.\" At first glance this may appear to be just another object-level problem that the AI solves using its current knowledge, metaknowledge, and cognitive algorithms. And indeed, in one sense it should be just another object-level problem. 
But it so happens that the AI itself uses algorithm X to store associative memories, so if the AI can improve on this algorithm, it can rewrite its code to use the new algorithm X+1.\n\nThis means that the AI's *metacognitive* level---the optimization process responsible for structuring the AI's cognitive algorithms in the first place---has now collapsed to identity with the AI's *object* level.\n\nFor some odd reason, I run into a lot of people who vigorously deny that this phenomenon is at all novel; they say, \"Oh, humanity is already self-improving, humanity is already going through a FOOM, humanity is already in an Intelligence Explosion,\" etc., etc.\n\nNow to me, it seems clear that---at this point in the game, in advance of the observation---it is *pragmatically* worth drawing a distinction between inventing agriculture and using that to support more professionalized inventors, versus directly rewriting your own source code in RAM. Before you can even *argue* about whether the two phenomena are likely to be similar in practice, you need to accept that they are, in fact, two different things to be argued *about*.\n\nAnd I do expect them to be very distinct in practice. Inventing science is not rewriting your neural circuitry. There is a tendency to *completely overlook* the power of brain algorithms, because they are invisible to introspection. It took a long time historically for people to realize that there *was* such a thing as a cognitive algorithm that could underlie thinking. And then, once you point out that cognitive algorithms exist, there is a tendency to tremendously underestimate them, because you don't know the specific details of how your hippocampus is storing memories well or poorly---you don't know how it could be improved, or what difference a slight degradation could make. You can't draw detailed causal links between the wiring of your neural circuitry and your performance on real-world problems. All you can *see* is the knowledge and the metaknowledge, and that's where all your causal links go; that's all that's *visibly* important.\n\nTo see the brain circuitry vary, you've got to look at a chimpanzee, basically. Which is not something that most humans spend a lot of time doing, because chimpanzees can't play our games.\n\nYou can also see the tremendous overlooked power of the brain circuitry by observing what happens when people set out to program what looks like \"knowledge\" into Good-Old-Fashioned AIs, semantic nets and such. Roughly, nothing happens. Well, research papers happen. But no actual intelligence happens. Without those opaque, overlooked, invisible brain algorithms, there is no real knowledge---only a tape recorder playing back human words. If you have a small amount of fake knowledge, it doesn't do anything, and if you have a huge amount of fake knowledge programmed in at huge expense, it still doesn't do anything.\n\nSo the cognitive level---in humans, the level of neural circuitry and neural algorithms---is a level of tremendous but invisible power. The difficulty of penetrating this invisibility and creating a real cognitive level is what stops modern-day humans from creating AI. 
(Not that an AI's cognitive level would be made of neurons or anything equivalent to neurons; it would just do cognitive labor on the same [level of organization](http://intelligence.org/files/LOGI.pdf).^[1](#AI-FOOM-Debatech34.html#enz.46)^[]{#AI-FOOM-Debatech34.html#enz.46.backref} Planes don't flap their wings, but they have to produce lift somehow.)\n\nRecursion that can rewrite the cognitive level is *worth distinguishing*.\n\nBut to some, having a term so [narrow](http://lesswrong.com/lw/ic/the_virtue_of_narrowness/) as to refer to an AI rewriting its own source code, and not to humans inventing farming, seems [hardly open, hardly embracing, hardly communal](http://lesswrong.com/lw/ic/the_virtue_of_narrowness/); for we all know that [to say two things are similar shows greater enlightenment than saying that they are different](http://lesswrong.com/lw/ic/the_virtue_of_narrowness/). Or maybe it's as simple as identifying \"recursive self-improvement\" as a term with positive [affective valence](http://lesswrong.com/lw/lg/the_affect_heuristic/), so you figure out a way to apply that term to humanity, and then you get a nice dose of warm fuzzies. Anyway.\n\nSo what happens when you start rewriting cognitive algorithms?\n\nWell, we do have *one* well-known historical case of an optimization process writing cognitive algorithms to do further optimization; this is the case of [natural selection, our alien god](http://lesswrong.com/lw/kr/an_alien_god/).\n\nNatural selection seems to have produced a pretty smooth trajectory of more sophisticated brains over the course of hundreds of millions of years. That gives us our first data point, with these characteristics:\n\n[]{#AI-FOOM-Debatech34.html#likesection.46}\n\n- Natural selection on sexual multicellular eukaryotic life can probably be treated as, to first order, an optimizer of *roughly constant efficiency and constant resources*.\n- Natural selection does not have anything akin to insights. It does sometimes stumble over adaptations that prove to be surprisingly reusable outside the context for which they were adapted, but it doesn't fly through the search space like a human. Natural selection is just *searching the immediate neighborhood of its present point in the solution space, over and over and over.*\n- Natural selection *does* have cascades: adaptations open up the way for further adaptations.\n\nSo---*if* you're navigating the search space via the [ridiculously stupid and inefficient](http://lesswrong.com/lw/kt/evolutions_are_stupid_but_work_anyway/) method of looking at the neighbors of the current point, without insight---with constant optimization pressure---then . . .\n\nWell, I've heard it claimed that the evolution of biological brains has accelerated over time, and I've also heard that claim challenged. If there's actually been an acceleration, I would tend to attribute that to the \"adaptations open up the way for further adaptations\" phenomenon---the more brain genes you have, the more chances for a mutation to produce a new brain gene. (Or, more complexly: The more organismal error-correcting mechanisms the brain has, the more likely a mutation is to produce something useful rather than fatal.) 
In the case of hominids in particular over the last few million years, we may also have been experiencing accelerated *selection* on brain proteins, *per se*---which I would attribute to sexual selection, or brain variance accounting for a greater proportion of total fitness variance.\n\nAnyway, what we definitely do *not* see under these conditions is *logarithmic* or *decelerating* progress. It did *not* take ten times as long to go from *H. erectus* to *H. sapiens* as from *H. habilis* to *H. erectus*. Hominid evolution did *not* take eight hundred million years of additional time, after evolution immediately produced *Australopithecus*-level brains in just a few million years after the invention of neurons themselves.\n\n[]{#AI-FOOM-Debatech34.html#likesection.47} And another, similar observation: human intelligence does *not* require a hundred times as much computing power as chimpanzee intelligence. Human brains are merely three times too large, and our prefrontal cortices six times too large, for a primate with our body size.\n\nOr again: It does not seem to require a thousand times as many genes to build a human brain as to build a chimpanzee brain, even though human brains can build toys that are a thousand times as neat.\n\nWhy is this important? Because it shows that with *constant optimization pressure* from natural selection and *no intelligent insight*, there were *no diminishing returns* to a search for better brain designs up to at least the human level. There were probably *accelerating* returns (with a low acceleration factor). There are no *visible speed bumps*, [so far as I know](http://lesswrong.com/lw/kj/no_one_knows_what_science_doesnt_know/).\n\nBut all this is to say only of natural selection, which is not recursive.\n\nIf you have an investment whose output is not coupled to its input---say, you have a bond, and the bond pays you a certain amount of interest every year, and you spend the interest every year---then this will tend to return you a linear amount of money over time. After one year, you've received \\$10; after two years, \\$20; after three years, \\$30.\n\nNow suppose you *change* the qualitative physics of the investment, by coupling the output pipe to the input pipe. Whenever you get an interest payment, you invest it in more bonds. Now your returns over time will follow the curve of compound interest, which is exponential. (Please note: *Not all accelerating processes are smoothly exponential.* But this one happens to be.)\n\nThe first process grows at a rate that is linear over *time*; the second process grows at a rate that is linear in its *cumulative return so far*.\n\nThe too-obvious mathematical idiom to describe the impact of recursion is replacing an equation\n\n::: {.pic-align .align}\n*y = f(t)*\n:::\n\nwith\n\n::: {.pic-align .align}\n*dy/dt = f(y)*\n:::\n\nFor example, in the case above, reinvesting our returns transformed the *linearly* growing\n\n::: {.pic-align .align}\n*y = m × t*\n:::\n\ninto\n\n::: {.pic-align .align}\n*dy/dt = m × y*\n:::\n\nwhose solution is the exponentially growing\n\n::: {.pic-align .align}\n*y = e^m×t^.*\n:::\n\nNow . . . I do not think you can *really* solve equations like this to get anything like a description of a self-improving AI.\n\nBut it's the obvious reason why I *don't* expect the future to be a continuation of past trends. 
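\nTo make the contrast concrete, here is a minimal numerical sketch that integrates both idioms with a crude Euler step; the rate m, the step size, and the horizon are arbitrary illustrative choices.\n\n```python\n# Minimal sketch: the two investment idioms above, integrated with a crude\n# Euler step. The rate m, the step size, and the horizon are arbitrary\n# illustrative choices.\nm, dt, horizon = 0.10, 0.01, 50.0\n\ny_spend = 0.0     # interest is spent:      dy/dt = m * 1  (principal held fixed)\ny_reinvest = 1.0  # interest is reinvested: dy/dt = m * y\nt = 0.0\nwhile t < horizon:\n    y_spend += m * 1.0 * dt            # grows like m * t (linear in time)\n    y_reinvest += m * y_reinvest * dt  # grows like e^(m * t) (exponential)\n    t += dt\n\nprint(f'spend the interest:    {y_spend:10.1f}')     # roughly 5\nprint(f'reinvest the interest: {y_reinvest:10.1f}')  # roughly 148, about e^5\n```\n\nThe only difference between the two update lines is whether the current output is fed back in as input; that single change is what turns the straight line into the exponential.\n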
The future contains a feedback loop that the past does not.\n\nAs a different Eliezer Yudkowsky wrote, very long ago: \"If computing power doubles every eighteen months, what happens when computers are doing the research?\"^[2](#AI-FOOM-Debatech34.html#enz.47)^[]{#AI-FOOM-Debatech34.html#enz.47.backref}\n\nAnd this sounds horrifyingly naive to my present ears, because that's not really how it works at all---but still, it illustrates the idea of \"the future contains a feedback loop that the past does not.\"\n\nHistory up until this point was a long story about natural selection producing humans, and then, after humans hit a certain threshold, humans starting to rapidly produce knowledge and metaknowledge that could---among other things---feed more humans and support more of them in lives of professional specialization.\n\nTo a first approximation, natural selection held still during human cultural development. Even if [Gregory Clark's crazy ideas](https://en.wikipedia.org/wiki/Gregory_Clark_(economist)) (Wikipedia) are crazy enough to be true---i.e., some human populations evolved lower discount rates and more industrious work habits over the course of just a few hundred years from 1200 to 1800^[3](#AI-FOOM-Debatech34.html#enz.48)^[]{#AI-FOOM-Debatech34.html#enz.48.backref} ---that's just tweaking a few relatively small parameters; it is not the same as developing new complex adaptations with lots of interdependent parts. It's not a [chimp-human type gap](http://lesswrong.com/lw/ql/my_childhood_role_model/).\n\nSo then, *with human cognition remaining more or less constant*, we found that knowledge feeds off knowledge with k \\> 1---given a background of roughly constant cognitive algorithms at the human level. We discovered major chunks of metaknowledge, like Science and the notion of Professional Specialization, that changed the exponents of our progress; having lots more humans around, due to, e.g., the object-level innovation of farming, may have also played a role. Progress in any one area tended to be choppy, with large insights leaping forward, followed by a lot of slow incremental development.\n\nWith history *to date*, we've got a series of integrals looking something like this:\n\n- Metacognitive = natural selection, optimization efficiency/resources roughly constant\n- Cognitive = Human intelligence = integral of evolutionary optimization velocity over a few hundred million years, then roughly *constant* over the last ten thousand years\n- Metaknowledge = Professional Specialization, Science, etc. 
= integral over cognition we did about procedures to follow in thinking, where metaknowledge can also feed on itself, there were major insights and cascades, etc.\n- Knowledge = all that actual science, engineering, and general knowledge accumulation we did = integral of cognition + metaknowledge (current knowledge) over time, where knowledge feeds upon itself in what seems to be a roughly exponential process\n- Object level = stuff we actually went out and did = integral of cognition + metaknowledge + knowledge (current solutions); over a short timescale this tends to be smoothly exponential to the degree that the people involved understand the idea of investments competing on the basis of interest rate, but over medium-range timescales the exponent varies, and on a long range the exponent seems to increase\n\nIf you were to summarize that in one breath, it would be, \"With constant natural selection pushing on brains, progress was linear or mildly accelerating; with constant brains pushing on metaknowledge and knowledge and object-level progress feeding back to metaknowledge and optimization resources, progress was exponential or mildly superexponential.\"\n\nNow fold back the object level so that it becomes the metacognitive level.\n\nAnd note that we're doing this through a chain of differential equations, not just one; it's the *final* output at the object level, after all those integrals, that becomes the velocity of metacognition.\n\nYou should get . . .\n\n. . . very fast progress? Well, no, not necessarily. You can also get nearly *zero* progress.\n\nIf you're a recursified [optimizing compiler](../Text/AI-FOOM-Debatech23.html#x27-), you rewrite yourself just once, get a single boost in speed (like 50% or something), and then never improve yourself any further, ever again.\n\nIf you're [[eurisko](../Text/AI-FOOM-Debatech23.html#x27-)]{.textsc}, you manage to modify some of your metaheuristics, and the metaheuristics work noticeably better, and they even manage to make a few further modifications to themselves, but then the whole process runs out of steam and flatlines.\n\nIt was human intelligence that produced these artifacts to begin with. Their *own* optimization power is far short of human---so incredibly weak that, after they push themselves along a little, they can't push any further. Worse, their optimization at any given level is characterized by a limited number of opportunities, which once used up are gone---extremely sharp diminishing returns.\n\n[]{#AI-FOOM-Debatech34.html#likesection.48} When you fold a complicated, choppy, cascade-y chain of differential equations in on itself via recursion, *it should either flatline or blow up*. You would need *exactly the right law of diminishing returns* to fly through the extremely narrow *soft-takeoff keyhole*.\n\nThe *observed history of optimization to date* makes this *even more unlikely*. I don't see any reasonable way that you can have constant evolution produce human intelligence on the observed historical trajectory (linear or accelerating), and constant human intelligence produce science and technology on the observed historical trajectory (exponential or superexponential), and *fold that in on itself* , and get out something whose rate of progress is in any sense *anthropomorphic*. From our perspective it should either flatline or FOOM.\n\nWhen you first build an AI, it's a baby---if it had to improve *itself* , it would almost immediately flatline. 
So you push it along using your own cognition, metaknowledge, and knowledge---*not* getting any benefit of recursion in doing so, just the usual human idiom of knowledge feeding upon itself and insights cascading into insights. Eventually the AI becomes sophisticated enough to start improving *itself* , not just small improvements, but improvements large enough to cascade into other improvements. (Though right now, due to lack of human insight, what happens when modern researchers push on their AGI design is mainly nothing.) And then you get what I. J. Good called an \"intelligence explosion.\"\n\nI even want to say that the functions and curves being such as to allow hitting the soft-takeoff keyhole is *ruled out* by observed history to date. But there are small conceivable loopholes, like \"maybe all the curves change drastically and completely as soon as we get past the part we know about in order to give us exactly the right anthropomorphic final outcome,\" or \"maybe the trajectory for insightful optimization of intelligence has a law of diminishing returns where blind evolution gets accelerating returns.\"\n\nThere's other factors contributing to hard takeoff, like the existence of hardware overhang in the form of the poorly defended Internet and fast serial computers. There's more than one possible species of AI we could see, given this whole analysis. I haven't yet touched on the issue of localization (though the basic issue is obvious: the initial recursive cascade of an intelligence explosion can't race through human brains because human brains are not modifiable until the AI is already superintelligent).\n\nBut today's post is already too long, so I'd best continue tomorrow.\n\n**Post scriptum:** It occurred to me just after writing this that I'd been victim of a cached Kurzweil thought in speaking of the knowledge level as \"exponential.\" Object-level resources are exponential in human history because of physical cycles of reinvestment. If you try defining knowledge as productivity per worker, I expect that's exponential too (or productivity growth would be unnoticeable by now as a component in economic progress). I wouldn't be surprised to find that published journal articles are growing exponentially. But I'm not quite sure that it makes sense to say humanity has learned as much since 1938 as in all earlier human history . . . though I'm quite willing to believe we produced more goods . . . then again we surely learned more since 1500 than in all the time before. Anyway, human knowledge being \"exponential\" is a more complicated issue than I made it out to be. But the human object level is more clearly exponential or superexponential.\n\n[]{#AI-FOOM-Debatech34.html#likesection.49}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://lesswrong.com/lw/we/recursive_selfimprovement/pbh): Depending on which abstractions you emphasize, you can describe a new thing as something completely new under the sun, or as yet another example of something familiar. So the issue is which abstractions make the most sense to use. We have seen cases before where when one growth via some growth channel opened up more growth channels to further enable growth. So the question is how similar those situations are to this situation, where an AI getting smarter allows an AI to change its architecture in more and better ways. 
Which is another way of asking which abstractions are most relevant.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/we/recursive_selfimprovement/pbt): . . . Well, the whole post above is just putting specific details on that old claim, \"Natural selection producing humans and humans producing technology can't be extrapolated to an AI insightfully modifying its low-level brain algorithms, because the latter case contains a feedback loop of an importantly different type; it's like trying to extrapolate a bird flying outside the atmosphere or extrapolating the temperature/compression law of a gas past the point where the gas becomes a black hole.\"\n>\n> If you just pick an abstraction that isn't detailed enough to talk about the putative feedback loop, and then insist on extrapolating out the old trends from the absence of the feedback loop, I would consider this a weak response. . . .\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/we/recursive_selfimprovement/) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech34.html#enz.46} [1](../Text/AI-FOOM-Debatech34.html#enz.46.backref). []{#AI-FOOM-Debatech34.html#cite.0.Yudkowsky.2007a}Eliezer Yudkowsky, \"Levels of Organization in General Intelligence,\" in []{#AI-FOOM-Debatech34.html#cite.0.Goertzel.2007}*Artificial General Intelligence*, ed. Ben Goertzel and Cassio Pennachin, Cognitive Technologies (Berlin: Springer, 2007), doi:[10.1007/978-3-540-68677-4](http://dx.doi.org/10.1007/978-3-540-68677-4), 389--501.\n\n[]{#AI-FOOM-Debatech34.html#enz.47} [2](#AI-FOOM-Debatech34.html#enz.47.backref). []{#AI-FOOM-Debatech34.html#cite.0.Yudkowsky.1996}Eliezer Yudkowsky, \"Staring into the Singularity\" (Unpublished manuscript, 1996), last revised May 27, 2001, .\n\n[]{#AI-FOOM-Debatech34.html#enz.48} [3](#AI-FOOM-Debatech34.html#enz.48.backref). []{#AI-FOOM-Debatech34.html#cite.0.Clark.2007}Gregory Clark, *A Farewell to Alms: A Brief Economic History of the World*, 1st ed. (Princeton, NJ: Princeton University Press, 2007).\n\n[]{#AI-FOOM-Debatech35.html}\n\n## []{#AI-FOOM-Debatech35.html#x39-}[Chapter 34]{.titlemark} Whither Manufacturing? {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [2 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nBack in the '70s many folks thought they knew what the future of computing looked like: everyone sharing time slices of a few huge computers. After all, they saw that CPU cycles, the main computing cost, were cheaper on bigger machines. This analysis, however, ignored large administrative overheads in dealing with shared machines. People eagerly grabbed personal computers (PCs) to avoid those overheads, even though PC CPU cycles were more expensive.\n\nSimilarly, people seem to make lots of assumptions when they refer to \"full-scale nanotechnology.\" This phrase seems to elicit images of fridge-sized home appliances that, when plugged in and stocked with a few \"toner cartridges,\" make anything a CAD system can describe, and so quickly and cheaply that only the most price-sensitive folks would consider making stuff any other way. It seems people learned too much from the PC case, thinking everything must become personal and local. (Note computing is now getting *less* local.) 
But *there is no general law of increasingly local production*.\n\nThe locality of manufacturing, and computing as well, have always come from tradeoffs between economies and diseconomies of scale. Things can often be made cheaper in big centralized plants, especially if located near key inputs. When processing bulk materials, for example, there is a rough two-thirds-cost power law: throughput goes as volume, while the cost to make and manage machinery tends to go as surface area. But it costs more to transport products from a few big plants. Local plants can offer more varied products, explore more varied methods, and deliver cheaper and faster.\n\nInnovation and adaption to changing conditions can be faster or slower at centralized plants, depending on other details. Politics sometimes pushes for local production to avoid dependence on foreigners, and at other times pushes for central production to make succession more difficult. Smaller plants can better avoid regulation, while larger ones can gain more government subsidies. When formal intellectual property is weak (the usual case), producers can prefer to make and sell parts instead of selling recipes for making parts.\n\nOften producers don't even really know how they achieve the quality they do. Manufacturers today make great use of expensive intelligent labor; while they might prefer to automate all production, they just don't know how. It is not at all obvious how feasible is \"full nanotech,\" if defined as fully automated manufacturing, in the absence of full AI. Nor is it obvious that even fully automated manufacturing would be very local production. The optimal locality will depend on how all these factors change over the coming decades; don't be fooled by confident conclusions based on only one or two of these factors. More [here](http://hanson.gmu.edu/nanoecon.pdf).^[1](#AI-FOOM-Debatech35.html#enz.49)^[]{#AI-FOOM-Debatech35.html#enz.49.backref}\n\n[]{#AI-FOOM-Debatech35.html#likesection.50}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/whither-manufac.html#comment-518232637): I have no objection to most of this---the main thing that I think deserves pointing out is the idea that you can serve quite a lot of needs by having \"nanoblocks\" that reconfigure themselves in response to demands. I'd think this would be a localizing force with respect to production, and a globalizing force with respect to design.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/whither-manufac.html#comment-518232661): Eliezer, the less local is manufacturing, the harder it will be for your super-AI to build undetected the physical equipment it needs to take over the world.\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/whither-manufac.html#comment-518232720): Robin, a halfway transhuman social intelligence should have *no trouble* coming up with good excuses or bribes to cover nearly anything it wants to do. We're not talking about grey goo here, we're talking about something that can invent its own cover stories. 
Current protein synthesis machines are not local---most labs send out to get the work done, though who knows how long that will stay true---but I don't think it would be very difficult for a smart AI to use them \"undetected,\" that is, without any alarms sounding about the order placed.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/whither-manufac.html#comment-518232798): Eliezer, it might take more than a few mail-order proteins to take over the world. . . .\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/whither-manufac.html#comment-518232836): . . . Robin, why does it realistically take more than a few mail-order proteins to take over the world? Ribosomes are reasonably general molecular factories and quite capable of self-replication to boot.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/whither-manufac.html#comment-518232849): Eliezer, I guess I'm just highlighting the extreme degree of intelligence postulated, that this week-old box that has made no visible outside mark beyond mail-ordering a few proteins knows enough to use those proteins to build a physically small manufacturing industry that is more powerful than the entire rest of the world.\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/whither-manufac.html#comment-518232893): Ergh, just realized that I didn't do a post discussing the bogosity of \"human-equivalent computing power\" calculations. Well, here's a start in a quick comment---Moravec, in 1988, used Moore's Law to calculate how much power we'd have in 2008.^[2](#AI-FOOM-Debatech35.html#enz.50)^[]{#AI-FOOM-Debatech35.html#enz.50.backref} He more or less nailed it. He spent a lot of pages justifying the idea that Moore's Law could continue, but from our perspective that seems more or less prosaic.\n>\n> Moravec spent fewer pages than he did on Moore's Law justifying his calculation that the supercomputers we would have in 2008 would be \"human-equivalent brainpower.\"\n>\n> Did Moravec nail that as well? Given the sad state of AI theory, we actually have no evidence against it. But personally, I suspect that he overshot; I suspect that one could build a mind of formidability roughly comparable to human on a modern-day desktop computer, or maybe even a desktop computer from 1996; because I now think that evolution wasn't all that clever with our brain design, and that the 100 Hz serial speed limit on our neurons has to be having all sorts of atrocious effects on algorithmic efficiency. If it was a superintelligence doing the design, you could probably have roughly human formidability on something substantially smaller.\n>\n> Just a very rough eyeball estimate, no real numbers behind it.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/12/whither-manufac.html) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech35.html#enz.49} [1](#AI-FOOM-Debatech35.html#enz.49.backref). Hanson, [\"Five Nanotech Social Scenarios](../Text/AI-FOOM-Debatech27.html#cite.0.Hanson.2007a).\"\n\n[]{#AI-FOOM-Debatech35.html#enz.50} [2](#AI-FOOM-Debatech35.html#enz.50.backref). []{#AI-FOOM-Debatech35.html#cite.0.Moravec.1988}Hans P. 
Moravec, *Mind Children: The Future of Robot and Human Intelligence* (Cambridge, MA: Harvard University Press, 1988).\n\n[]{#AI-FOOM-Debatech36.html}\n\n## []{#AI-FOOM-Debatech36.html#x40-}[Chapter 35]{.titlemark} Hard Takeoff {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [2 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n**Continuation of:** [Recursive Self-Improvement](../Text/AI-FOOM-Debatech34.html#x38-)\n\nConstant natural selection pressure, operating on the genes of the hominid line, produced improvement in brains over time that seems to have been, roughly, *linear or accelerating*; the operation of constant human brains on a pool of knowledge seems to have produced returns that are, very roughly, *exponential or superexponential*. ([Robin proposes](http://hanson.gmu.edu/longgrow.pdf) that human progress is well characterized as a series of exponential modes with diminishing doubling times.^[1](#AI-FOOM-Debatech36.html#enz.51)^[]{#AI-FOOM-Debatech36.html#enz.51.backref} )\n\nRecursive self-improvement (RSI)---an AI rewriting its own cognitive algorithms---identifies the object level of the AI with a force acting on the metacognitive level; it \"closes the loop\" or \"folds the graph in on itself.\" E.g., the difference between returns on a constant investment in a bond and reinvesting the returns into purchasing further bonds is the difference between the equations *y = f(t) = m × t* and *dy/dt = f(y) = m × y*, whose solution is the compound interest exponential *y = e^m×t^*.\n\nWhen you fold a whole chain of differential equations in on itself like this, it should either peter out rapidly as improvements fail to yield further improvements, or else go FOOM. An *exactly right law of diminishing returns* that lets the system fly through the *soft-takeoff keyhole* is unlikely---*far* more unlikely than seeing such behavior in a system with a roughly constant underlying optimizer, like evolution improving brains, or human brains improving technology. Our present life is no good indicator of things to come.\n\nOr to try and compress it down to a slogan that fits on a T-shirt---not that I'm saying this is a good idea---\"Moore's Law is exponential *now*; it would be really odd if it *stayed* exponential with the improving computers *doing the research*.\" I'm not saying you literally get *dy/dt = e^y^* that goes to infinity after finite time---and hardware improvement is in some ways the least interesting factor here---but should we really see the same curve we do now?\n\nRSI is the biggest, most interesting, hardest-to-analyze, sharpest break with the past contributing to the notion of a \"hard takeoff\" a.k.a. \"AI go FOOM,\" but it's nowhere near being the *only* such factor. [The advent of human intelligence was a discontinuity with the past](../Text/AI-FOOM-Debatech19.html#x23-) even *without* RSI . . .\n\n. . . which is to say that observed evolutionary history---the discontinuity between humans and chimps, who share 95% of our DNA---*lightly* suggests a critical threshold built into the capabilities that we think of as \"general intelligence,\" a machine that becomes far more powerful once the last gear is added.\n\nThis is only a *light* suggestion because the branching time between humans and chimps *is* enough time for a good deal of complex adaptation to occur. We could be looking at the sum of a [cascade](../Text/AI-FOOM-Debatech21.html#x25-), not the addition of a final missing gear. 
On the other hand, we can look at the gross brain anatomies and see that human brain anatomy and chimp anatomy have not diverged all that much. On the gripping hand, there's the sudden cultural revolution---the sudden increase in the sophistication of artifacts---that accompanied the appearance of anatomically modern Cro-Magnons just a few tens of thousands of years ago.\n\nNow of course this might all just be completely inapplicable to the development trajectory of AIs built by human programmers rather than by evolution. But it at least *lightly suggests*, and provides a hypothetical *illustration* of, a discontinuous leap upward in capability that results from a natural feature of the solution space---a point where you go from sorta-okay solutions to totally amazing solutions as the result of a few final tweaks to the mind design.\n\nI could potentially go on about this notion for a bit---because, in an evolutionary trajectory, it can't *literally* be a \"missing gear,\" the sort of discontinuity that follows from removing a gear that an otherwise functioning machine was built around. So if you suppose that a final set of changes was enough to produce a sudden huge leap in effective intelligence, it does demand the question of what those changes were. Something to do with reflection---the brain modeling or controlling itself---would be one obvious candidate. Or perhaps a change in motivations (more curious individuals, using the brainpower they have in different directions) in which case you *wouldn't* expect that discontinuity to appear in the AI's development, but you would expect it to be more effective at earlier stages than humanity's evolutionary history would suggest . . . But you could have whole journal issues about that one question, so I'm just going to leave it at that.\n\nOr consider the notion of sudden resource bonanzas. Suppose there's a semi-sophisticated Artificial General Intelligence running on a cluster of a thousand CPUs. The AI has not hit a wall---it's still improving itself---but its self-improvement is going so *slowly* that, the AI calculates, it will take another fifty years for it to engineer/implement/refine just the changes it currently has in mind. Even if this AI would go FOOM eventually, its current progress is so slow as to constitute being flatlined . . .\n\nSo the AI turns its attention to examining certain blobs of binary code---code composing operating systems, or routers, or DNS services---and then takes over all the poorly defended computers on the Internet. This may not require what humans would regard as genius, just the ability to examine lots of machine code and do relatively low-grade reasoning on millions of bytes of it. (I have a saying/hypothesis that a *human* trying to write *code* is like someone without a visual cortex trying to paint a picture---we can do it eventually, but we have to go pixel by pixel because we lack a sensory modality for that medium; it's not our native environment.) The Future may also have more legal ways to obtain large amounts of computing power quickly.\n\nThis sort of resource bonanza is intriguing in a number of ways. By assumption, optimization *efficiency* is the same, at least for the moment---we're just plugging a few orders of magnitude more resource into the current input/output curve. 
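\nAs a toy numerical sketch, with invented curve shapes and arbitrary performance units, consider what a ten-thousand-fold resource bonanza buys on three differently shaped input/output curves:\n\n```python\n# Toy sketch: what a 10,000-fold resource bonanza buys on three invented\n# input/output curve shapes. Performance units are arbitrary.\nimport math\n\ncurves = {\n    'logarithmic (brute-force search)': lambda r: math.log10(r),\n    'power law, exponent 1/4':          lambda r: r ** 0.25,\n    'linear (idealized)':               lambda r: r,\n}\n\nbefore, after = 1e3, 1e7   # ten thousand times more computing power\nfor name, curve in curves.items():\n    multiplier = curve(after) / curve(before)\n    print(f'{name:34s} performance multiplier: {multiplier:9.1f}')\n```\n\nOn the invented logarithmic curve the bonanza buys a bit over a twofold gain, on the quarter-power curve roughly a tenfold gain, and on the linear curve the full ten-thousand-fold.\n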
With a stupid algorithm, a few orders of magnitude more computing power will buy you only a linear increase in performance---I would not fear Cyc even if it ran on a computer the size of the Moon, because there is no there there.\n\nOn the other hand, humans have a brain three times as large, and a prefrontal cortex six times as large, as that of a standard primate our size---so with software improvements of the sort that natural selection made over the last five million years, it does not require exponential increases in computing power to support linearly greater intelligence. Mind you, this sort of biological analogy is always fraught---maybe a human has not much more cognitive horsepower than a chimpanzee, the same underlying tasks being performed, but in a few more domains and with greater reflectivity---the engine outputs the same horsepower, but a few gears were reconfigured to turn each other less wastefully---and so you wouldn't be able to go from human to superhuman with just another sixfold increase in processing power . . . or something like that.\n\nBut if the lesson of biology suggests anything, it is that you do not run into logarithmic returns on *processing power* in the course of reaching human intelligence, even when that processing power increase is strictly parallel rather than serial, provided that you are at least as good at writing software to take advantage of that increased computing power as natural selection is at producing adaptations---five million years for a sixfold increase in computing power.\n\nMichael Vassar [observed](http://lesswrong.com/lw/we/recursive_selfimprovement/pbq) in yesterday's comments that humans, by spending linearly more time studying chess, seem to get linear increases in their chess rank (across a wide range of rankings), while putting exponentially more time into a search algorithm is usually required to yield the same range of increase. Vassar called this \"bizarre,\" but I find it quite natural. Deep Blue searched the raw game tree of chess; Kasparov searched the compressed regularities of chess. It's not surprising that the simple algorithm gives logarithmic returns and the sophisticated algorithm is linear. One might say similarly of the course of human progress seeming to be closer to exponential, while evolutionary progress is closer to being linear. Being able to understand the regularity of the search space counts for quite a lot.\n\nIf the AI is somewhere in between---not as brute-force as Deep Blue, nor as compressed as a human---then maybe a ten-thousand-fold increase in computing power will only buy it a tenfold increase in optimization velocity . . . but that's still quite a speedup.\n\nFurthermore, all *future* improvements the AI makes to itself will now be amortized over ten thousand times as much computing power to apply the algorithms. So a single improvement to *code* now has more impact than before; it's liable to produce more further improvements. Think of a uranium pile. 
It's always running the same \"algorithm\" with respect to neutrons causing fissions that produce further neutrons, but just piling on more uranium can cause it to go from subcritical to supercritical, as any given neutron has more uranium to travel through and a higher chance of causing future fissions.\n\nSo just the resource bonanza represented by \"eating the Internet\" or \"discovering an application for which there is effectively unlimited demand, which lets you rent huge amounts of computing power while using only half of it to pay the bills\"---even though this event isn't particularly *recursive* of itself, just an object-level fruit-taking---could potentially drive the AI from subcritical to supercritical.\n\nNot, mind you, that this will happen with an AI that's just stupid. But an AI already improving itself *slowly*---that's a different case.\n\nEven if this doesn't happen---if the AI uses this newfound computing power at all effectively, its optimization efficiency will increase more quickly than before---just because the AI has *more* optimization power to apply to the task of increasing its own efficiency, thanks to the sudden bonanza of optimization resources.\n\nSo the *whole trajectory* can conceivably change, just from so simple and straightforward and unclever and uninteresting-seeming an act as eating the Internet. (Or renting a bigger cloud.)\n\nAgriculture changed the course of human history by supporting a larger population---and that was just a question of having more humans around, not individual humans having a brain a hundred times as large. This gets us into the whole issue of the returns on scaling individual brains not being anything like the returns on scaling the number of brains. A big-brained human has around four times the cranial volume of a chimpanzee, but four chimps ≠ one human. (And for that matter, sixty squirrels ≠ one chimp.) Software improvements here almost certainly completely dominate hardware, of course. But having a thousand scientists who collectively read all the papers in a field, and who talk to each other, is not like having one superscientist who has read all those papers and can correlate their contents directly using native cognitive processes of association, recognition, and abstraction. Having more humans talking to each other using low-bandwidth words cannot be expected to achieve returns similar to those from scaling component cognitive processes within a coherent cognitive system.\n\nThis, too, is an idiom outside human experience---we *have* to solve big problems using lots of humans, because there is no way to solve them using [one big]{.textsc} human. But it never occurs to anyone to substitute four chimps for one human; and only a certain very foolish kind of boss thinks you can substitute ten programmers with one year of experience for one programmer with ten years of experience.\n\n(Part of the general Culture of Chaos that praises emergence, and thinks evolution is smarter than human designers, also has a mythology of groups being inherently superior to individuals. But this is generally a matter of poor individual rationality, and various arcane group structures that are supposed to compensate, rather than an inherent fact about cognitive processes somehow *scaling better when chopped up into distinct brains*. If that were *literally* more efficient, evolution would have designed humans to have four chimpanzee heads that argued with each other. 
In the realm of AI, it seems much more straightforward to have a single cognitive process that lacks the emotional stubbornness to cling to its accustomed theories, and doesn't *need* to be argued out of it at gunpoint or replaced by a new generation of grad students. I'm not going to delve into this in detail for now, just warn you to be suspicious of this particular creed of the Culture of Chaos; it's not like they actually *observed* the relative performance of a hundred humans versus one [big]{.textsc} mind with a brain fifty times human size.)\n\nSo yes, there was a lot of software improvement involved---what we are seeing with the modern human brain size, is probably not so much the brain volume *required* to support the software improvement, but rather the *new evolutionary equilibrium* for brain size *given* the improved software.\n\nEven so---hominid brain size increased by a factor of five over the course of around five million years. You might want to think *very seriously* about the contrast between that idiom, and a successful AI being able to expand onto five thousand times as much hardware over the course of five minutes---when you are pondering possible hard takeoffs, and whether the AI trajectory ought to look similar to human experience.\n\nA subtler sort of hardware overhang, I suspect, is represented by modern CPUs having a 2 GHz *serial speed*, in contrast to neurons that spike a hundred times per second on a good day. The \"hundred-step rule\" in computational neuroscience is a rule of thumb that any postulated neural algorithm which runs in real time has to perform its job in less than one hundred *serial* steps one after the other.^[2](#AI-FOOM-Debatech36.html#enz.52)^[]{#AI-FOOM-Debatech36.html#enz.52.backref} We do not understand how to efficiently use the computer hardware we have now to do intelligent thinking. But the much-vaunted \"massive parallelism\" of the human brain is, I suspect, [mostly cache lookups](http://lesswrong.com/lw/k5/cached_thoughts/) to make up for the sheer awkwardness of the brain's *serial* slowness---if your computer ran at 200 Hz, you'd have to resort to all sorts of absurdly massive parallelism to get anything done in real time. I suspect that, if *correctly designed*, a midsize computer cluster would be able to get high-grade thinking done at a serial speed much faster than human, even if the total parallel computing power was less.\n\nSo that's another kind of overhang: because our computing hardware has run so far ahead of AI *theory*, we have incredibly fast computers we don't know how to use *for thinking*; getting AI *right* could produce a huge, discontinuous jolt, as the speed of high-grade thought on this planet suddenly dropped into computer time.\n\nA still subtler kind of overhang would be represented by human [failure to use our gathered experimental data efficiently](http://lesswrong.com/lw/qk/that_alien_message/).\n\nOn to the topic of insight, another potential source of discontinuity: The course of hominid evolution was driven by evolution's neighborhood search; if the evolution of the brain accelerated to some degree, this was probably due to existing adaptations creating a greater number of possibilities for further adaptations. 
(But it couldn't accelerate past a certain point, because evolution is limited in how much selection pressure it can apply---if someone succeeds in breeding due to adaptation A, that's less variance left over for whether or not they succeed in breeding due to adaptation B.)\n\nBut all this is searching the raw space of genes. Human design intelligence, or sufficiently sophisticated AI design intelligence, isn't like that. One might even be tempted to make up a completely different curve out of thin air---like, intelligence will take all the easy wins first, and then be left with only higher-hanging fruit, while increasing complexity will defeat the ability of the designer to make changes. So where blind evolution accelerated, intelligent design will run into diminishing returns and grind to a halt. And as long as you're making up fairy tales, you might as well further add that the law of diminishing returns will be exactly right, and have bumps and rough patches in exactly the right places, to produce a smooth gentle takeoff even after recursion and various hardware transitions are factored in . . . One also wonders why the story about \"intelligence taking easy wins first in designing brains\" *tops out* at or before human-level brains, rather than going *a long way beyond human* before topping out. But one suspects that if you tell *that* story, there's no point in inventing a law of diminishing returns to begin with.\n\n(Ultimately, if the character of physical law is anything like our current laws of physics, there will be limits to what you can do on finite hardware, and limits to how much hardware you can assemble in finite time, but if they are very *high* limits relative to human brains, it doesn't affect the basic prediction of hard takeoff, \"AI go FOOM.\")\n\nThe main thing I'll venture into actually expecting from adding \"insight\" to the mix, is that there'll be a discontinuity at the point where the AI *understands how to do AI theory*, the same way that human researchers try to do AI theory. An AI, to swallow its own optimization chain, must not just be able to rewrite its own source code; it must be able to, say, rewrite *Artificial Intelligence: A Modern Approach* (2nd Edition). An ability like this seems (untrustworthily, but I don't know what else to trust) like it ought to appear at around the same time that the architecture is at the level of, or approaching the level of, being able to handle what humans handle---being no shallower than an actual human, whatever its inexperience in various domains. It would produce further discontinuity at around that time.\n\nIn other words, when the AI becomes smart enough to *do AI theory*, that's when I expect it to fully swallow its own optimization chain and for the *real* FOOM to occur---though the AI might *reach* this point as part of a cascade that started at a more primitive level.\n\nAll these complications are why I don't believe we can *really* do any sort of math that will predict *quantitatively* the trajectory of a hard takeoff. You can make up models, but real life is going to include all sorts of discrete jumps, bottlenecks, bonanzas, insights---and the \"fold the curve in on itself\" paradigm of recursion is going to amplify even small roughnesses in the trajectory.\n\nSo I stick to qualitative predictions. \"AI go FOOM.\"\n\nTomorrow I hope to tackle locality, and a bestiary of some possible qualitative trajectories the AI might take given this analysis. 
Robin Hanson's summary of \"primitive AI fooms to sophisticated AI\" doesn't fully represent my views---that's just one entry in the bestiary, albeit a major one.\n\n[]{#AI-FOOM-Debatech36.html#likesection.51}\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/wf/hard_takeoff/) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech36.html#enz.51} [1](#AI-FOOM-Debatech36.html#enz.51.backref). []{#AI-FOOM-Debatech36.html#cite.0.Hanson.1998a}Robin Hanson, \"Long-Term Growth as a Sequence of Exponential Modes\" (Unpublished manuscript, 1998), last revised December 2000, .\n\n[]{#AI-FOOM-Debatech36.html#enz.52} [2](#AI-FOOM-Debatech36.html#enz.52.backref). []{#AI-FOOM-Debatech36.html#cite.0.Feldman.1982}J. A. Feldman and Dana H. Ballard, \"Connectionist Models and Their Properties,\" *Cognitive Science* 6, no. 3 (1982): 205--254, doi:[10.1207/s15516709cog0603_1](http://dx.doi.org/10.1207/s15516709cog0603_1).\n\n[]{#AI-FOOM-Debatech37.html}\n\n## []{#AI-FOOM-Debatech37.html#x41-}[Chapter 36]{.titlemark} Test Near, Apply Far {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [3 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nCompanies often ask me if prediction markets can forecast distant future topics. I tell them yes, but that is not the place to test any doubts about prediction markets. To vet or validate prediction markets, you want topics where there will be many similar forecasts over a short time, with other mechanisms making forecasts that can be compared.\n\nIf you came up with an account of the cognitive processes that allowed Newton or Einstein to make their great leaps of insight, you would want to look for where that, or related accounts, applied to more common insight situations. An account that only applied to a few extreme \"geniuses\" would be much harder to explore, since we know so little about those few extreme cases.\n\nIf you wanted to explain the vast voids we seem to see in the distant universe, and you came up with a theory of a new kind of matter that could fill that void, you would want to ask where nearby one might find or be able to create that new kind of matter. Only after confronting this matter theory with local data would you have much confidence in applying it to distant voids.\n\nIt is easy, way too easy, to generate new mechanisms, accounts, theories, and abstractions. To see if such things are *useful*, we need to vet them, and that is easiest \"nearby,\" where we know a lot. When we want to deal with or understand things \"far,\" where we know little, we have little choice other than to rely on mechanisms, theories, and concepts that have worked well near. Far is just the wrong place to try new things.\n\nThere are a bazillion possible abstractions we could apply to the world. For each abstraction, the question is not whether one *can* divide up the world that way, but whether it \"carves nature at its joints,\" giving *useful* insight not easily gained via other abstractions. 
We should be wary of inventing new abstractions just to make sense of things far; we should insist they first show their value nearby.\n\n[]{#AI-FOOM-Debatech37.html#likesection.52}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/test-near-apply.html#comment-518247842): Considering the historical case of the advent of human intelligence, how would you have wanted to handle it using only abstractions that could have been tested before human intelligence showed up?\n>\n> (This being one way of testing your abstraction about abstractions . . .)\n>\n> We recently had a cute little \"black swan\" in our financial markets. It wasn't really very black. But some people predicted it well enough to make money off it, and some people didn't. Do you think that someone could have triumphed using your advice here, with regards to that particular event which is now near to us? If so, how?\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/test-near-apply.html#comment-518247867): Eliezer, it is very hard to say what sort of other experience and evidence there would have been \"near\" hypothetical creatures who know of Earth history before humans, to guess if that evidence would have been enough to guide them to good abstractions to help them anticipate and describe the arrival of humans. For some possible creatures, they may well not have had enough to do a decent job.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/12/test-near-apply.html) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech38.html}\n\n## []{#AI-FOOM-Debatech38.html#x42-}[Chapter 37]{.titlemark} Permitted Possibilities and Locality {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [3 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n**Continuation of:** [Hard Takeoff](../Text/AI-FOOM-Debatech36.html#x40-)\n\nThe analysis given in the last two days permits more than one possible AI trajectory:\n\n1. [Programmers, smarter than evolution at finding tricks that work, but operating without fundamental insight or with only partial insight, create a mind that is dumber than the researchers but performs lower-quality operations much faster. This mind reaches k \\> 1, cascades up to the level of a very smart human, *itself* achieves insight into intelligence, and undergoes the really fast part of the FOOM, to superintelligence. This would be the major nightmare scenario for the origin of an unFriendly AI.]{#AI-FOOM-Debatech38.html#x42-41002x1}\n2. [Programmers operating with partial insight create a mind that performs a number of tasks very well, but can't really handle self-modification let alone AI theory. A mind like this might progress with something like smoothness, pushed along by the researchers rather than itself, even all the way up to average-human capability---not having the insight into its own workings to push itself any further. We also suppose that the mind is either already using huge amounts of available hardware, or scales *very* poorly, so it cannot go FOOM just as a result of adding a hundred times as much hardware. This scenario seems less likely to my eyes, but it is not *ruled out* by any effect I can see.]{#AI-FOOM-Debatech38.html#x42-41004x2}\n3. 
[Programmers operating with strong insight into intelligence directly create, along an efficient and planned pathway, a mind capable of modifying itself with deterministic precision---provably correct or provably noncatastrophic self-modifications. This is the only way I can see to achieve narrow enough targeting to create a Friendly AI. The \"natural\" trajectory of such an agent would be slowed by the requirements of precision, and sped up by the presence of insight; but because this is a Friendly AI, notions like \"You can't yet improve yourself this far, your goal system isn't verified enough\" would play a role.]{#AI-FOOM-Debatech38.html#x42-41006x3}\n\nSo these are some things that I think are permitted to happen, albeit that case (2) would count as a hit against me to some degree because it does seem unlikely.\n\nHere are some things that *shouldn't* happen, on my analysis:\n\n- An *ad hoc* self-modifying AI as in (1) undergoes a cycle of self-improvement, starting from stupidity, that carries it up to the level of a very smart human---and then stops, unable to progress any further. (The upward slope in this region is supposed to be very steep!)\n- A mostly non-self-modifying AI as in (2) is pushed by its programmers up to a roughly human level . . . then to the level of a very smart human . . . then to the level of a mild transhuman . . . but the mind still does not achieve insight into its own workings and still does not undergo an intelligence explosion---just continues to increase smoothly in intelligence from there.\n\nAnd I also don't think this is allowed: the \"scenario that Robin Hanson seems to think is the line-of-maximum-probability for AI as heard and summarized by Eliezer Yudkowsky\":\n\n- No one AI that does everything humans do, but rather a large, diverse population of AIs. These AIs have various *domain-specific* competencies that are \"human+ level\"---not just in the sense of Deep Blue beating Kasparov, but in the sense that, in these domains, the AIs seem to have good \"common sense\" and can, e.g., recognize, comprehend and handle situations that weren't in their original programming. But only in the special domains for which that AI was crafted/trained. Collectively, these AIs may be strictly more competent than any one human, but no individual AI is more competent than any one human.\n- Knowledge and even skills are widely traded in this economy of AI systems.\n- In concert, these AIs, and their human owners, and the economy that surrounds them, undergo a *collective* FOOM of self-improvement. No local agent is capable of doing all this work, only the collective system.\n- The FOOM's benefits are distributed through a whole global economy of trade partners and suppliers, including existing humans and corporations, though existing humans and corporations may form an increasingly small fraction of the New Economy.\n- This FOOM looks like an exponential curve of compound interest, like the modern world but with a substantially shorter doubling time.\n\nMostly, Robin seems to think that uploads will come first, but that's a whole 'nother story. So far as AI goes, this looks like Robin's maximum line of probability---and if I got this mostly wrong or all wrong, that's no surprise. Robin Hanson did the same to me when summarizing what he thought were my own positions. 
I have never thought, in prosecuting this Disagreement, that we were starting out with a mostly good understanding of what the Other was thinking; and this seems like an important thing to have always in mind.\n\nSo---bearing in mind that I may well be criticizing a straw misrepresentation, and that I know this full well, but I am just trying to guess my best---here's what I see as wrong with the elements of this scenario:\n\nThe abilities we call \"human\" are the final products of an [economy of mind](http://lesswrong.com/lw/vd/intelligence_in_economics/)---not in the sense that there are selfish agents in it, but in the sense that there are production lines; and I would even expect evolution to enforce something approaching fitness as a common unit of currency. (Enough selection pressure to create an adaptation from scratch should be enough to fine-tune the resource curves involved.) It's the production lines, though, that are the main point---that your brain has specialized parts and the specialized parts pass information around. All of this goes on behind the scenes, but it's what finally *adds up* to any *single* human ability.\n\nIn other words, trying to get humanlike performance in *just one* domain is divorcing a final product of that economy from all the work that stands behind it. It's like having a global economy that can *only* manufacture toasters, but not dishwashers or light bulbs. You can have something like Deep Blue that beats humans at chess in an inhuman, specialized way; but I don't think it would be easy to get humanish performance at, say, biology R&D, without a whole mind and architecture standing behind it that would also be able to accomplish other things. Tasks that draw on our cross-domain-ness, or our long-range real-world strategizing, or our ability to formulate new hypotheses, or our ability to use very high-level abstractions---I don't think that you would be able to replace a human in just that one job, without also having something that would be able to learn many different jobs.\n\nI think it is a fair analogy to the idea that you shouldn't see a global economy that can manufacture toasters but not manufacture anything else.\n\nThis is why I don't think we'll see a system of AIs that are diverse, individually highly specialized, and *only collectively* able to do anything a human can do.\n\nTrading cognitive content around between diverse AIs is more difficult and less likely than it might sound. Consider the field of AI as it works today. Is there *any* standard database of cognitive content that you buy off the shelf and plug into your amazing new system, whether it be a chess player or a new data-mining algorithm? If it's a chess-playing program, there are databases of stored games---but that's not the same as having databases of preprocessed cognitive content.\n\nSo far as I can tell, the diversity of cognitive architectures acts as a *tremendous* barrier to trading around cognitive content. If you have many AIs around that are all built on the same architecture by the same programmers, they might, *with a fair amount of work*, be able to pass around learned cognitive content. Even this is less trivial than it sounds. If two AIs both see an apple for the first time, and they both independently form concepts about that apple, and they both independently build some new cognitive content around those concepts, then their *thoughts* are effectively written in a different language. 
By seeing a single apple at the same time, they could identify a concept they both have in mind, and in this way build up a common language . . .\n\n. . . the point being that, even when two separated minds are running literally the same source code, it is still difficult for them to trade new knowledge *as raw cognitive content* without having a special language designed just for sharing knowledge.\n\nNow suppose the two AIs are built around different architectures.\n\nThe barrier this opposes to a true, cross-agent, literal \"economy of mind\" is so strong that, in the vast majority of AI applications you set out to write today, you will not bother to import any standardized preprocessed cognitive content. It will be easier for your AI application to start with some standard examples---databases of *that* sort of thing do exist, in some fields anyway---and *redo all the cognitive work of learning* on its own.\n\nThat's how things stand today.\n\nAnd I have to say that, looking over the diversity of architectures proposed at any AGI conference I've attended, it is very hard to imagine directly trading cognitive content between any two of them. It would be an immense amount of work just to set up a language in which they could communicate what they take to be facts about the world---never mind preprocessed cognitive content.\n\nThis is a force for *localization*: unless the condition I have just described changes drastically, it means that agents will be able to do their own cognitive labor, rather than needing to get their brain content manufactured elsewhere, or even being *able* to get their brain content manufactured elsewhere. I can imagine there being an exception to this for *non*-diverse agents that are deliberately designed to carry out this kind of trading within their code-clade. (And in the long run, difficulties of translation seem less likely to stop superintelligences.)\n\nBut in *today's* world, it seems to be the rule that when you write a new AI program, you can sometimes get preprocessed raw data, but you will not buy any preprocessed cognitive content---the internal content of your program will come from within your program.\n\nAnd it actually does seem to me that AI would have to get *very* sophisticated before it got over the \"hump\" of increased sophistication making sharing harder instead of easier. I'm not sure this is pre-takeoff sophistication we're talking about, here. And the cheaper computing power is, the easier it is to just share the *data* and do the *learning* on your own.\n\nAgain---in today's world, sharing of cognitive content between diverse AIs doesn't happen, even though there are lots of machine learning algorithms out there doing various jobs. You could say things would happen differently in the future, but it'd be up to you to make that case.\n\nUnderstanding the difficulty of interfacing diverse AIs is the next step toward understanding why it's likely to be a *single coherent* cognitive system that goes FOOM via recursive self-improvement. 
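A minimal toy sketch of this barrier (illustrative only, not from the original post; it assumes nothing beyond Python and NumPy, and the two architectures are made-up two-layer networks of different hidden widths): the parameters one network has learned cannot even be loaded into the other, while the shared raw examples can simply be relearned locally.

```python
# Illustrative sketch: two toy networks with different architectures
# (hidden widths 8 vs. 13) learn the same task. Transplanting learned
# parameters across them fails outright, but sharing the raw examples
# and redoing the learning locally works fine.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))        # shared raw examples
y = (X[:, 0] * X[:, 1] > 0).astype(float)    # shared labels

def init_net(hidden):
    return {"W1": rng.normal(0, 0.5, (2, hidden)), "b1": np.zeros(hidden),
            "W2": rng.normal(0, 0.5, (hidden, 1)), "b2": np.zeros(1)}

def forward(net, X):
    h = np.tanh(X @ net["W1"] + net["b1"])
    return 1 / (1 + np.exp(-(h @ net["W2"] + net["b2"]).ravel()))

def train(net, X, y, lr=0.5, steps=2000):
    for _ in range(steps):
        h = np.tanh(X @ net["W1"] + net["b1"])
        p = 1 / (1 + np.exp(-(h @ net["W2"] + net["b2"]).ravel()))
        d_logit = (p - y)[:, None] / len(y)   # sigmoid cross-entropy gradient
        net["W2"] -= lr * h.T @ d_logit
        net["b2"] -= lr * d_logit.sum(axis=0)
        d_h = (d_logit @ net["W2"].T) * (1 - h**2)
        net["W1"] -= lr * X.T @ d_h
        net["b1"] -= lr * d_h.sum(axis=0)
    return net

net_a = train(init_net(hidden=8), X, y)      # "architecture A", trained
net_b = init_net(hidden=13)                  # "architecture B", untrained

# Attempt to trade cognitive content directly: transplant A's learned weights.
try:
    forward(dict(net_b, W1=net_a["W1"]), X)
except ValueError as err:
    print("direct weight transfer fails:", err)

# Trading the raw data instead, and relearning locally, works.
net_b = train(net_b, X, y)

def accuracy(net):
    return float(np.mean((forward(net, X) > 0.5) == y))

print("A accuracy:", accuracy(net_a), "| B accuracy after relearning:", accuracy(net_b))
```

The only portable thing in the sketch is the raw data; the learned parameters are meaningful only relative to the architecture that produced them, which is the weak, same-programmer version of the translation problem described above.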
The same sort of barriers that apply to trading direct cognitive content would also apply to trading changes in cognitive source code.\n\nIt's a whole lot easier to modify the source code in the interior of your own mind than to take that modification and sell it to a friend who happens to be written on different source code.\n\nCertain kinds of abstract insights would be more tradeable, among sufficiently sophisticated minds; and the major insights might be well worth selling---like, if you invented a new *general* algorithm at some subtask that many minds perform. But if you again look at the modern state of the field, then you find that it is only a few algorithms that get any sort of general uptake.\n\nAnd if you hypothesize minds that understand these algorithms, and the improvements to them, and what these algorithms are for, and how to implement and engineer them---then these are already very sophisticated minds; at this point, they are AIs that can do their own AI theory. So the hard takeoff has to have not already started, yet, at this point where there are many AIs around that can do AI theory. If they can't do AI theory, diverse AIs are likely to experience great difficulties trading code improvements among themselves.\n\nThis is another localizing force. It means that the improvements you make to yourself, and the compound interest earned on those improvements, are likely to stay local.\n\nIf the scenario with an AI takeoff is anything at all like the modern world in which all the attempted AGI projects have completely incommensurable architectures, then any self-improvements will definitely stay put, not spread.\n\nBut suppose that the situation *did* change drastically from today, and that you had a community of diverse AIs which were sophisticated enough to share cognitive content, code changes, and even insights. And suppose even that this is true at the *start* of the FOOM---that is, the community of diverse AIs got all the way up to that level, without yet using a FOOM or starting a FOOM at a time when it would still be localized.\n\nWe can even suppose that most of the code improvements, algorithmic insights, and cognitive content driving any particular AI are coming from outside that AI---sold or shared---so that the improvements the AI makes to *itself* do not dominate its total velocity.\n\nFine. The *humans* are not out of the woods.\n\nEven if we're talking about uploads, it will be immensely more difficult to apply any of the algorithmic insights that are tradeable between AIs to the undocumented human brain that is a huge mass of spaghetti code, that was never designed to be upgraded, that is not end-user-modifiable, that is not hot-swappable, that is written for a completely different architecture than what runs efficiently on modern processors . . .\n\nAnd biological humans? Their neurons just go on doing whatever neurons do, at one hundred cycles per second (tops).\n\nSo this FOOM that follows from recursive self-improvement, the cascade effect of using your increased intelligence to rewrite your code and make yourself even smarter---\n\nThe barriers to sharing cognitive improvements among diversely designed AIs are large; the barriers to sharing with uploaded humans are incredibly huge; the barrier to sharing with biological humans is essentially absolute. 
(Barring a \\[benevolent\\] superintelligence with nanotechnology, but if one of those is around, you have already won.)\n\nIn this hypothetical global economy of mind, the humans are like a country that no one can invest in, that cannot adopt any of the new technologies coming down the line.\n\nI once observed that Ricardo's Law of Comparative Advantage is the theorem that unemployment should not exist. The gotcha being that if someone is sufficiently unreliable, there is a cost to you to train them, a cost to stand over their shoulders and monitor them, a cost to check their results for accuracy---the existence of unemployment in our world is a combination of transaction costs like taxes, regulatory barriers like minimum wage, and above all, *lack of trust*. There are a dozen things I would pay someone else to do for me---if I wasn't paying taxes on the transaction, and if I could trust a stranger as much as I trust myself (both in terms of their honesty and of acceptable quality of output). Heck, I'd as soon have some formerly unemployed person walk in and spoon food into my mouth while I kept on typing at the computer---if there were no transaction costs, and I trusted them.\n\nIf high-quality thought drops into a speed closer to computer time by a few orders of magnitude, no one is going to take a subjective year to explain to a biological human an idea that they will be barely able to grasp, in exchange for an even slower guess at an answer that is probably going to be wrong anyway.\n\nEven *uploads* could easily end up doomed by this effect, not just because of the immense overhead cost and slowdown of running their minds, but because of the continuing error-proneness of the human architecture. Who's going to trust a giant messy undocumented neural network, any more than you'd run right out and hire some unemployed guy off the street to come into your house and do your cooking?\n\nThis FOOM leaves humans behind . . .\n\n. . . unless you go the route of Friendly AI, and make a superintelligence that simply *wants* to help humans, not for any economic value that humans provide to it, but because that is its nature.\n\nAnd just to be clear on something---which really should be clear by now, from all my other writing, but maybe you're just wandering in---it's not that having squishy things running around on two legs is the ultimate height of existence. But if you roll up a random AI with a random utility function, it just ends up turning the universe into patterns we would not find very eudaimonic---turning the galaxies into paperclips. If you try a haphazard attempt at making a \"nice\" AI, the sort of not-even-half-baked theories I see people coming up with on the spot and occasionally writing whole books about, like using reinforcement learning on pictures of smiling humans to train the AI to value happiness (yes, this was a book) then the AI just transforms the galaxy into tiny molecular smileyfaces . . .\n\nIt's not some small, mean desire to survive for myself, at the price of greater possible futures, that motivates me. The thing is---those greater possible futures, they don't happen automatically. 
There are stakes on the table that are so much an invisible background of your existence that it would never occur to you they could be lost; and these things will be shattered by default, if not specifically preserved.\n\nAnd as for the idea that the whole thing would happen slowly enough for humans to have plenty of time to react to things---a smooth exponential shifted into a shorter doubling time---of that, I spoke yesterday. Progress seems to be exponential now, more or less, or at least accelerating, and that's with constant human brains. If you take a nonrecursive accelerating function and fold it in on itself, you are going to get superexponential progress. \"If computing power doubles every eighteen months, what happens when computers are doing the research\" should not just be a faster doubling time. (Though, that said, on any sufficiently short timescale progress might well *locally* approximate an exponential because investments will shift in such fashion that the marginal returns on investment balance, even in the interior of a single mind; interest rates consistent over a timespan imply smooth exponential growth over that timespan.)\n\nYou can't count on warning, or time to react. If an accident sends a sphere of plutonium, not critical, but *prompt critical*, neutron output can double in a tenth of a second even with k = 1.0006. It can deliver a killing dose of radiation or blow the top off a nuclear reactor before you have time to draw a breath. Computers, like neutrons, already run on a timescale much faster than human thinking. We are already past the world where we can definitely count on having time to react.\n\nWhen you move into the transhuman realm, you also move into the realm of adult problems. To wield great power carries a price in great precision. You can build a nuclear reactor but you can't ad-lib it. On the problems of this scale, if you want the universe to end up a worthwhile place, you can't just throw things into the air and trust to luck and later correction. That might work in childhood, but not on adult problems where the price of one mistake can be instant death.\n\nMaking it into the future is an adult problem. That's not a death sentence. I think. It's not the *inevitable* end of the world. I hope. But if you want human*kind* to survive, and the future to be a worthwhile place, then this will take careful crafting of the first superintelligence---not just letting economics or *whatever* take its easy, natural course. The easy, natural course is fatal---not just to ourselves but to all our hopes.\n\nThat, itself, is natural. It is only to be expected. To hit a narrow target you must aim; to reach a good destination you must steer; to win, you must make an extra-ordinary effort.\n\n[]{#AI-FOOM-Debatech38.html#likesection.53}\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/wg/permitted_possibilities_locality/) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech39.html}\n\n## []{#AI-FOOM-Debatech39.html#x43-}[Chapter 38]{.titlemark} Underconstrained Abstractions {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [4 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n**Followup to:** [The Weak Inside View](../Text/AI-FOOM-Debatech6.html#x9-80005)\n\n[Saith Robin](../Text/AI-FOOM-Debatech37.html#x41-):\n\n> It is easy, way too easy, to generate new mechanisms, accounts, theories, and abstractions. 
To see if such things are *useful*, we need to vet them, and that is easiest \"nearby,\" where we know a lot. When we want to deal with or understand things \"far,\" where we know little, we have little choice other than to rely on mechanisms, theories, and concepts that have worked well near. Far is just the wrong place to try new things.\n\nWell . . . I understand why one would have that reaction. But I'm not sure we can *really* get away with that.\n\nWhen possible, I try to talk in concepts that can be verified with respect to existing history. When I talk about natural selection not running into a law of diminishing returns on genetic complexity or brain size, I'm talking about something that we can try to verify by looking at the capabilities of other organisms with brains big and small. When I talk about the boundaries to sharing cognitive content between AI programs, you can look at the field of AI the way it works today and see that, lo and behold, there isn't a lot of cognitive content shared.\n\nBut in my book this is just *one* trick in a *library* of methodologies for dealing with the Future, which is, in general, a hard thing to predict.\n\nLet's say that instead of using my complicated-sounding disjunction (many *different* reasons why the growth trajectory might contain an upward cliff, which don't *all* have to be true), I instead staked my *whole* story on the critical threshold of human intelligence. Saying, \"Look how sharp the slope is here!\"---well, it would *sound* like a simpler story. It would be closer to fitting on a T-shirt. And by talking about *just* that one abstraction and no others, I could make it sound like I was dealing in verified historical facts---humanity's evolutionary history is something that has already happened.\n\nBut speaking of an abstraction being \"verified\" by previous history is a tricky thing. There is this little problem of *underconstraint*---of there being more than one possible abstraction that the data \"verifies.\"\n\nIn \"[Cascades, Cycles, Insight](../Text/AI-FOOM-Debatech21.html#x25-)\" I said that economics does not seem to me to deal much in the origins of novel knowledge and novel designs, and said, \"If I underestimate your power and merely parody your field, by all means inform me what kind of economic study has been done of such things.\" This challenge was answered by [comments](../Text/AI-FOOM-Debatech21.html#x25-) directing me to some papers on \"endogenous growth,\" which happens to be the name of theories that don't take productivity improvements as exogenous forces.\n\n[]{#AI-FOOM-Debatech39.html#likesection.54} I've looked at some literature on endogenous growth. And don't get me wrong, it's probably not too bad as economics. However, the seminal literature talks about ideas being generated by combining other ideas, so that if you've got N ideas already and you're combining them three at a time, that's a potential N! / ((3!)(N - 3)!) new ideas to explore. And then goes on to note that, in this case, there will be vastly more ideas than anyone can explore, so that the rate at which ideas are exploited will depend more on a paucity of explorers than a paucity of ideas.\n\nWell . . . first of all, the notion that \"ideas are generated by combining other ideas N at a time\" is not exactly an amazing AI theory; it is an economist looking at, essentially, the whole problem of AI, and trying to solve it in five seconds or less. It's not as if any experiment was performed to actually watch ideas recombining. 
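For a sense of scale on that recombination count, a minimal sketch (illustrative only; the three-at-a-time rule is just the seminal model's stylized assumption, and the explorer figure is invented purely for comparison):

```python
# Quick arithmetic on the recombinant-ideas count above: N existing ideas,
# combined three at a time, give C(N, 3) = N! / (3!(N - 3)!) candidate new
# ideas. The explorer count below is made up purely to give a sense of scale.
from math import comb

explorers = 10_000_000
for n in (1_000, 10_000, 100_000):
    per_explorer = comb(n, 3) / explorers
    print(f"N = {n:>7,}: C(N, 3) = {comb(n, 3):>20,}  (~{per_explorer:,.0f} per explorer)")
```

However the quality of those combinations is distributed, the raw count swamps any plausible number of explorers once N is at all large, which is the only feature of the arithmetic the growth story actually relies on.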
Try to build an AI around this theory and you will find out in very short order how useless it is as an account of where ideas come from . . .\n\nBut more importantly, if the only proposition you actually *use* in your theory is that there are more ideas than people to exploit them, then this is the only proposition that can even be *partially* verified by testing your theory.\n\nEven if a recombinant growth theory can be fit to the data, then the historical data still underconstrains the *many* possible abstractions that might describe the number of possible ideas available---any hypothesis that has around \"more ideas than people to exploit them\" will fit the same data equally well. You should simply say, \"I assume there are more ideas than people to exploit them,\" not go so far into mathematical detail as to talk about N choose 3 ideas. It's not that the dangling math here is underconstrained by the *previous* data, but that you're not even using it *going forward*.\n\n(And does it even fit the data? I have friends in venture capital who would laugh like hell at the notion that there's an unlimited number of really good ideas out there. Some kind of Gaussian or power-law or something distribution for the goodness of available ideas seems more in order . . . I don't object to \"endogenous growth\" simplifying things for the sake of having one simplified abstraction and seeing if it fits the data well; we all have to do that. Claiming that the underlying math doesn't *just* let you build a useful model, but *also* has a fairly direct correspondence to reality, ought to be a whole 'nother story, in economics---or so it seems to me.)\n\n(If I merely misinterpret the endogenous growth literature or underestimate its sophistication, by all means correct me.)\n\nThe further away you get from highly regular things like atoms, and the closer you get to surface phenomena that are the final products of many moving parts, the more history underconstrains the abstractions that you use. This is part of what makes futurism difficult. If there were obviously only one story that fit the data, who would bother to use anything else?\n\nIs Moore's Law a story about the increase in computing power *over time*---the number of transistors on a chip as a function of how far the planets have spun in their orbits, or how many times a light wave emitted from a cesium atom has changed phase?\n\nOr does the same data equally verify a hypothesis about exponential increases in investment in manufacturing facilities and R&D, with an even higher exponent, showing a law of diminishing returns?\n\nOr is Moore's Law showing the increase in computing power as a function of some kind of optimization pressure applied by human researchers, themselves thinking at a certain rate?\n\n[]{#AI-FOOM-Debatech39.html#likesection.55} That last one might seem hard to verify, since we've never watched what happens when a chimpanzee tries to work in a chip R&D lab. But on some raw, elemental level---would the history of the world *really* be just the same, proceeding on *just exactly* the same timeline as the planets move in their orbits, if, for these last fifty years, the researchers themselves had been running on the latest generation of computer chip at any given point? 
That sounds to me even sillier than having a financial model in which there's no way to ask what happens if real estate prices go down.\n\nAnd then, when you apply the abstraction going forward, there's the question of whether there's more than one way to apply it---which is one reason why a lot of futurists tend to dwell in great gory detail on the past events that seem to support their abstractions, but just *assume* a single application forward.\n\nE.g., Moravec in '88, spending a lot of time talking about how much \"computing power\" the human brain seems to use---but much less time talking about whether an AI would use the same amount of computing power, or whether using Moore's Law to extrapolate the first supercomputer of this size is the right way to time the arrival of AI. (Moravec thought we were supposed to have AI around *now*, based on his calculations---and he *under*estimated the size of the supercomputers we'd actually have in 2008.^[1](#AI-FOOM-Debatech39.html#enz.53)^[]{#AI-FOOM-Debatech39.html#enz.53.backref} )\n\nThat's another part of what makes futurism difficult---after you've told your story about the past, even if it seems like an abstraction that can be \"verified\" with respect to the past (but what if you overlooked an alternative story for the same evidence?) that often leaves a lot of slack with regards to exactly what will happen with respect to that abstraction, going forward.\n\nSo if it's not as simple as *just* using the one trick of finding abstractions you can easily verify on available data . . .\n\n. . . what are some other tricks to use?\n\n[]{#AI-FOOM-Debatech39.html#likesection.56}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://lesswrong.com/lw/wh/underconstrained_abstractions/pdk): So what exactly are you concluding from the fact that a seminal model has some unrealistic aspects, and that the connection between models and data in this field is not direct? That this field is useless as a source of abstractions? That it is no more useful than any other source of abstractions? That your abstractions are just as good?\n\n> [Robin Hanson](http://lesswrong.com/lw/wh/underconstrained_abstractions/pdl): Eliezer, is there some existing literature that has found \"natural selection not running into a law of diminishing returns on genetic complexity or brain size,\" or are these new results of yours? These would seem to me quite publishable, though journals would probably want to see a bit more analysis than you have shown us.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/wh/underconstrained_abstractions/pdn): Robin, for some odd reason, it seems that a lot of fields in a lot of areas just analyze the abstractions they need for their own business, rather than the ones that you would need to analyze a self-improving AI.\n>\n> I don't know if anyone has previously asked whether natural selection runs into a law of diminishing returns. But I observe that the human brain is only four times as large as a chimp brain, not a thousand times as large. And that most of the architecture seems to be the same; but I'm not deep enough into that field to know whether someone has tried to determine whether there are a lot more genes involved. 
I do know that brain-related genes were under stronger positive selection in the hominid line, but not so much stronger as to imply that, e.g., a thousand times as much selection pressure went into producing human brains from chimp brains as went into producing chimp brains in the first place. This is good enough to carry my point.\n>\n> I'm not picking on endogenous growth, just using it as an example. I wouldn't be at all surprised to find that it's a fine theory. It's just that, so far as I can tell, there's some math tacked on that isn't actually used for anything, but provides a causal \"good story\" that doesn't actually sound all that good if you happen to study idea generation on a more direct basis. I'm just using it to make the point---it's not enough for an abstraction to fit the data, to be \"verified.\" One should actually be aware of how the data is *constraining* the abstraction. The recombinant growth notion is an example of an abstraction that fits, but isn't constrained. And this is a general problem in futurism.\n>\n> If you're going to start criticizing the strength of abstractions, you should criticize your own abstractions as well. How constrained are they by the data, really? Is there more than one reasonable abstraction that fits the same data?\n>\n> Talking about what a field uses as \"standard\" doesn't seem like a satisfying response. Leaving aside that this is also the plea of those whose financial models don't permit real estate prices to go down---\"it's industry standard, everyone is doing it\"---what's standard in one field may not be standard in another, and you should be careful when turning an old standard to a new purpose. Sticking with standard endogenous growth models would be one matter if you wanted to just look at a human economy investing a usual fraction of money in R&D, and another matter entirely if your real interest and major concern was how ideas scale *in principle*, for the sake of doing new calculations on what happens when you can buy research more cheaply.\n>\n> There's no free lunch in futurism---no simple rule you can follow to make sure that your own preferred abstractions will automatically come out on top.\n\n> [Robin Hanson](http://lesswrong.com/lw/wh/underconstrained_abstractions/pds): Eliezer, the factor of four between human and chimp brains seems to be far from sufficient to show that natural selection doesn't hit diminishing returns. In general I'm complaining that you mainly seem to ask us to believe your own new unvetted theories and abstractions, while I try when possible to rely on abstractions developed in fields of research (e.g., growth theory and research policy) where hundreds of researchers have worked full-time for decades to make and vet abstractions, confronting them with each other and data. You say your new approaches are needed because this topic area is far from previous ones, and I say [test near, apply far](../Text/AI-FOOM-Debatech37.html#x41-); there is no free lunch in vetting; unvetted abstractions cannot be trusted just because it would be convenient to trust them. Also, note you keep talking about \"verify,\" a very high standard, whereas I talked about the lower standards of \"vet and validate.\"\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/wh/underconstrained_abstractions/pdt): Robin, suppose that 1970 was the year when it became possible to run a human-equivalent researcher in real time using the computers of that year. 
Would the further progress of Moore's Law have been different from that in our own world, relative to sidereal time? Which abstractions are you using to answer this question? Have they been vetted and validated by hundreds of researchers?\n\n> [Robin Hanson](http://lesswrong.com/lw/wh/underconstrained_abstractions/pdu): Eliezer, my \"[Economic Growth Given Machine Intelligence](http://hanson.gmu.edu/aigrow.pdf)\"^[2](#AI-FOOM-Debatech39.html#enz.54)^[]{#AI-FOOM-Debatech39.html#enz.54.backref} *does* use one of the simplest endogenous growth models to explore how Moore's Law changes with computer-based workers. It is an early and crude attempt, but it is the sort of approach I think promising.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/wh/underconstrained_abstractions/pdx): Robin, I just read through that paper. Unless I missed something, you do not discuss, or even mention as a possibility, the effect of having around minds that are *faster* than human. You're just making a supply of em labor *cheaper* over time due to Moore's Law *treated as an exogenous growth factor*. Do you see why I might not think that this model was *even remotely on the right track*?\n>\n> So . . . to what degree would you call the abstractions in your model \"standard\" and \"vetted\"?\n>\n> How many new assumptions, exactly, are fatal? How many new terms are you allowed to introduce into an old equation before it becomes \"unvetted,\" a \"new abstraction\"?\n>\n> And if I devised a model that was no *more* different from the standard---departed by no *more* additional assumptions---than this one, which described the effect of faster researchers, would it be just as good, in your eyes?\n>\n> Because there's a very simple and obvious model of what happens when your researchers obey Moore's Law, which makes even fewer new assumptions, and adds fewer terms to the equations . . .\n>\n> You understand that if we're to have a standard that excludes some new ideas as being too easy to make up, then---even if we grant this standard---it's very important to ensure that standard is being applied *evenhandedly*, and not just *selectively* to exclude models that arrive at the wrong conclusions, because only in the latter case does it seem \"obvious\" that the new model is \"unvetted.\" Do you *know* the criterion---can you say it aloud for all to hear---that you use to determine whether a model is based on vetted abstractions?\n\n> [Robin Hanson](http://lesswrong.com/lw/wh/underconstrained_abstractions/pe0): . . . Eliezer, the simplest standard model of endogenous growth is \"learning by doing,\" where productivity increases with quantity of practice. That is the approach I tried in my paper. Also, while economists have many abstractions for modeling details of labor teams and labor markets, our standard is that the simplest versions should be of just a single aggregate quantity of labor. This one parameter of course implicitly combines the number of workers, the number of hours each works, how fast each thinks, how well trained they are, etc. If you instead have a one-parameter model that only considers how fast each worker thinks, you must be implicitly assuming all these other contributions stay constant. 
When you have only a single parameter for a sector in a model, it is best if that single parameter is an aggregate intended to describe that entire sector, rather than a parameter of one aspect of that sector.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/wh/underconstrained_abstractions/pe1): If one woman can have a baby in nine months, nine women can have a baby in one month? Having a hundred times as many people does not seem to scale even close to the same way as the effect of working for a hundred times as many years. This is a thoroughly vetted truth in the field of software management.\n>\n> In science, time scales as the cycle of picking the best ideas in each generation and building on them; population would probably scale more like the right end of the curve generating what will be the best ideas of that generation.\n>\n> Suppose Moore's Law to be endogenous in research. If I have new research-running CPUs with a hundred times the speed, I can use that to run the same number of researchers a hundred times as fast, or I can use it to run a hundred times as many researchers, or any mix thereof which I choose. I will choose the mix that maximizes my speed, of course. So the effect has to be at *least* as strong as speeding up time by a factor of a hundred. If you want to use a labor model that gives results stronger than that, go ahead . . .\n\n> [Robin Hanson](http://lesswrong.com/lw/wh/underconstrained_abstractions/pe5): Eliezer, it would be reasonable to have a model where the research sector of labor had a different function for how aggregate quantity of labor varied with the speed of the workers. . . .\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/wh/underconstrained_abstractions/) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech39.html#enz.53} [1](#AI-FOOM-Debatech39.html#enz.53.backref). Moravec, [*Mind Children*](../Text/AI-FOOM-Debatech35.html#cite.0.Moravec.1988).\n\n[]{#AI-FOOM-Debatech39.html#enz.54} [2](#AI-FOOM-Debatech39.html#enz.54.backref). []{#AI-FOOM-Debatech39.html#cite.0.Hanson.1998c}Robin Hanson, \"Economic Growth Given Machine Intelligence\" (Unpublished manuscript, 1998), accessed May 15, 2013, .\n\n[]{#AI-FOOM-Debatech40.html}\n\n## []{#AI-FOOM-Debatech40.html#x44-}[Chapter 39]{.titlemark} Beware Hockey-Stick Plans {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [4 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nEliezer [yesterday](http://lesswrong.com/lw/wf/hard_takeoff/pcs):\n\n> So really, the whole hard takeoff analysis of \"flatline or FOOM\" just ends up saying, \"the AI will not hit the human timescale keyhole.\" From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM. When you look at it that way, it's not so radical a prediction, is it?\n\nDot-com business plans used to have infamous \"hockey-stick\" market projections, a slow start that soon \"fooms\" into the stratosphere. From \"[How to Make Your Business Plan the Perfect Pitch](http://money.cnn.com/magazines/business2/business2_archive/2005/09/01//)\":\n\n> Keep your market-size projections conservative and defend whatever numbers you provide. If you're in the very early stages, most likely you can't calculate an accurate market size anyway. Just admit that. 
Tossing out ridiculous hockey-stick estimates will only undermine the credibility your plan has generated up to this point.^[1](#AI-FOOM-Debatech40.html#enz.55)^[]{#AI-FOOM-Debatech40.html#enz.55.backref}\n\nImagine a business trying to justify its hockey-stick forecast:\n\n> We analyzed a great many models of product demand, considering a wide range of possible structures and parameter values (assuming demand never shrinks, and never gets larger than world product). We found that almost all these models fell into two classes: slow cases where demand grew much slower than the interest rate, and fast cases where it grew much faster than the interest rate. In the slow class we basically lose most of our million-dollar investment, but in the fast class we soon have profits of billions. So in expected value terms, our venture is a great investment, even if there is only a 0.1% chance the true model falls in this fast class.\n\nWhat is wrong with this argument? It is that we have seen very few million-dollar investments ever give billions in profits. Nations and species can also have very complex dynamics, especially when embedded in economies and ecosystems, but few ever grow a thousandfold, or have long stretches of accelerating growth. And the vast silent universe also suggests explosive growth is rare. So we are rightly skeptical about hockey-stick forecasts, even if they in some sense occupy half of an abstract model space.\n\nEliezer [seems impressed](../Text/AI-FOOM-Debatech34.html#x38-) that he can think of many ways in which AI growth could be \"recursive,\" i.e., where all else equal one kind of growth makes it easier, rather than harder, to grow in other ways. But standard growth theory has many situations like this. For example, rising populations have more people to develop innovations of all sorts; lower transportation costs allow more scale economies over larger integrated regions for many industries; tougher equipment allows more kinds of places to be farmed, mined and colonized; and lower info storage costs allow more kinds of business processes to be studied, tracked, and rewarded. And note that new ventures rarely lack for coherent stories to justify their hockey-stick forecasts.\n\nThe strongest data suggesting that accelerating growth is possible for more than a short while is the overall accelerating growth seen in human history. But since that acceleration has actually been quite discontinuous, concentrated in three sudden growth-rate jumps, I'd look more for sudden jumps than continuous acceleration in future growth as well. And unless new info sharing barriers are closer to the human-chimp barrier than to the farming and industry barriers, I'd also expect worldwide rather than local jumps. (More to come on locality.)\n\n[]{#AI-FOOM-Debatech40.html#likesection.57}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/beware-hockey-s.html#comment-518244276): The vast majority of AIs *won't* hockey-stick. In fact, creating a good AI design appears to be even harder than creating Microsoft's business plan.\n>\n> But it would seem that, in fact, some companies do successfully create really high demand for their products. That is, the hockey-stick projection comes true in some cases. So it can't be the case that there's a universal law of diminishing returns that would prevent Microsoft or Google from existing---no matter how many dot-com companies made stupid claims. 
Reversed stupidity is not intelligence.\n>\n> If everyone wants to *claim* they'll get the hockey-stick, that's not too surprising. Lots of people want to claim they've got the True AI Design, too, but that doesn't make the problem of intelligence any more intrinsically difficult; it is what it is.\n>\n> Human economies have many kinds of diminishing returns stemming from poor incentives, organizational scaling, regulatory interference, increased taxation when things seem to be going well enough to get away with it, etc., which would not plausibly carry over to a single mind. What argument is there for *fundamentally* diminishing returns?\n>\n> And the basic extrapolation from Moore's Law to \"Moore's Law when computers are doing the research\" just doesn't seem like something you could acceptably rely on. *Recursion* is not the same as *cascades*. This is not just that one thing leads to another. What was once a protected level exerting a constant pressure will putatively have the output pipe connected straight into it. The very nature of the curve should change, like the jump from owning one bond that makes regular payments to reinvesting the payments.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/beware-hockey-s.html#comment-518244507): I'm not saying nothing ever explodes; I'm saying the mere ability to find models wherein an explosion happens says little about if it will actually happen.\n>\n> Eliezer, grabbing low-hanging fruit first is a very fundamental cause of diminishing returns. You don't seem to accept my description of \"recursion\" as \"where all else equal one kind of growth makes it easier, rather than harder, to grow in other ways.\" Can you offer a precise but differing definition? . . .\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/beware-hockey-s.html#comment-518244528): A \"recursive\" version of a scenario differs from a \"nonrecursive\" one in that there is a new feedback loop, connecting the final output of a chain of one or more optimizations to the design and structural state of an optimization process close to the start of the chain.\n>\n> E.g., instead of evolution making minds, there are minds making minds.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/beware-hockey-s.html#comment-518244553): Eliezer, but in my \"recursion\" examples there are new feedback loops. For example, before transportation tech starts changing, the scale of interaction is limited, but after it starts changing interaction scales increase, allowing a more specialized economy, including more specialized transportation, which allows transportation tech to better evolve.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/12/beware-hockey-s.html) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech40.html#enz.55} [1](#AI-FOOM-Debatech40.html#enz.55.backref). []{#AI-FOOM-Debatech40.html#cite.0.Copeland.2005}Michael V. 
Copeland, \"How to Make Your Business Plan the Perfect Pitch,\" *Business 2.0*, September 1, 2005, />.\n\n[]{#AI-FOOM-Debatech41.html}\n\n## []{#AI-FOOM-Debatech41.html#x45-}[Chapter 40]{.titlemark} Evolved Desires {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [5 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nTo a first approximation, the future will either be a *singleton*, a single integrated power choosing the future of everything, or it will be *competitive*, with conflicting powers each choosing how to perpetuate themselves. Selection effects apply robustly to competition scenarios; some perpetuation strategies will tend to dominate the future. To help us choose between a singleton and competition, and between competitive variations, we can analyze selection effects to understand competitive scenarios. In particular, selection effects can tell us the key feature without which it is very hard to forecast: *what creatures want*.\n\nThis seems to me a promising place for mathy folks to contribute to our understanding of the future. Current formal modeling techniques are actually up to this task, and theorists have already learned lots about evolved preferences:\n\n**Discount Rates:** Sexually reproducing creatures discount reproduction-useful resources given to their half-relations (e.g., kids, siblings) at a rate of one-half relative to themselves. Since in a generation they get too old to reproduce, and then only half-relations are available to help, they discount time at a rate of one-half per generation. Asexual creatures do not discount this way, though both types discount in addition for overall population growth rates. This suggests a substantial advantage for asexual creatures when discounting is important.**Local Risk:** Creatures should care about their lineage success, i.e., the total number of their gene's descendants, weighted perhaps by their quality and relatedness, but shouldn't otherwise care *which* creatures sharing their genes now produce those descendants. So they are quite tolerant of risks that are uncorrelated, or negatively correlated, within their lineage. But they can care a lot more about risks that are correlated across such siblings. So they can be terrified of global catastrophe, mildly concerned about car accidents, and completely indifferent to within-lineage tournaments.**Global Risk:** The total number of descendants within a lineage, and the resources it controls to promote future reproduction, vary across time. How risk averse should creatures be about short-term fluctuations in these such totals? If long-term future success is directly linear in current success, so that having twice as much now gives twice as much in the distant future, all else equal, you might think creatures would be completely risk-neutral about their success now. Not so. Turns out selection effects *robustly* prefer creatures who have logarithmic preferences over success now. On global risks, they are quite risk averse.Carl Shulman disagrees, claiming risk-neutrality:\n\n> For such entities utility will be close to linear with the fraction of the accessible resources in our region that are dedicated to their lineages. A lineage . . . destroying all other life in the Solar System before colonization probes could escape . . . would gain nearly the maximum physically realistic utility. . . . 
A 1% chance of such victory would be 1% as desirable, but equal in desirability to an even, transaction-cost free division of the accessible resources with 99 other lineages.^[1](#AI-FOOM-Debatech41.html#enz.56)^[]{#AI-FOOM-Debatech41.html#enz.56.backref}\n\nWhen I pointed Carl to [the literature](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=343622),^[2](#AI-FOOM-Debatech41.html#enz.57)^[]{#AI-FOOM-Debatech41.html#enz.57.backref} he replied:\n\n> The main proof about maximizing log growth factor in individual periods . . . involves noting that, if a lineage takes gambles involving a particular finite risk of extinction in exchange for an increased growth factor in that generation, the probability of extinction will go to 1 over infinitely many trials. . . . But I have been discussing a finite case, and with a finite maximum of possible reproductive success attainable within our Hubble Bubble, expected value will generally not climb to astronomical heights as the probability of extinction approaches 1. So I stand by the claim that a utility function with utility linear in reproductive success over a world history will tend to win out from evolutionary competition.^[3](#AI-FOOM-Debatech41.html#enz.58)^[]{#AI-FOOM-Debatech41.html#enz.58.backref}\n\nImagine creatures that cared only about their lineage's fraction of the Hubble volume in a trillion years. If total success over this time is the product of success factors for many short time intervals, then induced preferences over each factor quickly approach log as the number of factors gets large. This happens for a wide range of risk attitudes toward final success, as long as the factors are not perfectly correlated. (Technically, if U(∏ ~t~^N^r~t~) = ∑ ~t~^N^u(r~t~), most U(x) give u(x) near log(x) for N large.)\n\nA battle for the solar system is only one of many events where a lineage could go extinct in the next trillion years; why should evolved creatures treat it differently? Even if you somehow knew that it was in fact that last extinction possibility forevermore, how could evolutionary selection have favored a different attitude toward such that event? There cannot have been a history of previous last extinction events to select against creatures with preferences poorly adapted to such events. Selection prefers log preferences over a wide range of timescales up to some point where selection gets quiet. For an intelligence (artificial or otherwise) inferring very long term preferences by abstracting from its shorter time preferences, the obvious option is log preferences over *all* possible timescales.\n\n**Added:** To explain my formula U(∏ ~t~^N^r~t~) = ∑ ~t~^N^u(r~t~),\n\n- U(x) is your final preferences over resources/copies of x at the \"end.\"\n- r~t~ is the ratio by which your resources/copies increase in each time step.\n- u(r~t~) is your preferences over the next time step.\n\nThe right-hand side is expressed in a linear form so that if probabilities and choices are independent across time steps, then to maximize U, you'd just pick r~t~ to max the expected value of u(r~t~). For a wide range of U(x), u(x) goes to log(x) for N large.\n\n[]{#AI-FOOM-Debatech41.html#likesection.58}\n\n------------------------------------------------------------------------\n\n> [Carl Shulman](http://www.overcomingbias.com/2008/12/evolved-desires.html#comment-518248177):\n>\n> > If total success over this time is the product of success factors for many short time intervals . . . 
\\[a\\] battle for the solar system is only one of many events where a lineage could go extinct in the next trillion years; why should evolved creatures treat it differently?\n>\n> What sort of factors are you thinking about for a singleton expanding into our limited and apparently uninhabited accessible region, with current physical limits (thermodynamics, no FTL, etc.) assumed? Are you thinking about the entities' credence in the hypothesis that resources can increase vastly beyond those that physical limits seem to suggest? If resources could grow indefinitely, e.g., if there was a technological way to circumvent the laws of thermodynamics, then entities with unbounded utility functions (whether linear or log in reproductive success) will all have their calculations dominated by that possibility, and avoid struggles in the solar system that reduce their chances of getting access to such unbounded growth. I'm planning to talk more about that, but I started off with an assumption of common knowledge of current physics to illustrate dynamics.\n>\n> > There cannot have been a history of previous last extinction events to select against creatures with preferences poorly adapted to such events.\n>\n> Intelligent, foresightful entities with direct preferences for total reproductive success will mimic whatever local preferences would do best in a particular situation, so they won't be selected against; but in any case where the environment changes so that evolved local preferences are no longer optimal, those with direct preferences for total success will be able to adapt immediately, without mutation and selection.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/evolved-desires.html#comment-518248269): Carl, you lost me. Your first quote of me isn't talking about a singleton, and I don't see how physics knowledge is relevant. On your response to your second quote of me, you can't just assume you know what sort of risk aversion regarding the final outcome is the \"true\" preferences for \"total success.\" If evolution selects for log preferences on all timescales on which it acts, why isn't log risk aversion the \"true\" total-success risk aversion? . . .\n\n> [Carl Shulman](http://www.overcomingbias.com/2008/12/evolved-desires.html#comment-518248290): I'll reply in a post.\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/evolved-desires.html#comment-518248334):\n>\n> > [Robin](http://www.overcomingbias.com/2008/12/evolved-desires.html#comment-518248269): If evolution selects for log preferences on all timescales on which it acts, why isn't log risk aversion the \"true\" total success risk aversion?\n>\n> Entities with logarithmic preferences over their aggregate number of copies in total world-histories should behave sublogarithmically when making local, independent choices on the next generation. The evolutionary analysis similarly talks about entities that you are likely to see in the sense of their being most frequent, not entities whose logarithms you are likely to see.\n>\n> You can't literally have logarithmic preferences at both global and local timescales, I think. 
If global preference is logarithmic, wouldn't local preference be log-log?\n>\n> Anyway, would you agree that: a linear aggregate utility over *complete world-histories* corresponds to logarithmic choices over *spatially global, temporally local options*, whose outcome you believe to be *uncorrelated* to the outcome of similar choices in future times.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/evolved-desires.html#comment-518248387): Eliezer, I think you are just mistaken; log preferences aggregate or split in time to log preferences. Regarding your last question, I said a wide range of preferences over final outcomes, including linear preferences, converge to log preferences over each step. . . .\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/evolved-desires.html#comment-518248406):\n>\n> > Eliezer, I think you are just mistaken; log preferences aggregate or split in time to log preferences.\n>\n> Ah, okay, I see my problem. I was assuming that taking the log of population sizes just put us into a log-world, exchanging multiplication for addition. But in the new world, options add fixed amounts to your current total, regardless of your initial position, so preferences are just aggregative (not logarithmic) in the new world.\n>\n> (*Thinks*.)\n>\n> I think what this reveals is that, for repeatable choices with a certain kind of temporal independence and an indefinite time horizon, your local preferences will start corresponding to a representation under which the effect of those choices is purely aggregative, if such a representation exists. A representation where -4 units of negative is exactly balanced by +1 and +3 positive outcomes. As your time horizon approaches the indefinite, such an approach will dominate.\n>\n> If you expect to encounter lots of options with nonmultiplicative effects---like \"this will square my population, this will take the square root of my population\"---then you'll be wise to regard those as +1 and -1 respectively, even though a logarithmic analysis will call this +X vs. -0.5X.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/evolved-desires.html#comment-518248434): Eliezer, it sounds like you are probably right with your ending comment, though it could be interesting to hear it elaborated, for a wider audience.\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/evolved-desires.html#comment-518248698): Well, either you and I have really different visualizations of what the coherent parts of humanity's reflective equilibria would look like, or you don't think the Friendly AI project has the described outcome, or you have a really different moral reaction to that outcome.\n>\n> If an AI goes FOOM, you seem to recognize that condition, or that prospect, as \"total war.\" Afterward, you seem to recognize the resultant as a \"God,\" and its relation to humanity as \"rule.\" So either we've got really different visualizations of this process, or we have really different moral reactions to it. This seems worth exploring, because I suspect that it accounts for a large fraction of the real fuel in the argument.\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/evolved-desires.html#comment-518248856): I don't consider myself a super-reliable math source. If the fate of the world isn't at stake, I'll often state an intuition rather than trying to prove it. 
For that matter, if the fate of the world *were* at stake, the first thing I'd do would be consult Marcello.\n>\n> Robin, I accept the part about locally logarithmic behavior on spatially global and temporally local problems when there will be many future options and all are multiplicative. I don't accept the claim that evolution turns future entities into log-population maximizers. In a sense, you've actually shown just the opposite; *because* aggregative maximizers or log-maximizers will both show *instrumental* log-seeking behavior, entities with *terminal* log valuations have no fitness advantage. Evolution requires visible differences of behavior on which to operate.\n>\n> If there are many nonmultiplicative options---say, there are ways to form trustworthy contracts, and a small party can contract with an intergalactic Warren Buffett---\"I will give you 10% of my lineage's resources now, if you agree to use the same amount of resources to recreate copies of me in a billion years\"---then it's not clear to me that logarithmics have an advantage; most of the numbers might be in aggregators because numbers are what they want, and that's what they use nonmultiplicative options to get.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/evolved-desires.html#comment-518248872): Eliezer, I agree one might analyze nonmultiplicative worlds, but no one has done so yet, and the world so far has been pretty multiplicative. Please recall that I was initially responding to confident claims by Carl and others that evolution would make for terrible wars over the solar system because evolved creatures would be terminal-outcome-oriented and risk neutral about such outcomes. In this context I make three claims:\n>\n> 1. [It is not obvious evolution would create terminal-outcome-oriented creatures.]{#AI-FOOM-Debatech41.html#x45-44002x1}\n> 2. [It is not obvious such creatures would be risk-neutral about terminal outcomes.]{#AI-FOOM-Debatech41.html#x45-44004x2}\n> 3. [Even if they were, they would have to be rather confident this conflict was in fact the last such conflict to be risk-neutral about resources gained from it.]{#AI-FOOM-Debatech41.html#x45-44006x3}\n>\n> Do you disagree with any of these claims?\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/evolved-desires.html#comment-518248988): I don't know about *evolution* creating terminal-outcome-oriented creatures, but the case for self-modifying AIs by default converging to expected utility maximization has been written up by, e.g., Omohundro. But I think that what you mean here is aggregate valuation by expected utility maximizers. This wouldn't be *created* per se by either evolution or self-modification, but it also seems fairly likely to emerge as an idiom among utility functions not strictly specified. Other possible minds could be satisficers, and these would be less of a threat in a competitive situation (they would only take over the world if they knew they could win, or if they expected a strong threat to their button-to-keep-pressed if they weren't in sole charge of the galaxy).\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/evolved-desires.html#comment-518249007): I'm frustrated that I seem unable to communicate what should be a precise technical claim: evolution need *not* select for creatures who maximize expected future descendants. People keep claiming this as if it had been proven, but it has not, because it is not so.\n>\n> The paper I cite is a clear precise counterexample. 
It considers a case where choices and probabilities are independent across time periods, and in this case it is optimal, *nonmyopically*, to make choices locally in time to max the expected log of period payoffs.\n>\n> That case easily generalizes to chunks of N periods that are correlated arbitrarily internally, but independent across chunks. Again agents max the expected sum of log period returns, which is the same as maxing the expected sum of chunk returns. And you can make N as large as you like.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/12/evolved-desires.html) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech41.html#enz.56} [1](#AI-FOOM-Debatech41.html#enz.56.backref). []{#AI-FOOM-Debatech41.html#cite.0.Shulman.2008}Carl Shulman, \"Zero and Non-zero-sum Games for Humans,\" private post, *Reflective Disequilibria* (blog), November 2008, .\n\n[]{#AI-FOOM-Debatech41.html#enz.57} [2](#AI-FOOM-Debatech41.html#enz.57.backref). []{#AI-FOOM-Debatech41.html#cite.0.Sinn.2003}Hans-Werner Sinn, \"Weber's Law and the Biological Evolution of Risk Preferences: The Selective Dominance of the Logarithmic Utility Function,\" *Geneva Papers on Risk and Insurance Theory* 28, no. 2 (2003): 87--100, doi:[10.1023/A:1026384519480](http://dx.doi.org/10.1023/A:1026384519480).\n\n[]{#AI-FOOM-Debatech41.html#enz.58} [3](#AI-FOOM-Debatech41.html#enz.58.backref). []{#AI-FOOM-Debatech41.html#cite.0.Shulman.2008a}Carl Shulman, \"Evolutionary Selection of Preferences,\" private post, *Reflective Disequilibria* (blog), November 2008, .\n\n[]{#AI-FOOM-Debatech42.html}\n\n## []{#AI-FOOM-Debatech42.html#x46-}[Chapter 41]{.titlemark} Sustained Strong Recursion {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [5 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n**Followup to:** [Cascades, Cycles, Insight](../Text/AI-FOOM-Debatech21.html#x25-), [Recursion, Magic](../Text/AI-FOOM-Debatech23.html#x27-)\n\nWe seem to have a sticking point at the concept of \"recursion,\" so I'll zoom in.\n\nYou have a friend who, even though he makes plenty of money, just spends all that money every month. You try to persuade your friend to *invest* a little---making valiant attempts to explain the wonders of compound interest by pointing to analogous processes in nature, like fission chain reactions.\n\n\"All right,\" says your friend, and buys a ten-year bond for \\$10,000, with an annual coupon of \\$500. Then he sits back, satisfied. \"There!\" he says. \"Now I'll have an extra \\$500 to spend every year, without my needing to do any work! And when the bond comes due, I'll just roll it over, so this can go on *indefinitely*. Surely, *now* I'm taking advantage of *the power of recursion*!\"\n\n\"Um, no,\" you say. \"That's not exactly what I had in mind when I talked about 'recursion.' \"\n\n\"But I used some of my cumulative money earned to increase my very earning *rate*,\" your friend points out, quite logically. \"If that's not 'recursion,' what *is?* My earning power has been 'folded in on itself,' just like you talked about!\"\n\n\"Well,\" you say, \"not exactly. Before, you were earning \\$100,000 per year, so your cumulative earnings went as 100,000 × *t*. Now, your cumulative earnings are going as 100,500 × *t*. That's not really much of a change. 
What we want is for your cumulative earnings to go as *B × e^A×t^* for some constants *A* and *B*---to grow *exponentially*.\"\n\n\"*Exponentially!*\" says your friend, shocked.\n\n\"Yes,\" you say, \"recursification has an amazing power to transform growth curves. In this case, it can turn a linear process into an exponential one. But to get that effect, you have to *reinvest the coupon payments* you get on your bonds---or at least reinvest some of them, instead of just spending them all. And you must be able to do this *over and over again*. Only *then* will you get the 'folding in' transformation, so that instead of your cumulative earnings going as *y = F(t) = A×t*, your earnings will go as the differential equation *dy/dt = F(y) = A×y* whose solution is *y = e^A×t^*.\"\n\n(I'm going to go ahead and leave out various constants of integration; feel free to add them back in.)\n\n\"Hold on,\" says your friend. \"I don't understand the justification for what you just did there.\"\n\n\"Right now,\" you explain, \"you're earning a steady income at your job, and you also have \\$500/year from the bond you bought. These are just things that go on generating money at a constant rate per unit time, in the background. So your cumulative earnings are the integral of that constant rate. If your earnings are *y*, then *dy/dt = A*, which resolves to *y = A × t*. But now, suppose that, instead of having these constant earning forces operating in the background, we introduce a strong *feedback loop* from your cumulative earnings to your earning power.\"\n\n\"But I bought this one bond here---\" says your friend.\n\n\"That's not enough for a *strong* feedback loop,\" you say. \"Future increases in your cumulative earnings aren't going to increase the value of this one bond, or your salary, any *further*. One unit of force transmitted back is not a feedback loop---it has to be *repeatable*. You need a *sustained* recursion, not a one-off event.\"\n\n\"Okay,\" says your friend. \"How about if I buy a \\$100 bond every year, then? Will *that* satisfy the strange requirements of this ritual?\"\n\n\"Still not a strong feedback loop,\" you say. \"Suppose that next year your salary went up \\$10,000/year---no, an even simpler example: suppose \\$10,000 fell in your lap out of the sky. If you only buy \\$100/year of bonds, that extra \\$10,000 isn't going to make any long-term difference to the earning curve. But if you're in the habit of investing 50% of found money, then there's a *strong* feedback loop from your cumulative earnings back to your earning power---we can pump up the cumulative earnings and watch the earning power rise as a direct result.\"\n\n\"How about if I just invest 0.1% of all my earnings, including the coupons on my bonds?\" asks your friend.\n\n\"Well . . .\" you say slowly. \"That would be a *sustained* feedback loop but an extremely *weak* one, where marginal changes to your earnings have relatively small marginal effects on future earning power. 
I guess it would genuinely be a recursified process, but it would take a long time for the effects to become apparent, and any stronger recursions would easily outrun it.\"\n\n\"Okay,\" says your friend, \"I'll start by investing a dollar, and I'll fully reinvest all the earnings from it, and the earnings on those earnings as well---\"\n\n\"I'm not really sure there are any good investments that will let you invest just a dollar without it being eaten up in transaction costs,\" you say, \"and it might not make a difference to anything on the timescales we have in mind---though there's an old story about a king, and grains of wheat placed on a chessboard . . . But realistically, a dollar isn't enough to get started.\"\n\n\"All right,\" says your friend, \"suppose I start with \\$100,000 in bonds, and reinvest 80% of the coupons on those bonds plus rolling over all the principal, at a 5% interest rate, and we ignore inflation for now.\"\n\n\"Then,\" you reply, \"we have the differential equation *dy/dt* = 0.8 × 0.05 ×*y*, with the initial condition *y* = \\$100,000 at *t* = 0, which works out to *y* = \\$100,000 ×*e*^0.04×*t*^. Or if you're reinvesting discretely rather than continuously, *y* = \\$100,000 × (1.04)^*t*^.\"\n\nWe can similarly view the self-optimizing compiler in this light---it speeds itself up once, but never makes any further improvements, like buying a single bond; it's not a sustained recursion.\n\nAnd now let us turn our attention to Moore's Law.\n\nI am not a fan of Moore's Law. I think it's a red herring. I don't think you can forecast AI arrival times by using it, I don't think that AI (especially the good kind of AI) depends on Moore's Law continuing. I am agnostic about how long Moore's Law can continue---I simply leave the question to those better qualified, because it doesn't interest me very much . . .\n\nBut for our next simpler illustration of a strong recursification, we shall consider Moore's Law.\n\nTim Tyler serves us the duty of representing our strawman, repeatedly [telling us](http://lesswrong.com/lw/we/recursive_selfimprovement/pb8), \"But chip engineers use computers *now*, so Moore's Law is *already recursive*!\"\n\nTo test this, we perform the equivalent of the thought experiment where we drop \\$10,000 out of the sky---push on the cumulative \"wealth,\" and see what happens to the output rate.\n\nSuppose that Intel's engineers could only work using computers of the sort available in 1998. How much would the next generation of computers be slowed down?\n\nSuppose we gave Intel's engineers computers from 2018, in sealed black boxes (not transmitting any of 2018's knowledge). How much would Moore's Law speed up?\n\nI don't work at Intel, so I can't actually answer those questions. I think, though, that if you said in the first case, \"Moore's Law would drop way down, to something like 1998's level of improvement measured linearly in additional transistors per unit time,\" you would be way off base. And if you said in the second case, \"I think Moore's Law would speed up by an order of magnitude, doubling every 1.8 months, until they caught up to the 2018 level,\" you would be equally way off base.\n\nIn both cases, I would expect the actual answer to be \"not all that much happens.\" Seventeen instead of eighteen months, nineteen instead of eighteen months, something like that.\n\nYes, Intel's engineers have computers on their desks. 
But the serial speed or per-unit price of computing power is not, so far as I know, the limiting resource that bounds their research velocity. You'd probably have to ask someone at Intel to find out how much of their corporate income they spend on computing clusters/supercomputers, but I would guess it's not much compared to how much they spend on salaries or fab plants.\n\nIf anyone from Intel reads this, and wishes to explain to me how it would be unbelievably difficult to do their jobs using computers from ten years earlier, so that Moore's Law would slow to a crawl---then I stand ready to be corrected. But relative to my present state of partial knowledge, I would say that this does not look like a strong feedback loop.\n\nHowever . . .\n\nSuppose that the *researchers themselves* are running as uploads, software on the computer chips produced by their own factories.\n\nMind you, this is not the tiniest bit realistic. By my standards it's not even a very *interesting* way of looking at the Intelligence Explosion, because it does not deal with *smarter* minds but merely *faster* ones---it dodges the really difficult and interesting part of the problem.\n\nJust as nine women cannot gestate a baby in one month; just as ten thousand researchers cannot do in one year what a hundred researchers can do in a hundred years; so too, a chimpanzee cannot do in four years what a human can do in one year, even though the chimp has around one-fourth the human's cranial capacity. And likewise a chimp cannot do in a hundred years what a human does in ninety-five years, even though they share 95% of our genetic material.\n\n*Better-designed* minds don't scale the same way as *larger* minds, and *larger* minds don't scale the same way as *faster* minds, any more than *faster* minds scale the same way as *more numerous* minds. So the notion of merely *faster* researchers, in my book, fails to address the interesting part of the \"intelligence explosion.\"\n\nNonetheless, for the sake of illustrating this matter in a relatively simple case . . .\n\nSuppose the researchers and engineers themselves---and the rest of the humans on the planet, providing a market for the chips and investment for the factories---are all running on the same computer chips that are the product of these selfsame factories. Suppose also that robotics technology stays on the same curve and provides these researchers with fast manipulators and fast sensors. We also suppose that the technology feeding Moore's Law has not yet hit physical limits. And that, as human brains are already highly parallel, we can speed them up even if Moore's Law is manifesting in increased parallelism instead of faster serial speeds---we suppose the uploads aren't *yet* being run on a fully parallelized machine, and so their actual serial speed goes up with Moore's Law. 
*Et cetera*.\n\nIn a fully naive fashion, we just take the economy the way it is today, and run it on the computer chips that the economy itself produces.\n\nIn our world where human brains run at constant speed (and eyes and hands work at constant speed), Moore's Law for computing power s is:\n\n::: {.equation-star .align}\n::: {.math-display}\n*s = R(t) = e^t^*\n:::\n:::\n\nThe function *R* is the Research curve that relates the amount of Time *t* passed, to the current Speed of computers s.\n\nTo understand what happens when the researchers themselves are running on computers, we simply suppose that *R* does not relate computing technology to *sidereal* time---the orbits of the planets, the motion of the stars---but, rather, relates computing technology to the amount of subjective time spent researching it.\n\nSince in *our* world subjective time is a linear function of sidereal time, this hypothesis fits *exactly the same curve* R to observed human history so far.\n\nOur direct measurements of observables do not constrain between the two hypotheses:\n\n1. [Moore's Law is exponential in the number of orbits of Mars around the Sun.]{#AI-FOOM-Debatech42.html#x46-45002x1}\n2. [Moore's Law is exponential in the amount of subjective time that researchers spend thinking and experimenting and building using a proportional amount of sensorimotor bandwidth.]{#AI-FOOM-Debatech42.html#x46-45004x2}\n\nBut our prior knowledge of causality may lead us to prefer the second hypothesis.\n\nSo to understand what happens when the Intel engineers themselves run on computers (and use robotics) subject to Moore's Law, we recursify and get:\n\n::: {.pic-align .align}\n*dy/dt = s = R(y) = e^y^*\n:::\n\nHere y is the total amount of elapsed *subjective* time, which at any given point is increasing according to the computer speed s given by Moore's Law, which is determined by the same function *R* that describes how Research converts elapsed subjective time into faster computers. Observed human history to date roughly matches the hypothesis that *R* is exponential with a doubling time of eighteen subjective months (or whatever). Solving\n\n::: {.pic-align .align}\n*dy/dt = e^y^*\n:::\n\nyields\n\n::: {.pic-align .align}\n*y = -*ln*(C - t)*\n:::\n\nOne observes that this function goes to +infinity at a finite time *C*.\n\nThis is only to be expected, given our assumptions. After eighteen sidereal months, computing speeds double; after another eighteen subjective months, or nine sidereal months, computing speeds double again; etc.\n\nNow, unless the physical universe works in a way that is not only *different* from the current standard model, but has a different *character of physical law* than the current standard model; you can't *actually* do infinite computation in finite time.\n\nLet us suppose that if our biological world had no Intelligence Explosion, and Intel just kept on running as a company, populated by humans, forever, that Moore's Law would start to run into trouble around 2020. Say, after 2020 there would be a ten-year gap where chips simply stagnated, until the next doubling occurred after a hard-won breakthrough in 2030.\n\nThis just says that *R(y)* is not an indefinite exponential curve. By hypothesis, from subjective years 2020 to 2030, R(y) is flat, corresponding to a constant computer speed s. So *dy/dt* is constant over this same time period: Total elapsed subjective time y grows at a linear rate, and as y grows, *R(y)* and computing speeds remain flat until ten subjective years have passed. 
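To make that arithmetic concrete, here is a minimal numerical sketch of the *dy/dt = R(y)* transform just described. It rests on simplifying assumptions that are purely illustrative: each doubling is treated as a discrete chip generation costing eighteen *subjective* months of R&D, the starting speed is normalized to 1, and the optional flat stretch stands in for the hypothesized 2020--2030 stagnation; the function and parameter names are hypothetical.

```python
# Minimal sketch (illustrative assumptions only: discrete chip generations,
# each costing 1.5 *subjective* years of R&D; researchers run at the newest
# chip speed; starting speed normalized to 1; names are hypothetical).

def sidereal_timeline(n_generations, subjective_step=1.5,
                      flat_after=None, flat_subjective_years=10.0):
    """Sidereal time and speed after each generation, under dy/dt = R(y).

    A generation costing `subjective_step` subjective years takes
    subjective_step / speed sidereal years; speed then doubles.  Optionally
    insert a flat decade (no further speedups) after `flat_after` generations,
    standing in for the hypothesized 2020-2030 stagnation.
    """
    t_sidereal, speed, timeline = 0.0, 1.0, []
    for gen in range(1, n_generations + 1):
        t_sidereal += subjective_step / speed
        speed *= 2.0
        timeline.append((gen, round(t_sidereal, 4), speed))
        if flat_after is not None and gen == flat_after:
            # Ten flat subjective years cost 10 / current-speed sidereal years.
            t_sidereal += flat_subjective_years / speed
    return timeline

for gen, t, speed in sidereal_timeline(8):
    print(f"generation {gen}: sidereal year {t}, speed x{speed:g}")
# Doublings arrive at sidereal years 1.5, 2.25, 2.625, 2.8125, ...: each gap
# is half the last, so the series converges on a finite sidereal time (3.0),
# the discrete analogue of y = -ln(C - t) blowing up at t = C.
```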
So the *sidereal* bottleneck lasts ten subjective years times the current sidereal/subjective conversion rate at 2020's computing speeds.\n\nIn short, the whole scenario behaves exactly like what you would expect---the simple transform really does describe the naive scenario of \"drop the economy into the timescale of its own computers.\"\n\nAfter subjective year 2030, things pick up again, maybe---there are ultimate physical limits on computation, but they're pretty damned high, and we've got a ways to go until there. But maybe Moore's Law is slowing down---going subexponential, and then, as the physical limits are approached, logarithmic, and then simply giving out.\n\nBut whatever your beliefs about where Moore's Law ultimately goes, you can just map out the way you would expect the research function *R* to work as a function of sidereal time in our own world, and then apply the transformation *dy/dt = R(y)* to get the progress of the uploaded civilization over sidereal time *t*. (Its progress over *subjective* time is simply given by *R*.)\n\nIf sensorimotor bandwidth is the critical limiting resource, then we instead care about R&D on fast sensors and fast manipulators. We want *R~sm~(y)* instead of *R(y)*, where *R~sm~* is the progress rate of sensors and manipulators as a function of elapsed sensorimotor time. And then we write *dy/dt = R~sm~(y)* and crank on the equation again to find out what the world looks like from a sidereal perspective.\n\nWe can verify that the Moore's Researchers scenario is a strong positive feedback loop by performing the \"drop \\$10,000\" thought experiment. Say, we drop in chips from another six doublings down the road---letting the researchers run on those faster chips, while holding constant their state of technological knowledge.\n\nLo and behold, this drop has a rather *large* impact, much larger than the impact of giving faster computers to our own biological world's Intel. *Subjectively* the impact may be unnoticeable---as a citizen, you just see the planets slow down again in the sky. But sidereal growth rates increase by a factor of sixty-four.\n\nSo this is indeed deserving of the names \"strong positive feedback loop\" and \"sustained recursion.\"\n\nAs disclaimed before, all this isn't *really* going to happen. There would be effects like those Robin Hanson prefers to analyze, from being able to spawn new researchers as the cost of computing power decreased. You might be able to pay more to get researchers twice as fast. Above all, someone's bound to try hacking the uploads for increased intelligence . . . and then those uploads will hack themselves even further . . . Not to mention that it's not clear how this civilization cleanly dropped into computer time in the first place.\n\nSo no, this is not supposed to be a realistic vision of the future.\n\nBut, alongside our earlier parable of compound interest, it *is* supposed to be an illustration of how strong, sustained recursion has much more drastic effects on the shape of a growth curve than a one-off case of one thing leading to another thing. 
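A small companion sketch may help tie the two illustrations together. It uses the numbers from the dialogue earlier in this chapter (a \$500-a-year coupon that is spent rather than reinvested, versus \$100,000 of bonds at 5% with 80% of coupons reinvested); the hundred-year horizon and the function names are arbitrary illustrative choices, not anything from the original discussion.

```python
# Companion sketch of the compound-interest parable (numbers from the dialogue
# above; the horizons and function names are arbitrary illustrative choices).

def one_off_bond(years, coupon=500.0):
    """Cumulative extra income from a single $10,000 bond: linear, 500 * t."""
    return coupon * years

def sustained_reinvestment(years, principal=100_000.0, rate=0.05, reinvest=0.8):
    """Bond holdings with 80% of coupons reinvested: the (1.04)^t curve."""
    for _ in range(years):
        principal *= 1 + rate * reinvest  # discrete form of dy/dt = 0.04 * y
    return principal

for years in (10, 50, 100):
    print(years, one_off_bond(years), round(sustained_reinvestment(years)))
# One-off bond: $5,000, $25,000, $50,000 of extra income -- a straight line.
# Sustained reinvestment: ~$148,024, ~$710,668, ~$5,050,495 -- the exponential
# curve $100,000 * (1.04)^t from the dialogue, which eventually dwarfs any
# one-off boost.
```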
Intel's engineers *running on* computers is not like Intel's engineers *using* computers.\n\n[]{#AI-FOOM-Debatech42.html#likesection.59}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://lesswrong.com/lw/wi/sustained_strong_recursion/pec): You can define \"recursive\" as accelerating growth, in which case it remains an open question whether any particular scenario, such as sped-up folks researching how to speed up, is in fact recursive. Or you can, as I had thought you did, define \"recursive\" as a situation of a loop of growth factors each encouraging the next one in the loop, in which case it is an open question if that results in accelerating growth. I was pointing out before that there exist loops of encouraging growth factors that do not result in accelerating growth. If you choose the other definition strategy, I'll note that your model is extremely stark and leaves out the usual items in even the simplest standard growth models.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/wi/sustained_strong_recursion/pef): Robin, like I say, most AIs won't hockey-stick, and when you fold a function in on itself this way, it can bottleneck for a billion years if its current output is flat or bounded. That's why self-optimizing compilers don't go FOOM.\n>\n> \"Recursion\" is not accelerating growth. It is not a loop of growth factors. \"Adding a recursion\" describes situations where you might naively be tempted to take an existing function\n>\n> ::: {.pic-align .align}\n> *y = F(t)*\n> :::\n>\n> and rewrite it as\n>\n> ::: {.pic-align .align}\n> *dy/dt = F(y)*.\n> :::\n>\n> Does that make it any clearer?\n\n> [Robin Hanson](http://lesswrong.com/lw/wi/sustained_strong_recursion/peg): Eliezer, if \"adding a recursion\" means adding one more power to the derivative in the growth equation, then it is an open question what sorts of AIs would do that. And then it isn't clear why you would say Engelbart was \"not recursive enough,\" since this is a discrete definition without some parameter you can have not enough of.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/wi/sustained_strong_recursion/pei): Robin, how is the transition\n>\n> ::: {.pic-align .align}\n> *y = e^t^ ⇒ dy/dt = e^t^*\n> :::\n>\n> to\n>\n> ::: {.pic-align .align}\n> *dy/dt = e^y^ ⇒ y = -*ln*(C - t) ⇒ dy/dt = 1 / (C - t)*\n> :::\n>\n> \"adding one more power to the derivative in the growth equation\"?\n>\n> I'm not sure what that phrase you used means, exactly, but I wonder if you may be mis-visualizing the general effect of what I call \"recursion.\"\n>\n> Or what about\n>\n> ::: {.pic-align .align}\n> *y = t^2^ → dy/dt = y^2^*\n> :::\n>\n> etc. Or\n>\n> ::: {.pic-align .align}\n> *y =* log *t → dy/dt =* log *y*,\n> :::\n>\n> etc.\n>\n> Like I said, this doesn't necessarily hockey-stick; if you get sublinear returns the recursified version will be slower than the original.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/wi/sustained_strong_recursion/pej): Engelbart was \"not recursive enough\" in the sense that he didn't have a *strong, sustained* recursion; his tech improvements did not yield an increase in engineering velocity which was sufficient to produce tech improvements that would further improve his engineering velocity. He wasn't running on his own chips. 
Like [eurisko]{.textsc}, he used his scientific prowess to buy some bonds (computer tech) that paid a relatively low coupon on further scientific prowess, and the interest payments didn't let him buy all that many more bonds.\n\n> [Robin Hanson](http://lesswrong.com/lw/wi/sustained_strong_recursion/pf2): In the post and comment discussion with me Eliezer tries to offer a math definition of \"recursive\" but in this discussion about Intel he seems to revert to the definition I thought he was using all along, about whether growing X helps Y grow better which helps X grow better. I don't see any differential equations in the Intel discussion.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/wi/sustained_strong_recursion/pf3): Does it help if I say that \"recursion\" is not something which is true or false of a given system, but rather something by which one version of a system *differs* from another?\n>\n> The question is not \"Is Intel recursive?\" but rather \"Which of these two systems is the case? Does intervening on Intel to provide them with much less or much more computing power tremendously slow or accelerate their progress? Or would it have only small fractional effects?\"\n>\n> In the former case, the research going into Moore's Law is being kept *rigidly* on track by the computers' output by Moore's Law, and this would make it plausible that the exponential form of Moore's Law was due *primarily* to this effect.\n>\n> In the latter case, computing power is only loosely coupled to Intel's research activities, and we have to search for other explanations for Moore's Law, such as that the market's sensitivity to computing power is logarithmic and so Intel scales its resources as high as necessary to achieve a certain multiplicative improvement, but no higher than that. . . .\n\n> [Robin Hanson](http://lesswrong.com/lw/wi/sustained_strong_recursion/pfc): Eliezer, I don't know what is your implicit referent to divide \"tremendous\" from \"fractional\" influence of growth of X on growth of Y. Perhaps you can define that clearly in a very simple model, but I don't see how to generalize that to more realistic models. . . .\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/wi/sustained_strong_recursion/) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech43.html}\n\n## []{#AI-FOOM-Debatech43.html#x47-}[Chapter 42]{.titlemark} Friendly Projects vs. Products {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [5 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nI'm a big board game fan, and my favorite these days is *Imperial*. *Imperial* looks superficially like the classic strategy-intense war game *Diplomacy*, but with a crucial difference: instead of playing a nation trying to win WWI, you play a banker trying to make money from that situation. If a nation you control (by having loaned it the most) is threatened by another nation, you might indeed fight a war, but you might instead just buy control of that nation. 
This is a great way to mute conflicts in a modern economy: have conflicting groups buy shares in each other.\n\nFor projects to create new creatures, such as ems or AIs, there are two distinct friendliness issues:\n\n**Project Friendliness**: *Will the race make winners and losers, and how will winners treat losers?* While any race might be treated as part of a [total war](../Text/AI-FOOM-Debatech28.html#x32-) on several sides, usually the inequality created by the race is moderate and tolerable. For larger inequalities, projects can explicitly join together, agree to cooperate in weaker ways such as by sharing information, or they can buy shares in each other. Naturally arising info leaks and shared standards may also reduce inequality even without intentional cooperation. The main reason for failure here would seem to be the sorts of distrust that plague all human cooperation.\n\n**Product Friendliness**: *Will the creatures cooperate with or rebel against their creators?* Folks running a project have reasonably strong incentives to avoid this problem. Of course for the case of extremely destructive creatures the project might internalize more of the gains from cooperative creatures than they do the losses from rebellious creatures. So there might be some grounds for wider regulation. But the main reason for failure here would seem to be poor judgment, thinking you had your creatures more surely under control than in fact you did.\n\nIt hasn't been that clear to me which of these is the main concern re \"friendly AI.\"\n\n**Added:** Since Eliezer [says](#AI-FOOM-Debatech43.html#x47-) product friendliness is his main concern, let me note that the main problem there is the tails of the distribution of *bias* among project leaders. If all projects agreed the problem was very serious they would take near-appropriate caution to isolate their creatures, test creature values, and slow creature development enough to track progress sufficiently. Designing and advertising a solution is one approach to reducing this bias, but it need not be the best approach; perhaps institutions like prediction markets that aggregate info and congeal a believable consensus would be more effective.\n\n[]{#AI-FOOM-Debatech43.html#likesection.60}\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech43.html#likesection.61}\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/friendly-projec.html#comment-518246537): The second one, he said without the tiniest trace of hesitation.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/friendly-projec.html#comment-518246687): I just added to the post.\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/friendly-projec.html#comment-518246890):\n>\n> > If all projects agreed the problem was very serious they would take near-appropriate caution to isolate their creatures, test creature values, and slow creature development enough to track progress sufficiently.\n>\n> Robin, I agree this is a left-tail problem, or to be more accurate, the right tail of the left hump of a two-hump camel.\n>\n> But your suggested description of a solution *is not going to work*. You need something that can carry out a billion sequential self-modifications on itself without altering its terminal values, and you need exactly the right terminal values because missing or distorting a single one can spell the difference between utopia or dystopia. 
The former requires new math, the latter requires extremely meta thinking plus additional new math. *If no one has this math, all good guys are helpless* and the game is lost automatically.\n>\n> That's why I see this as currently having the status of a math problem even more than a PR problem.\n>\n> For all the good intentions that ooze from my every pore, right now I do not, technically speaking, *know* how to build a Friendly AI---though thankfully, I know enough to know why \"testing\" isn't a solution (context not i.i.d.) which removes me from the right tail of the left hump.\n>\n> Now, some aspects of this can be viewed as a PR problem---you want to remove researchers from the right tail of the left hump, which you can do up to a point through publicizing dangers. And you want to add researchers to the right tail of the right hump, which you can do by, among other strategies, having math geniuses read *Overcoming Bias* at age fifteen and then waiting a bit. (Some preliminary evidence indicates that this strategy may already be working.)\n>\n> But above all, humanity is faced with a win-or-fail *math* problem, a challenge of pure technical knowledge stripped of all social aspects. It's not that this is the only part of the problem. It's just the only impossible part of the problem.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/friendly-projec.html#comment-518246986): . . . Eliezer, I'd like to hear more about why testing and monitoring creatures as they develop through near-human levels, slowing development as needed, says nothing useful about their values as transhuman creatures. And about why it isn't enough to convince most others that the problem is as hard as you say: in that case many others would also work to solve the problem, and would avoid inducing it until they had a solution. And hey, if you engage them there's always a chance they'll convince you they are right and you are wrong. Note that your social strategy, of avoiding standard credentials, is about the worst case for convincing a wide audience.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/12/friendly-projec.html) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech44.html}\n\n## []{#AI-FOOM-Debatech44.html#x48-}[Chapter 43]{.titlemark} Is That Your True Rejection? {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [6 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nIt happens every now and then that the one encounters some of my transhumanist-side beliefs---as opposed to my ideas having to do with human rationality---strange, exotic-sounding ideas like superintelligence and Friendly AI. And the one rejects them.\n\nIf the one is called upon to explain the rejection, not uncommonly the one says,\n\n\"Why should I believe anything Yudkowsky says? He doesn't have a PhD!\"\n\nAnd occasionally someone else, hearing, says, \"Oh, you should get a PhD, so that people will listen to you.\" Or this advice may even be offered by the same one who disbelieved, saying, \"Come back when you have a PhD.\"\n\nNow there are good and bad reasons to get a PhD, but this is one of the bad ones.\n\nThere's many reasons why someone *actually* has an adverse reaction to transhumanist theses. 
Most are matters of pattern recognition, rather than verbal thought: the thesis [matches](http://lesswrong.com/lw/ir/science_as_attire/) against \"strange weird idea\" or \"science fiction\" or \"end-of-the-world cult\" or \"overenthusiastic youth.\"\n\nSo immediately, at the speed of perception, the idea is rejected. If, afterward, someone says, \"Why not?\" this launches a search for justification. But this search will not necessarily hit on the true reason---by \"true reason\" I mean not the *best* reason that could be offered, but rather, whichever causes were [decisive as a matter of historical fact](http://lesswrong.com/lw/js/the_bottom_line/), [at the *very first* moment the rejection occurred](http://lesswrong.com/lw/jx/we_change_our_minds_less_often_than_we_think/).\n\nInstead, the search for justification hits on the justifying-sounding fact, \"This speaker does not have a PhD.\"\n\nBut I also don't have a PhD when I talk about human rationality, so [why is the same objection not raised there](http://lesswrong.com/lw/md/cultish_countercultishness/)?\n\nAnd more to the point, if I *had* a PhD, people would not treat this as a decisive factor indicating that they ought to believe everything I say. Rather, the same initial rejection would occur, for the same reasons; and the search for justification, afterward, would terminate at a different stopping point.\n\nThey would say, \"Why should I believe *you?* You're just some guy with a PhD! There are lots of those. Come back when you're well-known in your field and tenured at a major university.\"\n\nBut do people *actually* believe arbitrary professors at Harvard who say weird things? Of course not. (But if I were a professor at Harvard, it would in fact be easier to get *media attention*. Reporters initially disinclined to believe me---who would probably be equally disinclined to believe a random PhD-bearer---would still report on me, because it would be news that a Harvard professor believes such a weird thing.)\n\nIf you are saying things that sound *wrong* to a novice, as opposed to just rattling off magical-sounding technobabble about leptical quark braids in N + 2 dimensions; and the hearer is a stranger, unfamiliar with you personally *and* with the subject matter of your field; then I suspect that the point at which the average person will *actually* start to grant credence overriding their initial impression, purely *because* of academic credentials, is somewhere around the Nobel Laureate level. If that. Roughly, you need whatever level of academic credential qualifies as \"beyond the mundane.\"\n\nThis is more or less what happened to Eric Drexler, as far as I can tell. He presented his vision of nanotechnology, and people said, \"Where are the technical details?\" or, \"Come back when you have a PhD!\" And Eric Drexler spent six years writing up technical details and got his PhD under Marvin Minsky for doing it. And *Nanosystems* is a great book. But did the same people who said, \"Come back when you have a PhD,\" actually change their minds at all about molecular nanotechnology? Not so far as I ever heard.\n\nIt has similarly been a general rule with the Machine Intelligence Research Institute that, whatever it is we're supposed to do to be more credible, when we actually do it, nothing much changes. \"Do you do any sort of code development? I'm not interested in supporting an organization that doesn't develop code\" → OpenCog → nothing changes. 
\"Eliezer Yudkowsky lacks academic credentials\" → Professor Ben Goertzel installed as Director of Research → nothing changes. The one thing that actually *has* seemed to raise credibility is famous people associating with the organization, like Peter Thiel funding us, or Ray Kurzweil on the Board.\n\nThis might be an important thing for young businesses and new-minted consultants to keep in mind---that what your failed prospects *tell* you is the reason for rejection may not make the *real* difference, and you should ponder that carefully before spending huge efforts. If the venture capitalist says, \"If only your sales were growing a little faster!\"---if the potential customer says, \"It seems good, but you don't have feature X\"---that may not be the true rejection. Fixing it may or may not change anything.\n\nAnd it would also be something to keep in mind during disagreements. Robin and I share a belief that two rationalists should not [agree to disagree](http://www.overcomingbias.com/2006/12/agreeing_to_agr.html): they should not have common knowledge of epistemic disagreement unless something is very wrong.\n\nI suspect that, in general, if two rationalists set out to resolve a disagreement that persisted past the first exchange, they should expect to find that the true sources of the disagreement are either hard to communicate, or hard to expose. E.g:\n\n- Uncommon, but well-supported, scientific knowledge or math\n- Long [inferential distances](http://lesswrong.com/lw/kg/expecting_short_inferential_distances/)\n- Hard-to-verbalize intuitions, perhaps stemming from specific visualizations\n- Zeitgeists inherited from a profession (which may have good reason for it)\n- Patterns perceptually recognized from experience\n- Sheer habits of thought\n- Emotional commitments to believing in a particular outcome\n- Fear of a past mistake being disproven\n- Deep self-deception for the sake of pride or other personal benefits\n\nIf the matter were one in which *all* the true rejections could be *easily* laid on the table, the disagreement would probably be so straightforward to resolve that it would never have lasted past the first meeting.\n\n\"Is this my true rejection?\" is something that both disagreers should surely be asking *themselves*, to make things easier on the Other Fellow. However, attempts to directly, publicly psychoanalyze the Other may cause the conversation to degenerate *very* fast, in my observation.\n\nStill---\"Is that your true rejection?\" should be fair game for Disagreers to humbly ask, if there's any productive way to pursue that subissue. Maybe the rule could be that you can openly ask, \"Is that simple straightforward-sounding reason your *true* rejection, or does it come from intuition X or professional zeitgeist Y?\" While the more embarrassing possibilities lower on the table are left to the Other's conscience, as their own responsibility to handle.***Post scriptum*:** This post is not *really* about PhDs in general, or their credibility value in particular. 
But I've always figured that, to the extent this was a strategically important consideration, it would make more sense to recruit an academic of existing high status than spend a huge amount of time trying to achieve low or moderate academic status.\n\nHowever, if any professor out there wants to let me come in and *just* do a PhD in analytic philosophy---*just* write the thesis and defend it---then I have, for my own use, worked out a general and mathematically elegant theory of [Newcomb-like decision problems](http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/). I think it would make a fine PhD thesis, and it is ready to be written---if anyone has the power to let me do things the old-fashioned way.\n\n[]{#AI-FOOM-Debatech44.html#likesection.62}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://lesswrong.com/lw/wj/is_that_your_true_rejection/pfj): There need not be just one \"true objection\"; there can be many factors that together lead to an estimate. Whether you have a PhD, and whether folks with PhDs have reviewed your claims, and what they say, can certainly be relevant. Also remember that you should care lots more about the opinions of experts that could build on and endorse your work than about average-Joe opinions. Very few things ever convince average folks of anything unusual; target a narrower audience.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/wj/is_that_your_true_rejection/pfm): . . . Robin, see the *post scriptum*. I would be willing to get a PhD thesis if it went by the old rules and the old meaning of \"Prove you can make an original, significant contribution to human knowledge and that you've mastered an existing field,\" rather than \"This credential shows you have spent X number of years in a building.\" (This particular theory *would* be hard enough to write up that I may not get around to it if a PhD credential isn't at stake.)\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/wj/is_that_your_true_rejection/) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech45.html}\n\n## []{#AI-FOOM-Debatech45.html#x49-}[Chapter 44]{.titlemark} Shared AI Wins {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [6 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nAlmost every new technology comes at first in a dizzying variety of styles and then converges to what later seems the \"obvious\" configuration. It is actually quite an eye-opener to go back and see old might-have-beens, from steam-powered cars to pneumatic tube mail to memex to Engelbart's computer tools. Techs that are only imagined, not implemented, take on the widest range of variations. When actual implementations appear, people slowly figure out what works better, while network and other scale effects lock in popular approaches. As standards congeal, competitors focus on smaller variations around accepted approaches. Those who stick with odd standards tend to be marginalized.\n\n[Eliezer says](../Text/AI-FOOM-Debatech38.html#x42-) standards barriers are why AIs would \"foom\" locally, with one AI quickly growing from so small no one notices to so powerful it takes over the world:\n\n> I also don't think this \\[scenario\\] is allowed: . . . knowledge and even skills are widely traded in this economy of AI systems. 
In concert, these AIs, and their human owners, and the economy that surrounds them, undergo a *collective* FOOM of self-improvement. No local agent is capable of doing all this work, only the collective system. . . .\n>\n> \\[The reason is that\\] trading cognitive content around between diverse AIs is more difficult and less likely than it might sound. Consider the field of AI as it works today. Is there *any* standard database of cognitive content that you buy off the shelf and plug into your amazing new system, whether it be a chess player or a new data-mining algorithm? . . .\n>\n> . . . The diversity of cognitive architectures acts as a *tremendous* barrier to trading around cognitive content. . . . If two AIs both see an apple for the first time, and they both independently form concepts about that apple . . . their *thoughts* are effectively written in a different language. . . .\n>\n> The barrier this opposes to a true, cross-agent, literal \"economy of mind,\" is so strong, that in the vast majority of AI applications you set out to write today, you will not bother to import any standardized preprocessed cognitive content. It will be easier for your AI application to start with some standard examples---databases of *that* sort of thing do exist, in some fields anyway---and *redo all the cognitive work of learning* on its own. . . .\n>\n> . . . Looking over the diversity of architectures proposed at any AGI conference I've attended, it is very hard to imagine directly trading cognitive content between any two of them.\n\nBut *of course* \"visionaries\" take a wide range of incompatible approaches. Commercial software tries much harder to match standards and share sources. The whole [point of Cyc](../Text/AI-FOOM-Debatech32.html#x36-) was that AI researchers neglect compatibility and sharing because they are more interested in writing papers than making real systems. The idea that you could create human-level intelligence by just feeding raw data into the right math-inspired architecture is pure fantasy. You couldn't build an effective cell or ecosystem or developed economy or most any complex system that way either---such things require not just good structure but also lots of good content. Loners who start all over from scratch rarely beat established groups sharing enough standards to let them share improvements to slowly accumulate content.\n\nCyc content may or may not jump-start a sharing AI community, but AI just won't happen without a whole lot of content. If ems appear first, perhaps shareable em contents could form a different basis for shared improvements.\n\n[]{#AI-FOOM-Debatech45.html#likesection.63}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/shared-ai-wins.html#comment-518238035): It's generally a terrible analogy, but would you say that a human baby growing up is getting \"raw data\" fed into the right architecture, or that human babies are exposed to data preprocessed by their parents, or that human babies get standardized data?\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/shared-ai-wins.html#comment-518238144): . . . Eliezer, a human baby certainly gets raw data, and it has a good architecture too, but in addition I'd say it has lots of genetically encoded info about what sort of patterns in data to expect and attend to, i.e., what sort of abstractions to consider. 
In addition, when raising kids we focus their attention on relevant and useful patterns and abstractions. And of course we just tell them lots of stuff too. . . .\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/shared-ai-wins.html#comment-518238440): This is much like my visualization of how an AI works, except that there's substantially less \"genetically encoded info\" at the time you boot up the system---mostly consisting of priors that have to be encoded procedurally. This is work done by natural selection in the case of humans; so some of that is taken off your hands by programs that you write, and some of it is work you do at runtime over the course of the AI's development, rather than trying to encode into the very first initial system. But you can't exactly leave out Bayes' Rule, or causal graphs, or *modus ponens*, from the first system. . . .\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/shared-ai-wins.html#comment-518238476): . . . Eliezer, yes, well-chosen priors *are* the key \"encoded info.\" There may be a misunderstanding that when I say \"info\" people think I mean direct facts like \"Paris is capital of France,\" while I instead mean any content within your architecture that helps you focus attention well. Clearly human babies do leave out Bayes' Rule and *modus ponens*, but yes, we should put that in if we can cleanly do so. I'd just claim that doesn't get you very far; you'll need to find a way to inherit big chunks of the vast human content heritage.\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/shared-ai-wins.html#comment-518238489): Robin, \"Bayes' Rule\" doesn't mean a little declarative representation of Bayes' Rule, it means updating in response to evidence that seems more likely in one case than another. Hence \"encoded procedurally.\"\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/shared-ai-wins.html#comment-518238500): Eliezer, yes, babies clearly do approximately encode some implications of Bayes' Rule, but also clearly fail to encode many other implications.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/12/shared-ai-wins.html) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech46.html}\n\n## []{#AI-FOOM-Debatech46.html#x50-}[Chapter 45]{.titlemark} Artificial Mysterious Intelligence {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [7 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n**Previously in series:** [Failure By Affective Analogy](http://lesswrong.com/lw/vy/failure_by_affective_analogy/)\n\nI once had a conversation that I still remember for its sheer, purified archetypicality. This was a nontechnical guy, but pieces of this dialog have also appeared in conversations I've had with professional [AI folk](http://lesswrong.com/lw/uc/aboveaverage_ai_scientists/) . . .\n\n> [Him]{.textsc}: Oh, you're working on AI! Are you using neural networks?\n>\n> [Me]{.textsc}: I think emphatically *not*.\n>\n> [Him]{.textsc}: But neural networks are so wonderful! They solve problems and we don't have any idea how they do it!\n>\n> [Me]{.textsc}: If you are ignorant of a phenomenon, that is a fact about your state of mind, not a fact about the phenomenon itself. 
Therefore your ignorance of how neural networks are solving a specific problem cannot be responsible for making them work better.\n>\n> [Him]{.textsc}: Huh?\n>\n> [Me]{.textsc}: If you don't know how your AI works, that is not good. It is bad.\n>\n> [Him]{.textsc}: Well, intelligence is much too difficult for us to understand, so we need to find *some* way to build AI without understanding how it works.\n>\n> [Me]{.textsc}: Look, even if you could do that, you wouldn't be able to predict any kind of positive outcome from it. For all you knew, the AI would go out and slaughter orphans.\n>\n> [Him]{.textsc}: Maybe we'll build Artificial Intelligence by scanning the brain and building a neuron-by-neuron duplicate. Humans are the only systems we know are intelligent.\n>\n> [Me]{.textsc}: It's hard to build a flying machine if the only thing you understand about flight is that somehow birds magically fly. What you need is a concept of aerodynamic lift, so that you can see how something can fly even if it isn't exactly like a bird.\n>\n> [Him]{.textsc}: That's too hard. We have to copy something that we know works.\n>\n> [Me]{.textsc}: (*reflectively*) What do people find so unbearably *awful* about the prospect of having to finally break down and solve the bloody problem? Is it really *that* horrible?\n>\n> [Him]{.textsc}: Wait . . . you're saying you want to actually *understand* intelligence?\n>\n> [Me]{.textsc}: Yeah.\n>\n> [Him]{.textsc}: (*aghast*) Seriously?\n>\n> [Me]{.textsc}: I don't know everything I need to know about intelligence, but I've learned a hell of a lot. Enough to know what happens if I try to build AI while there are still gaps in my understanding.\n>\n> [Him]{.textsc}: Understanding the problem is too hard. You'll never do it.\n\nThat's not just a difference of opinion you're looking at, it's a *clash of cultures*.\n\nFor a long time, many different parties and factions in AI, adherent to more than one ideology, have been trying to build AI *without* understanding intelligence. And their habits of thought have become ingrained in the field, and even transmitted to parts of the general public.\n\nYou may have heard proposals for building true AI which go something like this:\n\n1. [Calculate how many operations the human brain performs every second. This is \"the only amount of computing power that we know is actually sufficient for human-equivalent intelligence.\" Raise enough venture capital to buy a supercomputer that performs an equivalent number of floating-point operations in one second. Use it to run the most advanced available neural network algorithms.]{#AI-FOOM-Debatech46.html#x50-49002x1}\n2. [The brain is huge and complex. When the Internet becomes sufficiently huge and complex, intelligence is bound to emerge from the Internet. *(I get asked about this in 50% of my interviews.)*]{#AI-FOOM-Debatech46.html#x50-49004x2}\n3. [Computers seem unintelligent because they lack common sense. Program a very large number of \"common-sense facts\" into a computer. Let it try to reason about the relation of these facts. Put a sufficiently huge quantity of knowledge into the machine, and intelligence will emerge from it.]{#AI-FOOM-Debatech46.html#x50-49006x3}\n4. [Neuroscience continues to advance at a steady rate. Eventually, super-MRI or brain sectioning and scanning will give us precise knowledge of the local characteristics of all human brain areas. So we'll be able to build a duplicate of the human brain by duplicating the parts. 
\"The human brain is the only example we have of intelligence.\"]{#AI-FOOM-Debatech46.html#x50-49008x4}\n5. [Natural selection produced the human brain. It is \"the only method that we know works for producing general intelligence.\" So we'll have to scrape up a really huge amount of computing power, and *evolve* AI.]{#AI-FOOM-Debatech46.html#x50-49010x5}\n\nWhat do all these proposals have in common?\n\nThey are all ways to make yourself believe that you can build an Artificial Intelligence even if you don't understand exactly how intelligence works.\n\nNow, such a belief is not necessarily *false*! Methods (4) and (5), if pursued long enough and with enough resources, *will* eventually work. (Method (5) might require a computer the size of the Moon, but give it *enough* crunch and it will work, even if you have to simulate a quintillion planets and not just one . . .)\n\nBut regardless of whether any given method would work in principle, the unfortunate habits of thought will already begin to arise as soon as you start thinking of ways to create Artificial Intelligence without having to penetrate the *mystery of intelligence*.\n\nI have already spoken of some of the hope-generating tricks that appear in the examples above. There is [invoking similarity to humans](http://lesswrong.com/lw/vx/failure_by_analogy/), or using [words that make you feel good](http://lesswrong.com/lw/vy/failure_by_affective_analogy/). But really, a lot of the trick here just consists of imagining yourself hitting the AI problem with a *really big rock*.\n\nI know someone who goes around insisting that AI will cost a quadrillion dollars, and as soon as we're willing to spend a quadrillion dollars, we'll have AI, and we couldn't possibly get AI without spending a quadrillion dollars. \"Quadrillion dollars\" is his big rock that he imagines hitting the problem with, even though he doesn't quite understand it.\n\nIt often will not occur to people that the mystery of intelligence could be any more penetrable than it *seems*: By the power of the [Mind Projection Fallacy](http://lesswrong.com/lw/oi/mind_projection_fallacy/), being ignorant of how intelligence works will [make it seem like intelligence is inherently impenetrable and chaotic](http://lesswrong.com/lw/wb/chaotic_inversion/). They will think they possess a positive knowledge of intractability, rather than thinking, \"I am ignorant.\"\n\nAnd the thing to remember is that, for these last decades on end, *any* professional in the field of AI trying to build \"real AI\" had some reason for trying to do it without really understanding intelligence ([various fake reductions aside](http://lesswrong.com/lw/tf/dreams_of_ai_design/)).\n\nThe [New Connectionists](http://lesswrong.com/lw/vv/logical_or_connectionist_ai/) accused the [Good Old-Fashioned AI](http://lesswrong.com/lw/vt/the_nature_of_logic/) researchers of not being parallel enough, not being fuzzy enough, not being emergent enough. But they did not say, \"There is too much you do not understand.\"\n\nThe New Connectionists catalogued the flaws of GOFAI for years on end, with fiery castigation. But they couldn't ever actually say: \"How *exactly* are all these logical deductions going to produce 'intelligence,' anyway? Can you walk me through the cognitive operations, step by step, which lead to that result? 
Can you explain 'intelligence' and how you plan to get it, without pointing to humans as an example?\"\n\nFor they themselves would be subject to exactly the same criticism.\n\nIn the house of glass, somehow, no one ever gets around to talking about throwing stones.\n\nTo tell a lie, you have to lie about all the other facts entangled with that fact, and also lie about the methods used to arrive at beliefs: The culture of Artificial Mysterious Intelligence has developed its own [Dark Side Epistemology](http://lesswrong.com/lw/uy/dark_side_epistemology/), complete with reasons why it's actually *wrong* to try and understand intelligence.\n\nYet when you step back from the bustle of this moment's history, and think about the long sweep of science---there was a time when stars were mysterious, when chemistry was mysterious, when life was mysterious. And in this era, much was attributed to black-box essences. And there were many hopes based on the [similarity](http://lesswrong.com/lw/vx/failure_by_analogy/) of one thing to another. To many, I'm sure, alchemy just seemed very *difficult* rather than even seeming *mysterious*; most alchemists probably did not go around thinking, \"Look at how much I am disadvantaged by not knowing about the existence of chemistry! I must discover atoms and molecules as soon as possible!\" They just memorized libraries of random things you could do with acid and bemoaned how difficult it was to create the Philosopher's Stone.\n\nIn the end, though, what happened is that scientists achieved [insight](../Text/AI-FOOM-Debatech21.html#x25-), and *then* things got much easier to do. You also had a better idea of what you could or couldn't do. The problem stopped being *scary* and *confusing*.\n\nBut you wouldn't hear a New Connectionist say, \"Hey, maybe all the failed promises of 'logical AI' were basically due to the fact that, in their epistemic condition, they had no right to expect their AIs to work in the first place, because they couldn't actually have sketched out the link in any more detail than a medieval alchemist trying to explain why a particular formula for the Philosopher's Stone will yield gold.\" It would be like the Pope attacking Islam on the basis that faith is not an adequate justification for asserting the existence of their deity.\n\nYet, in fact, the promises *did* fail, and so we can conclude that the promisers overreached what they had a right to expect. The Way is not omnipotent, and a bounded rationalist cannot do all things. But even a bounded rationalist can aspire not to overpromise---to only *say* you can do that which you *can* do. So if we want to achieve that reliably, history shows that we should not accept certain kinds of hope. In the absence of insight, hopes tend to be unjustified because you lack the knowledge that would be needed to justify them.\n\nWe humans have a difficult time working in the absence of insight. It doesn't reduce us all the way down to being [as stupid as evolution](http://lesswrong.com/lw/kt/evolutions_are_stupid_but_work_anyway/). 
But it makes everything difficult and tedious and annoying.\n\nIf the prospect of having to finally break down and solve the bloody problem of intelligence seems scary, you underestimate the interminable hell of *not* solving it.\n\n[]{#AI-FOOM-Debatech46.html#likesection.64}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://lesswrong.com/lw/wk/artificial_mysterious_intelligence/pgu): We shouldn't underrate the power of insight, but we shouldn't overrate it either; some systems can just be a mass of details, and to master such systems you must master those details. And if you pin your hopes for AI progress on powerful future insights, you have to ask how often such insights occur, and how many we would need. The track record so far doesn't look especially encouraging.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/wk/artificial_mysterious_intelligence/pgx): Robin, the question of whether compact insights *exist* and whether they are *likely to be obtained in reasonable time* (and by how large a group, etc.) are very different questions and should be considered separately, in order. . . .\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/wk/artificial_mysterious_intelligence/) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech47.html}\n\n## []{#AI-FOOM-Debatech47.html#x51-}[Chapter 46]{.titlemark} Wrapping Up {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [7 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nThis Friendly AI discussion has taken more time than I planned or have. So let me start to wrap up.\n\nOn small scales we humans evolved to cooperate via various pair and group bonding mechanisms. But these mechanisms aren't of much use on today's evolutionarily unprecedented large scales. Yet we do in fact cooperate on the largest scales. We do this because we are risk averse, because our values mainly conflict on resource use which conflicts destroy, and because we have the intelligence and institutions to enforce win-win deals via property rights, etc.\n\nI raise my kids because they share my values. I teach other kids because I'm paid to. Folks raise horses because others pay them for horses, expecting horses to cooperate as slaves. You might expect your pit bulls to cooperate, but we should only let you raise pit bulls if you can pay enough damages if they hurt your neighbors.\n\nIn my preferred em (whole-brain emulation) [scenario](../Text/AI-FOOM-Debatech16.html#x20-), people would only authorize making em copies using borrowed or rented brains/bodies when they expected those copies to have lives worth living. With property rights enforced, both sides would expect to benefit more when copying was allowed. Ems would not exterminate humans mainly because that would threaten the institutions ems use to keep peace with each other.\n\nSimilarly, we expect AI developers to plan to benefit from AI cooperation via either direct control, indirect control such as via property-rights institutions, or such creatures having cooperative values. As with pit bulls, developers should have to show an ability, perhaps via insurance, to pay plausible hurt amounts if their creations hurt others. To the extent they or their insurers fear such hurt, they would test for various hurt scenarios, slowing development as needed in support. 
To the extent they feared inequality from some developers succeeding first, they could exchange shares, or share certain kinds of info. Naturally occurring info leaks, and shared sources, both encouraged by shared standards, would limit this inequality.\n\nIn this context, I read Eliezer as fearing that developers, insurers, regulators, and judges will vastly underestimate how dangerous are newly developed AIs. *Eliezer guesses that within a few weeks a single AI could grow via largely internal means from weak and unnoticed to so strong it takes over the world,* with no weak but visible moment between when others might just nuke it. Since its growth needs little from the rest of the world, and since its resulting power is so vast, only its values would make it treat others as much more than raw materials. But its values as seen when weak say little about its values when strong. Thus Eliezer sees little choice but to try to design a theoretically clean AI architecture allowing near-provably predictable values when strong, to in addition design a set of robust good values, and then to get AI developers to adopt this architecture/values combination.\n\nThis is not a choice to make lightly; declaring your plan to build an AI to take over the world would surely be seen as an [act of war](../Text/AI-FOOM-Debatech28.html#x32-) by most who thought you could succeed, no matter how benevolent you said its values would be. (But yes, if Eliezer were sure, he should push ahead anyway.) And note most of Eliezer's claim's urgency comes from the fact that most of the world, including most AI researchers, *disagree* with Eliezer; if they agreed, AI development would likely be severely regulated, like nukes today.\n\nOn the margin this scenario seems less a concern when [manufacturing is less local](../Text/AI-FOOM-Debatech35.html#x39-), when tech surveillance is stronger, and when intelligence is multidimensional. It also seems less of a concern with ems, as AIs would have less of a hardware advantage over ems, and modeling AI architectures on em architectures would allow more reliable value matches.\n\nWhile historical trends do suggest we watch for a several-year-long transition sometime in the next century to a global growth rate two or three orders of magnitude faster, Eliezer's postulated local growth rate seems much faster. I also find Eliezer's [growth math](../Text/AI-FOOM-Debatech34.html#x38-) unpersuasive. Usually dozens of relevant factors are coevolving, with several loops of, all else equal, X growth speeds Y growth speeds etc. Yet usually it all adds up to exponential growth, with rare jumps to faster growth rates. Sure, if you pick two things that plausibly speed each other and leave everything else out including diminishing returns, your math can suggest accelerating growth to infinity, but for a real foom that loop needs to be real strong, much stronger than contrary muting effects. []{#AI-FOOM-Debatech47.html#likesection.65}\n\nBut the real sticking point seems to be [locality](../Text/AI-FOOM-Debatech45.html#x49-). The \"content\" of a system is its small modular features while its \"architecture\" is its most important, least modular features. 
Imagine a large community of AI developers, with real customers, mostly adhering to common architectural standards and sharing common content; imagine developers trying to gain more market share and that AIs mostly got better by accumulating more better content, and that this rate of accumulation mostly depended on previous content; imagine architecture is a minor influence. In this case the whole AI sector of the economy might grow very quickly, but it gets pretty hard to imagine one AI project zooming vastly ahead of others.\n\nSo I suspect this all comes down to, how powerful is architecture in AI, and how many architectural insights can be found how quickly? If there were say a series of twenty deep powerful insights, each of which made a system twice as effective, just enough extra oomph to let the project and system find the next insight, it would add up to a factor of a million. Which would still be nowhere near enough, so imagine a lot more of them, or lots more powerful.\n\nThis scenario seems quite flattering to Einstein wannabes, making deep-insight-producing Einsteins vastly more valuable than they have ever been, even in percentage terms. But when I've looked at AI research I just haven't seen it. I've seen innumerable permutations on a few recycled architectural concepts, and way too much energy wasted on architectures in systems starved for content, content that academic researchers have little incentive to pursue. So we have come to: What evidence is there for a dense sequence of powerful architectural AI insights? Is there any evidence that natural selection stumbled across such things?\n\nAnd if Eliezer is the outlier he seems on the priority of friendly AI, what does Eliezer know that the rest of us don't? If he has such revolutionary clues, why can't he tell us? What else could explain his confidence and passion here if not such clues?\n\n[]{#AI-FOOM-Debatech47.html#likesection.66}\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech47.html#likesection.67}\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/wrapping-up.html#comment-518247642):\n>\n> > On small scales we humans evolved to cooperate via various pair and group bonding mechanisms. But these mechanisms aren't of much use on today's evolutionarily unprecedented large scales. Yet we do in fact cooperate on the largest scales. We do this because we are risk averse, because our values mainly conflict on resource use which conflicts destroy, and because we have the intelligence and institutions to enforce win-win deals via property rights, etc.\n>\n> Individual organisms are adaptation-executers, not fitness-maximizers. We seem to have a disagreement-of-fact here; I think that our senses of honor and of internalized group morality are operating to make us honor our agreements with trade partners and internalize certain capitalist values. 
If human beings were *really genuinely* selfish, the economy would fall apart or at least have to spend vastly greater resources policing itself---think Zimbabwe and other failed states where police routinely stop buses to collect bribes from all passengers, but without the sense of restraint: the police just shoot you and loot your corpse unless they expect to be able to extract further bribes from you in particular.\n>\n> I think the group coordination mechanisms, executing as adaptations, are *critical* to the survival of a global economy between imperfect minds of our level that cannot simultaneously pay attention to everyone who might betray us.\n>\n> > In this case the whole AI sector of the economy might grow very quickly, but it gets pretty hard to imagine one AI project zooming vastly ahead of others.\n>\n> Robin, you would seem to be [leaving out a key weak point](http://lesswrong.com/lw/jy/avoiding_your_beliefs_real_weak_points/) here. It's much easier to argue that AIs don't zoom ahead of each other than to argue that the AIs as a *collective* don't zoom ahead of the *humans*. To the extent where, if AIs lack innate drives to treasure sentient life and humane values, it would be a trivial coordination problem and a huge net benefit to all AIs to simply write the statue-slow, defenseless, noncontributing humans out of the system.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/wrapping-up.html#comment-518247689):\n>\n> > Eliezer: If human beings were *really genuinely* selfish, the economy would fall apart or at least have to spend vastly greater resources policing itself. . . . Group coordination mechanisms, executing as adaptations, are *critical* to the survival of a global economy. . . . It would be a trivial coordination problem and a huge net benefit to all AIs to simply write the statue-slow, defenseless, noncontributing humans out of the system.\n>\n> Here you disagree with most economists, including myself, about the sources and solutions of coordination problems. Yes, genuinely selfish humans would have to spend more resources to coordinate at the local level, because this is where adapted coordinations now help. But larger-scale coordination would be just as easy. Since coordination depends crucially on institutions, AIs would need to preserve those institutions as well. So AIs would not want to threaten the institutions they use to keep the peace among themselves. It is far from easy to coordinate to exterminate humans while preserving such institutions. Also, why assume AIs not explicitly designed to be friendly are in fact \"really genuinely selfish\"?\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/12/wrapping-up.html) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech48.html}\n\n## []{#AI-FOOM-Debatech48.html#x52-}[Chapter 47]{.titlemark} True Sources of Disagreement {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [8 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n**Followup to:** [Is That Your True Rejection?](../Text/AI-FOOM-Debatech44.html#x48-)I expected from the beginning that [the difficult part of two rationalists reconciling a persistent disagreement, would be for them to expose the true sources of their beliefs](../Text/AI-FOOM-Debatech44.html#x48-).\n\nOne suspects that this will only work if each party takes responsibility for their own end; it's very hard to see inside someone else's head. 
Yesterday I exhausted myself mentally while out on my daily walk, asking myself the Question \"What do you think you know, and why do you think you know it?\" with respect to \"How much of the AI problem compresses to large insights, and how much of it is unavoidable nitty-gritty?\" Trying to either understand why my brain believed what it believed, or else force my brain to experience enough genuine doubt that I could reconsider the question and arrive at a real justification that way. It's hard to see how Robin Hanson could have done any of this work for me.\n\nPresumably a symmetrical fact holds about my lack of access to the real reasons why Robin believes what he believes. To understand the true source of a disagreement, you have to know why *both* sides believe what they believe---one reason why disagreements are hard to resolve.\n\nNonetheless, here's my guess as to what this Disagreement is about:\n\nIf I had to pinpoint a single thing that strikes me as \"disagree-able\" about the way Robin frames his analyses, it's that there are a lot of *opaque* agents running around, little black boxes assumed to be similar to humans, but there are more of them and they're less expensive to build/teach/run. They aren't even any *faster*, let alone smarter. (I don't think that standard economics says that doubling the population halves the doubling time, so it matters whether you're making more minds or faster ones.)\n\nThis is Robin's model for uploads/ems, and his model for AIs doesn't seem to look any different. So that world looks like this one, except that the cost of \"human capital\" and labor is dropping according to (exogenous) Moore's Law, and it ends up that economic growth doubles every month instead of every sixteen years---but that's it. Being, myself, not an economist, this *does* look to me like a viewpoint with a distinctly economic zeitgeist.\n\nIn my world, you look inside the black box. (And, to be symmetrical, I don't spend much time thinking about more than one box at a time---if I have more hardware, it means I have to figure out how to scale a bigger brain.)\n\nThe human brain is a haphazard thing, thrown together by [idiot evolution](http://lesswrong.com/lw/kt/evolutions_are_stupid_but_work_anyway/) as an incremental layer of icing on a chimpanzee cake that never evolved to be generally intelligent, adapted in a distant world devoid of elaborate scientific arguments or computer programs or professional specializations.\n\nIt's amazing we can get *anywhere* using the damn thing. But it's worth remembering that if there were any *smaller* modification of a chimpanzee that spontaneously gave rise to a technological civilization, we would be having this conversation at that lower level of intelligence instead.\n\nHuman neurons run at less than a millionth the speed of transistors, transmit spikes at less than a millionth the speed of light, and dissipate around a million times the heat per synaptic operation as the thermodynamic minimum for a one-bit operation at room temperature. 
Physically speaking, it ought to be possible to run a brain at a million times the speed without shrinking it, cooling it, or invoking reversible computing or quantum computing.\n\nThere's no reason to think that the brain's software is any closer to the limits of the possible than its hardware, and indeed, if you've been following along on *Overcoming Bias* this whole time, you should be well aware of the manifold known ways in which our high-level thought processes fumble even the simplest problems.\n\nMost of these are not deep, inherent flaws of intelligence, or limits of what you can do with a mere hundred trillion computing elements. They are the results of a [really stupid process](http://lesswrong.com/lw/kt/evolutions_are_stupid_but_work_anyway/) that designed the retina backward, slapping together a brain we now use in contexts way outside its ancestral environment.\n\nTen thousand researchers working for one year cannot do the same work as a hundred researchers working for a hundred years; a chimpanzee's brain is one-fourth the volume of a human's but four chimps do not equal one human; a chimpanzee shares 95% of our DNA but a chimpanzee cannot understand 95% of what a human can. The scaling law for population is not the scaling law for time is not the scaling law for brain size is not the scaling law for mind design.\n\nThere's a parable I sometimes use, about how [the first replicator](../Text/AI-FOOM-Debatech8.html#x11-100007) was not quite the end of [the era of stable accidents](../Text/AI-FOOM-Debatech8.html#x11-100007), because the pattern of the first replicator was, of necessity, something that could happen by accident. It is only the *second* replicating pattern that you would never have seen without many copies of the first replicator around to give birth to it; only the *second* replicator that was part of the world of evolution, something you wouldn't see in a world of accidents.\n\nThat first replicator must have looked like one of the most bizarre things in the whole history of time---this *replicator* created purely by *chance*. But the history of time could never have been set in motion, otherwise.\n\nAnd what a bizarre thing a human must be, a mind born entirely of evolution, a mind that was not created by another mind.\n\nWe haven't yet *begun* to see the shape of the era of intelligence.\n\nMost of the universe is far more extreme than this gentle place, Earth's cradle. Cold vacuum or the interior of stars---either is far more common than the temperate weather of Earth's surface, where life first arose, in the balance between the extremes. And most possible intelligences are not balanced, like these first humans, in that strange small region of temperate weather between an amoeba and a Jupiter Brain.\n\nThis is the challenge of my own profession---to break yourself loose of [the tiny human dot in mind-design space](http://lesswrong.com/lw/rm/the_design_space_of_mindsingeneral/), in which we have lived our whole lives, our imaginations lulled to sleep by too-narrow experiences.\n\nFor example, Robin [says](../Text/AI-FOOM-Debatech47.html#x51-):\n\n> *Eliezer guesses that within a few weeks a single AI could grow via largely internal means from weak and unnoticed to so strong it takes over the world*. 
\\[his italics\\]\n\nI suppose that to a human a \"week\" sounds like a temporal constant describing a \"short period of time,\" but it's actually 10^49^ Planck intervals, or enough time for a population of 2 GHz processor cores to perform 10^15^ *serial* operations one after the other.\n\nPerhaps the thesis would sound less shocking if Robin had said, \"Eliezer guesses that 10^15^ sequential operations might be enough to . . .\"\n\nOne should also bear in mind that [the human brain, which is not designed for the primary purpose of scientific insights, does not spend its power efficiently on having many insights in minimum time](http://lesswrong.com/lw/q9/the_failures_of_eld_science/), but this issue is harder to understand than CPU clock speeds.\n\nRobin says he doesn't like \"[unvetted abstractions](../Text/AI-FOOM-Debatech37.html#x41-).\" Okay. That's a strong point. I get it. Unvetted abstractions go kerplooie, yes they do indeed. But something's wrong with using that as a justification for models where there are lots of little black boxes just like humans scurrying around and we never pry open the black box and scale the brain bigger or redesign its software or even just *speed up* the damn thing. The interesting part of the problem is *harder to analyze*, yes---more distant from [the safety rails of overwhelming evidence](http://lesswrong.com/lw/qj/einsteins_speed/)---but this is no excuse for *refusing to take it into account*.\n\nAnd in truth I do suspect that a strict policy against \"unvetted abstractions\" is not the [real issue](../Text/AI-FOOM-Debatech44.html#x48-) here. I [constructed a simple model of an upload civilization running on the computers their economy creates](../Text/AI-FOOM-Debatech42.html#x46-): If a nonupload civilization has an exponential Moore's Law, y = e^t^, then, naively, an upload civilization ought to have *dy/dt = e^y^ → y = -*ln*(C - t)*. *Not* necessarily up to infinity, but for as long as Moore's Law would otherwise stay exponential in a biological civilization. I walked through the implications of this model, showing that in many senses it behaves \"just like we would expect\" for describing a civilization running on its own computers.\n\nCompare this to Robin Hanson's \"[Economic Growth Given Machine Intelligence](http://hanson.gmu.edu/aigrow.pdf)\",^[1](#AI-FOOM-Debatech48.html#enz.59)^[]{#AI-FOOM-Debatech48.html#enz.59.backref} which Robin [describes](../Text/AI-FOOM-Debatech39.html#x43-) as using \"one of the simplest endogenous growth models to explore how Moore's Law changes with computer-based workers. 
It is an early but crude attempt, but it is the sort of approach I think promising.\" Take a quick look at that paper.\n\nNow, consider the *abstractions* used in my Moore's Researchers scenario, versus the *abstractions* used in Hanson's paper above, and ask yourself *only* the question of which looks more \"vetted by experience\"---given that both are models of a sort that haven't been used before, in domains not actually observed, and that both give results quite different from the world we see---and that would probably cause the vast majority of actual economists to say, \"Naaaah.\"\n\n[Moore's Researchers](../Text/AI-FOOM-Debatech42.html#x46-) versus \"Economic Growth Given Machine Intelligence\"---if you didn't think about the *conclusions* in advance of the reasoning; and if you also neglected that one of these has been written up in a way that is more impressive to economics journals; and you just asked the question, \"To what extent is the math used here, constrained by our prior experience?\" then I would think that the race would at best be even. Or possibly favoring \"Moore's Researchers\" as being more simple and intuitive, and involving less novel math as measured in additional quantities and laws introduced.\n\nI ask in all humility if Robin's [true rejection](../Text/AI-FOOM-Debatech44.html#x48-) is a strictly evenhandedly applied rule that rejects unvetted abstractions. Or if, in fact, Robin finds my conclusions, and the sort of premises I use, to be *objectionable for other reasons*---which, so far as we know at this point, may well be *valid* objections---and so it appears to him that my abstractions bear *a larger burden of proof* than the sort of mathematical steps he takes in \"Economic Growth Given Machine Intelligence.\" But rather than offering the reasons why the burden of proof appears larger to him, he says instead that it is \"not vetted enough.\"\n\nOne should understand that \"Your abstractions are unvetted!\" makes it difficult for me to engage properly. The core of my argument has to do with what happens when you pry open the black boxes that are your economic agents, and start fiddling with their brain designs, and leave the tiny human dot in mind-design space. If all such possibilities are rejected *on the basis of their being \"unvetted\" by experience*, it doesn't leave me with much to talk about.\n\nWhy not just accept the rejection? Because I expect that to give the wrong answer---I expect it to ignore the dominating factor in the Future, even if the dominating factor is harder to analyze.\n\nIt shouldn't be surprising if a persistent disagreement ends up resting on that point where your attempt to take into account the other person's view runs up against some question of simple fact where, it *seems* to you, *you know that can't possibly be right*.\n\nFor me, that point is reached when trying to visualize a model of interacting black boxes that behave like humans except they're cheaper to make. The world, which [shattered once with the first replicator](../Text/AI-FOOM-Debatech8.html#x11-100007), and [shattered for the second time with the emergence of human intelligence](../Text/AI-FOOM-Debatech19.html#x23-), somehow does *not* shatter a third time. 
Even in the face of blowups of brain size far greater than the size transition from chimpanzee brain to human brain; and changes in design far larger than the design transition from chimpanzee brains to human brains; and simple serial thinking speeds that are, maybe even right from the beginning, thousands or millions of times faster.\n\nThat's the point where I, having spent my career trying to look inside the black box, trying to wrap my tiny brain around the rest of mind-design space that isn't like our small region of temperate weather, just can't make myself believe that the Robin-world is *really truly actually* the way the future will be.\n\nThere are other things that seem like probable nodes of disagreement:\n\nRobin Hanson's description of Friendly AI development as \"[total war](../Text/AI-FOOM-Debatech28.html#x32-)\" that is harmful to even discuss, or his description of a realized Friendly AI as \"a God to rule us all.\" Robin must be visualizing an in-practice outcome very different from what I do, and this seems like a likely source of emotional fuel for the disagreement as well.\n\nConversely, Robin Hanson [seems to approve of a scenario](../Text/AI-FOOM-Debatech47.html#x51-) where lots of AIs, of arbitrary motives, constitute the vast part of the economic productivity of the Solar System, because he thinks that humans will be protected under the legacy legal system that grew continuously out of the modern world, and that the AIs will be unable to coordinate to transgress the legacy legal system for fear of losing their own legal protections. I tend to visualize a somewhat different outcome, to put it mildly, and would symmetrically be suspected of emotional unwillingness to accept that outcome as inexorable.\n\nRobin [doesn't dismiss Cyc out of hand](../Text/AI-FOOM-Debatech32.html#x36-) and even \"hearts\" it, which implies that we have extremely different pictures of how intelligence works.\n\nLike Robin, I'm also feeling burned on this conversation, and I doubt we'll finish it; but I should write at least two more posts to try to describe what I've learned, and some of the rules that I think I've been following.\n\n[]{#AI-FOOM-Debatech48.html#likesection.68}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://lesswrong.com/lw/wl/true_sources_of_disagreement/php): Miscellaneous points:\n>\n> - I guessed a week to month doubling time, not six months.\n> - I've talked explicitly about integrated communities of faster ems.\n> - I used a learning-by-doing modeling approach to endogenize Moore's Law.\n> - Any model of minds usable for forecasting world trends must leave out detail.\n> - Most people complain that economists using game theory to model humans ignore too much human detail; what *excess* human detail do you think economists retain?\n> - Research labs hiring workers, e.g., Intel, are willing to trade off worker speed, i.e., hours per week, for worker salary, experience, etc.; a model that says Intel cares only about worker speed misses an awful lot.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/wl/true_sources_of_disagreement/pht): Robin, I found different guesses at the doubling time listed in different places, so I just used one from \"Economic Growth Given Machine Intelligence.\" I'll change the text.\n\n> [Robin Hanson](http://lesswrong.com/lw/wl/true_sources_of_disagreement/pic): . . . Eliezer, most readers of this blog are not in a position to evaluate which model looks more vetted. 
The whole point is that a community of thousands of specialists has developed over decades vetting models of total system growth, and they are in the best position to judge. I have in fact not just talked about vetting, but have offered more detailed reasons why your model seems unsatisfactory.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/wl/true_sources_of_disagreement/pie): . . . Robin, should we ask James Miller then? I have no problem with the detailed reasons you offer, it's just the \"insufficiently vetted\" part of the argument that I find difficult to engage with---unless I actually find members of this community and ask them which specific pieces are \"vetted\" in their view, by what evidence, and which not. I wouldn't necessarily trust them, to be frank, because it was never a condition of their profession that they should deal with nonhumans. But at least I would have some idea of what those laws were under which I was being judged.\n>\n> It's hard for me to accept as normative the part of this argument that is an appeal to authority (professional community that has learned good norms about constructing growth models) rather than an appeal to evidence (look at how well the evidence fits these specific growth models). It's not that I reject authority in general, but these people's professional experience is entirely about humans, and it's hard for me to believe that they have taken into account the considerations involved in extrapolating narrow experience to non-narrow experience when various basic assumptions are potentially broken. I would expect them to have norms that worked for describing humans, full stop.\n\n> [Robin Hanson](http://lesswrong.com/lw/wl/true_sources_of_disagreement/pif): Eliezer, I'm not sure James Miller has done much econ growth research. How about my colleague [Garrett Jones](http://mason.gmu.edu/~gjonesb/), who specializes in intelligence and growth?\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/wl/true_sources_of_disagreement/pih): Robin, I'd be interested, but I'd ask whether you've discussed this particular issue with Jones before. (I.e., the same reason I don't cite Peter Cheeseman as support for, e.g., the idea that *general* AI mostly doesn't work if you don't have all the parts, and then undergoes something like a chimp → human transition as soon as all the parts are in place. So far as I can tell, Cheeseman had this idea before I met him; but he still wouldn't be an unbiased choice of referee, because I already know many of his opinions and have explicitly contaminated him on some points.)\n\n> [Robin Hanson](http://lesswrong.com/lw/wl/true_sources_of_disagreement/pij): Eliezer, Garrett has seen and likes my growth paper, but he and I have not talked at all about your concepts. I sent him a link once to [this post](http://lesswrong.com/lw/vc/economic_definition_of_intelligence/) of yours;^[2](#AI-FOOM-Debatech48.html#enz.60)^[]{#AI-FOOM-Debatech48.html#enz.60.backref} I'll email you his reply.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/wl/true_sources_of_disagreement/pim): . . . Robin, email reply looks fine.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/wl/true_sources_of_disagreement/) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech48.html#enz.59} [1](#AI-FOOM-Debatech48.html#enz.59.backref). 
Hanson, [\"Economic Growth Given Machine Intelligence](../Text/AI-FOOM-Debatech39.html#cite.0.Hanson.1998c).\"\n\n[]{#AI-FOOM-Debatech48.html#enz.60} [2](#AI-FOOM-Debatech48.html#enz.60.backref). []{#AI-FOOM-Debatech48.html#cite.0.Yudkowsky.2008i}Eliezer Yudkowsky, \"Economic Definition of Intelligence?,\" *Less Wrong* (blog), October 29, 2008, .\n\n[]{#AI-FOOM-Debatech49.html}\n\n## []{#AI-FOOM-Debatech49.html#x53-}[Chapter 48]{.titlemark} The Bad Guy Bias {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [9 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n[Shankar Vedantam](http://www.washingtonpost.com/wp-dyn/content/article/2008/12/07/AR2008120702830.html):\n\n> Nations tend to focus far more time, money and attention on tragedies caused by human actions than on the tragedies that cause the greatest amount of human suffering or take the greatest toll in terms of lives. . . . In recent years, a large number of psychological experiments have found that when confronted by tragedy, people fall back on certain mental rules of thumb, or heuristics, to guide their moral reasoning. When a tragedy occurs, we instantly ask who or what caused it. When we find a human hand behind the tragedy---such as terrorists, in the case of the Mumbai attacks---something clicks in our minds that makes the tragedy seem worse than if it had been caused by an act of nature, disease or even human apathy. . . .\n>\n> Tragedies, in other words, cause individuals and nations to behave a little like the detectives who populate television murder mystery shows: We spend nearly all our time on the victims of killers and rapists and very little on the victims of car accidents and smoking-related lung cancer.\n>\n> \"We think harms of actions are much worse than harms of omission,\" said Jonathan Baron, a psychologist at the University of Pennsylvania. \"We want to punish those who act and cause harm much more than those who do nothing and cause harm. We have more sympathy for the victims of acts rather than the victims of omission. If you ask how much should victims be compensated, \\[we feel\\] victims harmed through actions deserve higher compensation.\"^[1](#AI-FOOM-Debatech49.html#enz.61)^[]{#AI-FOOM-Debatech49.html#enz.61.backref}\n\nThis bias should also afflict our future thinking, making us worry more about evil alien intent than unintentional catastrophe.\n\n[]{#AI-FOOM-Debatech49.html#likesection.69}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/the-bad-guy-bia.html#comment-518243510): Indeed, I've found that people repeatedly ask me about AI projects with ill intentions---Islamic terrorists building an AI---rather than trying to grasp the ways that well-intentioned AI projects go wrong by default.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/12/the-bad-guy-bia.html) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech49.html#enz.61} [1](#AI-FOOM-Debatech49.html#enz.61.backref). []{#AI-FOOM-Debatech49.html#cite.0.Vedantam.2008}Shankar Vedantam, \"In Face of Tragedy, 'Whodunit' Question Often Guides Moral Reasoning,\" *Washington Post*, December 8, 2008, accessed November 25, 2012, .\n\n[]{#AI-FOOM-Debatech50.html}\n\n## []{#AI-FOOM-Debatech50.html#x54-}[Chapter 49]{.titlemark} Disjunctions, Antipredictions, Etc. 
{.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [9 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n**Followup to:** [Underconstrained Abstractions](../Text/AI-FOOM-Debatech39.html#x43-)[Previously](../Text/AI-FOOM-Debatech39.html#x43-):\n\n> So if it's not as simple as *just* using the one trick of finding abstractions you can easily verify on available data . . . what are some other tricks to use?\n\nThere are several, as you might expect . . .\n\nPreviously I talked about \"[permitted possibilities](../Text/AI-FOOM-Debatech38.html#x42-).\" There's a trick in debiasing that has mixed benefits, which is to try and visualize several specific possibilities instead of just one.\n\nThe reason it has \"mixed benefits\" is that being specific, at all, can have [biasing effects relative to just imagining a typical case](http://lesswrong.com/lw/jg/planning_fallacy/). (And believe me, if I'd seen the outcome of a hundred planets in roughly our situation, I'd be talking about that instead of all this [Weak Inside View](../Text/AI-FOOM-Debatech6.html#x9-80005) stuff.)\n\nBut if you're going to bother visualizing the future, it does seem to help to visualize more than one way it could go, instead of concentrating all your strength into *one* prediction.\n\nSo I try not to ask myself, \"What will happen?\" but rather, \"Is this possibility allowed to happen, or is it prohibited?\" There are propositions that seem forced to me, but those should be relatively rare---the first thing to understand about the future is that it is hard to predict, and you shouldn't seem to be getting strong information about most aspects of it.\n\nOf course, if you allow more than one possibility, then you have to discuss more than one possibility, and the total length of your post gets longer. If you just eyeball the length of the post, it looks like an unsimple theory; and then talking about multiple possibilities makes you sound weak and uncertain.\n\nAs Robyn Dawes [notes](http://www.amazon.com/Rational-Choice-Uncertain-World-Psychology/dp/076192275X/),\n\n> In their summations lawyers avoid arguing from disjunctions in favor of conjunctions. (There are not many closing arguments that end, \"Either the defendant was in severe financial straits and murdered the decedent to prevent his embezzlement from being exposed or he was passionately in love with the same coworker and murdered the decedent in a fit of jealous rage or the decedent had blocked the defendant's promotion at work and the murder was an act of revenge. The State has given you solid evidence to support each of these alternatives, all of which would lead to the same conclusion: first-degree murder.\") Rationally, of course, disjunctions are much *more* probable than are conjunctions.^[1](#AI-FOOM-Debatech50.html#enz.62)^[]{#AI-FOOM-Debatech50.html#enz.62.backref}\n\nAnother test I use is simplifiability---*after* I've analyzed out the idea, can I compress it *back* into an argument that fits on a T-shirt, even if it loses something thereby? 
Here's an example of some compressions:\n\n- The whole notion of recursion and feeding object-level improvements back into meta-level improvements: \"If computing power per dollar doubles every eighteen months, what happens if computers are doing the research?\"\n- No diminishing returns on complexity in the region of the transition to human intelligence: \"We're so similar to chimps in brain design, and yet so much more powerful; the upward slope must be really steep.\"\n- Scalability of hardware: \"Humans have only four times the brain volume of chimps---now imagine an AI suddenly acquiring a thousand times as much power.\"\n\nIf the whole argument was that T-shirt slogan, I wouldn't find it compelling---too simple and surface a metaphor. So you have to look more closely, and try visualizing some details, and make sure the argument can be consistently realized so far as you know. But if, *after* you do that, you can compress the argument back to fit on a T-shirt again---even if it sounds naive and stupid in that form---then that helps show that the argument doesn't *depend* on all the details being true simultaneously; the details might be different while fleshing out the same core idea.\n\nNote also that the three statements above are to some extent disjunctive---you can imagine only one of them being true, but a hard takeoff still occurring for just that reason alone.\n\nAnother trick I use is the idea of *antiprediction*. This is when the narrowness of our human experience distorts our metric on the answer space, and so you can make predictions that actually aren't far from max-entropy priors, but *sound* very startling.\n\nI shall explain:\n\nA news story about an Australian national lottery that was just starting up, interviewed a man on the street, asking him if he would play. He said yes. Then they asked him what he thought his odds were of winning. \"Fifty--fifty,\" he said, \"either I win or I don't.\"\n\nTo predict your odds of winning the lottery, you should invoke the Principle of Indifference with respect to all possible combinations of lottery balls. But this man was invoking the Principle of Indifference with respect to the partition \"win\" and \"not win.\" To him, they sounded like equally simple descriptions; but the former partition contains only one combination, and the latter contains the other N million combinations. (If you don't agree with this analysis, I'd like to sell you some lottery tickets.)\n\nSo the *antiprediction* is just \"You won't win the lottery.\" And the one may say, \"What? How do you know that? You have no evidence for that! You can't prove that I won't win!\" So they are focusing far too much attention on a small volume of the answer space, artificially inflated by the way their attention dwells upon it.\n\nIn the same sense, if you look at a television SF show, you see that [a remarkable number of aliens seem to have human body plans](http://lesswrong.com/lw/so/humans_in_funny_suits/)---two arms, two legs, walking upright, right down to five fingers per hand and the location of eyes in the face. But this is a very narrow partition in the body-plan space; and if you just said, \"They won't look like humans,\" that would be an antiprediction that just steps outside this artificially inflated tiny volume in the answer space.\n\nSimilarly with the true sin of television SF, which is too-human minds, even among aliens not meant to be sympathetic characters. 
\"If we meet aliens, they won't have a sense of humor,\" I antipredict; and to a human it sounds like I'm saying something highly specific, because [all minds by default have a sense of humor](http://lesswrong.com/lw/tt/points_of_departure/), and I'm predicting the presence of a no-humor attribute tagged on. But actually, I'm just predicting that a point in mind-design volume is outside the narrow hyperplane that contains humor.\n\nAn AI might go from infrahuman to transhuman in *less than a week*? But a week is 10^49^ Planck intervals---if you just look at the exponential scale that stretches from the Planck time to the age of the universe, there's nothing special about the timescale that 200 Hz humans happen to live on, any more than there's something special about the numbers on the lottery ticket you bought.\n\nIf we're talking about a starting population of 2 GHz processor cores, then any given AI that FOOMs at all is likely to FOOM in less than 10^15^ sequential operations or more than 10^19^ sequential operations, because the region between 10^15^ and 10^19^ isn't all that wide a target. So less than a week or more than a century, and in the latter case that AI will be trumped by one of a shorter timescale.\n\nThis is actually a pretty naive version of the timescale story. But as an example, it shows how a \"prediction\" that's close to just stating a maximum-entropy prior can sound amazing, startling, counterintuitive, and futuristic.\n\nWhen I make an antiprediction supported by disjunctive arguments that are individually simplifiable, I feel *slightly* less nervous about departing the rails of vetted abstractions. (In particular, I regard this as sufficient reason not to trust the results of generalizations over only human experiences.)\n\nFinally, there are three tests I apply to figure out how strong my predictions are.\n\nThe first test is to just ask myself the Question \"What do you think you know, and why do you think you know it?\" The future is something I haven't yet observed; if my brain claims to know something about it with any degree of confidence, what are the reasons for that? The first test tries to align the strength of my predictions with things that I have reasons to believe---a basic step, but one which brains are surprisingly won't to skip.\n\nThe second test is to ask myself, \"How worried do I feel that I'll have to write an excuse explaining why this happened anyway?\" If I don't feel worried about having to write an excuse---if I can stick my neck out and not feel too concerned about ending up with egg on my face---then clearly my brain really does believe this thing quite strongly, not as a point to be [professed](http://lesswrong.com/lw/i6/professing_and_cheering/) through enthusiastic argument, but as an ordinary sort of fact. Why?\n\nAnd the third test is the \"[So what?](http://lesswrong.com/lw/vx/failure_by_analogy/)\" test---to what degree will I feel indignant if Nature comes back and says, \"So what?\" to my clever analysis? Would I feel as indignant as if I woke up one morning to read in the newspaper that Mars had started orbiting the Sun in squares instead of ellipses? Or, to make it somewhat less strong, as if I woke up one morning to find that banks were charging negative interest on loans? If so, clearly I must possess some kind of *extremely* strong argument---one that even Nature Itself ought to find compelling, not just humans. 
What is it?\n\n[]{#AI-FOOM-Debatech50.html#likesection.70}\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/wm/disjunctions_antipredictions_etc/) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech50.html#enz.62} [1](#AI-FOOM-Debatech50.html#enz.62.backref). []{#AI-FOOM-Debatech50.html#cite.0.Dawes.1988}Robyn M. Dawes, *Rational Choice in An Uncertain World*, 1st ed., ed. Jerome Kagan (San Diego, CA: Harcourt Brace Jovanovich, 1988).\n\n[]{#AI-FOOM-Debatech51.html}\n\n## []{#AI-FOOM-Debatech51.html#x55-}[Chapter 50]{.titlemark} Are AIs *Homo Economicus*? {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [9 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nEliezer [yesterday](../Text/AI-FOOM-Debatech48.html#x52-):\n\n> If I had to pinpoint a single thing that strikes me as \"disagree-able\" about the way Robin frames his analyses, it's that there are a lot of *opaque* agents running around, little black boxes assumed to be similar to humans, but there are more of them and they're less expensive to build/teach/run. . . . The core of my argument has to do with what happens when you pry open the black boxes that are your economic agents, and start fiddling with their brain designs, and leave the tiny human dot in mind-design space.\n\nLots of folks complain about economists; believers in peak oil, the gold standard, recycling, electric cars, rent control, minimum wages, tariffs, and bans on all sorts of things complain about contrary economic analyses. Since compared to most social scientists economists use relatively stark mathy models, the usual complaint is that our models neglect relevant factors and make false assumptions.\n\nBut of course we must neglect most everything, and make false assumptions, to have tractable models; the question in each context is what neglected factors and false assumptions would most mislead us.\n\nIt is odd to hear complaints that economic models assume too much humanity; the usual complaint is the opposite. Unless physicists have reasons to assume otherwise, they usually assume masses are at points, structures are rigid, surfaces are frictionless, and densities are uniform. Similarly, unless economists have reasons to be more realistic in a context, they usually assume people are identical, risk neutral, live forever, have selfish material stable desires, know everything, make no mental mistakes, and perfectly enforce every deal. Products usually last one period or forever, are identical or infinitely varied, etc.\n\nOf course we often do have reasons to be more realistic, considering deals that may not be enforced; people who die; people with diverse desires, info, abilities, and endowments; people who are risk averse, altruistic, or spiteful; people who make mental mistakes; and people who follow \"behavioral\" strategies. But the point isn't just to add as much realism as possible; it is to be clever about knowing which sorts of detail are most relevant in what context.\n\nSo to a first approximation, economists can't usually tell if the agents in their models are AIs or human! But we can still wonder: how could economic models better capture AIs? In common with ems, AIs could make copies of themselves, save backups, and run at varied speeds. 
Beyond ems, AIs might buy or sell mind parts, and reveal mind internals, to show commitment to actions or honesty of stated beliefs. [Of course](http://hanson.gmu.edu/moretrue.pdf),\n\n> That might just push our self-deception back to the process that produced those current beliefs. To deal with self-deception in belief production, we might want to provide audit trails, giving more transparency about the origins of our beliefs.^[1](#AI-FOOM-Debatech51.html#enz.63)^[]{#AI-FOOM-Debatech51.html#enz.63.backref}\n\nSince economists feel they understand the broad outlines of cooperation and conflict pretty well using simple stark models, I am puzzled to hear Eliezer [say](../Text/AI-FOOM-Debatech47.html#x51-):\n\n> If human beings were *really genuinely* selfish, the economy would fall apart or at least have to spend vastly greater resources policing itself. . . . Group coordination mechanisms, executing as adaptations, are critical to the survival of a global economy.\n\nWe think we understand just fine how genuinely selfish creatures can cooperate. Sure, they might have to spend somewhat greater resources on policing, but not *vastly* greater, and a global economy could survive just fine. This seems an important point, as it seems to be why Eliezer fears even nonlocal AI fooms.\n\n[]{#AI-FOOM-Debatech51.html#likesection.71}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/are-ais-homo-ec.html#comment-518247116): The main part you're leaving out of your models (on my view) is the part where AIs can scale on hardware by expanding their brains, and scale on software by redesigning themselves, and these scaling curves are much sharper than \"faster\" let alone \"more populous.\" Aside from that, of course, AIs are more like economic agents than humans are.\n>\n> My statement about \"truly selfish humans\" isn't meant to be about truly selfish AIs, but rather, truly selfish entities with limited human attention spans, who have much worse agent problems than an AI that can monitor all its investments simultaneously and inspect the source code of its advisers. The reason I fear nonlocal AI fooms is precisely that they would have no trouble coordinating to cut the legacy humans out of their legal systems.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/are-ais-homo-ec.html#comment-518247168): Eliezer, economists assume that every kind of product can be improved, in terms of cost and performance, and we have many detailed models of product innovation and improvement. The hardware expansion and software redesign that you say I leave out seem to me included in the mind parts that can be bought or sold. How easy it is to improve such parts, and how much better parts add to mind productivity, is exactly the debate we've been having.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/12/are-ais-homo-ec.html) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech51.html#enz.63} [1](#AI-FOOM-Debatech51.html#enz.63.backref). []{#AI-FOOM-Debatech51.html#cite.0.Hanson.2009a}Robin Hanson, \"Enhancing Our Truth Orientation,\" in *Human Enhancement*, 1st ed., ed. 
Julian Savulescu and Nick Bostrom (New York: Oxford University Press, 2009), 257--274.\n\n[]{#AI-FOOM-Debatech52.html}\n\n## []{#AI-FOOM-Debatech52.html#x56-}[Chapter 51]{.titlemark} Two Visions Of Heritage {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [9 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nEliezer and I seem to disagree on our heritage.\n\n**I see** our main heritage from the past as all the innovations embodied in the design of biological cells/bodies, of human minds, and of the processes/habits of our hunting, farming, and industrial economies. These innovations are mostly steadily accumulating modular \"content\" within our architectures, produced via competitive processes and implicitly containing both beliefs and values. Architectures also change at times as well.\n\nSince older heritage levels grow more slowly, we switch when possible to rely on newer heritage levels. For example, we once replaced hunting processes with farming processes, and within the next century we may switch from bio to industrial mental hardware, becoming ems. We would then rely far less on bio and hunting/farm heritages, though still lots on mind and industry heritages. Later we could make AIs by transferring mind content to new mind architectures. As our heritages continued to accumulate, our beliefs and values should continue to change.\n\nI see the heritage we will pass to the future as mostly avoiding disasters to preserve and add to these accumulated contents. We might get lucky and pass on an architectural change or two as well. As ems [we can avoid](../Text/AI-FOOM-Debatech57.html#x63-) our bio death heritage, allowing some of us to continue on as ancients living on the margins of far future worlds, personally becoming a heritage to the future.\n\nEven today one could imagine overbearing systems of property rights giving almost all income to a few. For example, a few consortiums might own every word or concept and require payments for each use. But we do not have such systems, in part because they would not be enforced. One could similarly imagine future systems granting most future income to a few ancients, but those systems would also not be enforced. Limited property rights, however, such as to land or sunlight, would probably be enforced just to keep peace among future folks, and this would give even unproductive ancients a tiny fraction of future income, plenty for survival among such vast wealth.\n\nIn contrast, it seems **Eliezer sees** a universe where In the Beginning arose a blind and indifferent but prolific creator, who eventually made a race of seeing creators, creators who could also love, and love well. His story of the universe centers on the loves and sights of a team of geniuses of mind design, a team probably alive today. This genius team will see deep into the mysteries of mind, far deeper than all before, and learn to create a seed AI mind architecture which will suddenly, and with little warning or outside help, grow to take over the world. If they are wise, this team will also see deep into the mysteries of love, to make an AI that forever loves what that genius team wants it to love.\n\nAs the AI creates itself it reinvents everything from scratch using only its architecture and raw data; it has little need for other bio, mind, or cultural content. All previous heritage aside from the genius team's architecture and loves can be erased more thoroughly than the Biblical flood supposedly remade the world. 
And forevermore from that point on, the heritage of the universe would be a powerful unrivaled AI singleton, i.e., a God to rule us all, that does and makes what it loves.\n\nIf God's creators were wise then God is unwavering in loving what it was told to love; if they were unwise, then the universe becomes a vast random horror too strange and terrible to imagine. Of course other heritages may be preserved if God's creators told him to love them; and his creators would probably tell God to love themselves, their descendants, their associates, and their values.\n\nThe contrast between these two views of our heritage seems hard to overstate. One is a dry account of small individuals whose abilities, beliefs, and values are set by a vast historical machine of impersonal competitive forces, while the other is a grand inspiring saga of absolute good or evil hanging on the wisdom of a few mythic heroes who use their raw genius and either love or indifference to make a God who makes a universe in the image of their feelings. How does one begin to compare such starkly different visions?\n\n[]{#AI-FOOM-Debatech52.html#likesection.72}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/two-visions-of.html#comment-518239467): Needless to say, I don't think this represents my views even poorly, but to focus on your own summary:\n>\n> > As our heritages continued to accumulate, our beliefs and values should continue to change.\n>\n> You don't seem very upset about this \"values change\" process. Can you give an example of a values change that might occur? Are there values changes that you wouldn't accept, or that you would regard as an overwhelming disaster?\n>\n> Naively, one would expect that a future in which very few agents share your utility function is a universe that will have very little utility from your perspective. Since you don't seem to feel that this is the case, are there things you value that you expect to be realized by essentially arbitrary future agents? What are these things?\n>\n> What is it that your Future contains which is good, which you expect to be realized even if almost no one values this good in itself?\n>\n> If the answer is \"nothing\" then the vision that you have sketched is of a universe empty of value; we should be willing to take almost any risk to prevent its realization.\n>\n> > Even today one could imagine overbearing systems of property rights giving almost all income to a few. For example, a few consortiums might own every word or concept and require payments for each use. But we do not have such systems, in part because they would not be enforced. One could similarly imagine future systems granting most future income to a few ancients, but those systems would also not be enforced.\n>\n> Please walk us through the process by which you think, if most future capital or income were granted to a few ancients under a legacy legal system, a poor majority of AIs would reject this legal system and replace it with something else. What exactly goes through their minds? How is the process of replacing the legacy legal system carried out?\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/two-visions-of.html#comment-518239592): . . . Eliezer, I'll correct errors you point out in views I attribute to you. This post is taking seriously your suggestion to look deeper for the core of our disagreement. My vision isn't of a universe as I want it to be, but of a universe as it is. 
An example of a future values change would be ems only mildly upset at death, when many other recent copies still live. I can see why they would have such values, and it doesn't seem a terrible thing to me. I'll consider writing a new post about rebellion against legacies.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/12/two-visions-of.html) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech53.html}\n\n## []{#AI-FOOM-Debatech53.html#x57-}[Chapter 52]{.titlemark} The Mechanics of Disagreement {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [10 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\nTwo ideal Bayesians cannot have common knowledge of disagreement; this is a theorem. If two rationalist wannabes have common knowledge of a disagreement between them, what could be going wrong?\n\nThe obvious interpretation of these theorems is that if you know that a cognitive machine is a rational processor of evidence, [its beliefs become evidence themselves](http://lesswrong.com/lw/jl/what_is_evidence/).\n\nIf you design an AI and the AI says, \"This fair coin came up heads with 80% probability,\" then you know that the AI has accumulated evidence with a likelihood ratio of 4:1 favoring heads---because the AI only emits that statement under those circumstances.\n\nIt's not a matter of charity; it's just that this is how you think the other cognitive machine works.\n\nAnd if you tell an ideal rationalist, \"I think this fair coin came up heads with 80% probability,\" and they reply, \"I now think this fair coin came up heads with 25% probability,\" and your sources of evidence are independent of each other, then you should accept this verdict, reasoning that (before you spoke) the other mind must have encountered evidence with a likelihood of 1:12 favoring tails.\n\n
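The bookkeeping behind these numbers is just multiplication of odds by likelihood ratios. Here is a minimal sketch of that reasoning, assuming a fair-coin prior of 1:1 and fully independent evidence (the helper functions are ours, introduced only for illustration):

```python
# Minimal sketch of the odds bookkeeping above: beliefs are odds (heads : tails),
# and independent pieces of evidence multiply in as likelihood ratios.
# Assumes a fair-coin prior of 1:1 and full mutual trust, as in the text.
from fractions import Fraction

def prob_to_odds(p):
    """Probability of heads -> odds of heads : tails."""
    return p / (1 - p)

def odds_to_prob(odds):
    """Odds of heads : tails -> probability of heads."""
    return odds / (1 + odds)

prior = Fraction(1, 1)                                # fair coin, 1:1

# You say "80% heads"; that statement implies 4:1 evidence favoring heads.
your_evidence = prob_to_odds(Fraction(4, 5)) / prior  # 4:1

# The other mind hears you, adds its own independent evidence, and says "25% heads".
stated_odds = prob_to_odds(Fraction(1, 4))            # 1:3

# If it took your 4:1 fully into account, its own evidence must have been 1:12 favoring tails.
implied_evidence = stated_odds / (prior * your_evidence)

print(your_evidence)                                           # 4    (i.e., 4:1)
print(implied_evidence)                                        # 1/12 (i.e., 1:12)
print(odds_to_prob(prior * your_evidence * implied_evidence))  # 1/4 -> back to 25%
```

The same arithmetic runs the discounted case below: whatever likelihood ratio the other mind assigns to your statement, dividing its stated odds by that ratio recovers the evidence it must have seen on its own.
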
But this *assumes* that the other mind also thinks that *you're* processing evidence correctly, so that, by the time it says \"I now think this fair coin came up heads, p = .25,\" it has already taken into account the full impact of all the evidence you know about, before adding more evidence of its own.\n\nIf, on the other hand, the other mind doesn't trust your rationality, then it won't accept your evidence at face value, and the estimate that it gives won't integrate the full impact of the evidence you observed.\n\nSo does this mean that when two rationalists trust each other's rationality less than completely, then they can agree to disagree?\n\nIt's not that simple. Rationalists should not trust *themselves* entirely, either.\n\nSo when the other mind accepts your evidence at less than face value, this doesn't say, \"You are less than a perfect rationalist,\" it says, \"I trust you less than you trust yourself; I think that you are discounting your own evidence too little.\"\n\nMaybe your raw arguments seemed to you to have a strength of 40:1, but you discounted for your own irrationality to a strength of 4:1, but the other mind thinks you still overestimate yourself and so it assumes that the actual force of the argument was 2:1.\n\nAnd if you *believe* that the other mind is discounting you in this way, and is unjustified in doing so, then when it says, \"I now think this fair coin came up heads with 25% probability,\" you might bet on the coin at odds of 57% in favor of heads---adding up your further-discounted evidence of 2:1 to the implied evidence of 1:6 that the other mind must have seen to give final odds of 2:6---*if* you even fully trust the other mind's further evidence of 1:6.\n\nI think we have to be very careful to avoid interpreting this situation in terms of anything like a *reciprocal trade*, like two sides making *equal concessions* in order to reach agreement on a business deal.\n\nShifting beliefs is not a concession that you make for the sake of others, expecting something in return; it is an advantage you take for your own benefit, to improve your own map of the world. I am, generally speaking, a [Millie-style altruist](http://ozyandmillie.org/2003/03/24/ozy-and-millie-1134/); but when it comes to *belief shifts* I espouse a pure and principled selfishness: don't believe you're doing it for anyone's sake but your own.\n\nStill, I once read that there's a principle among con artists that the main thing is to get the mark to believe that *you trust them*, so that they'll feel obligated to trust you in turn.\n\nAnd---even if it's for completely different theoretical reasons---if you want to persuade a rationalist to shift belief to match yours, you either need to persuade them that you have all of the same evidence they do and have already taken it into account, or that you already fully trust their opinions as evidence, or that you know better than they do how much they themselves can be trusted.\n\nIt's that last one that's the really sticky point, for obvious reasons of asymmetry of introspective access and asymmetry of motives for overconfidence---how do you resolve that conflict? (And if you started *arguing* about it, then the question wouldn't be which of these were more important as a factor, but rather, which of these factors the Other had under- or overdiscounted in forming their estimate of a given person's rationality . . 
.)\n\nIf I had to name a single reason why two wannabe rationalists wouldn't actually be able to agree in practice, it would be that once you trace the argument to the meta level where theoretically everything can be and must be resolved, the argument trails off into psychoanalysis and noise.\n\nAnd if you look at what goes on in *practice* between two arguing rationalists, it would probably mostly be trading object-level arguments; and the most meta it would get is trying to convince the other person that you've already taken their object-level arguments into account.\n\nStill, this does leave us with three clear reasons that someone might point to, to justify a persistent disagreement---even though the frame of mind of *justification* and having clear reasons to *point to* in front of others is itself antithetical to the spirit of resolving disagreements---but even so:\n\n- *Clearly*, the Other's object-level arguments are flawed; no amount of trust that I can have for another person will make me believe that rocks fall upward.\n- *Clearly*, the Other is not taking my arguments into account; there's an obvious asymmetry in how well I understand them and have integrated their evidence, versus how much they understand me and have integrated mine.\n- *Clearly*, the Other is completely biased in how much they trust themselves over others, versus how I humbly and evenhandedly discount my own beliefs alongside theirs.\n\nSince we don't want to go around encouraging disagreement, one might do well to ponder how all three of these arguments are used by creationists to justify their persistent disagreements with scientists.\n\nThat's one reason I say *clearly*---if it isn't obvious even to outside onlookers, maybe you shouldn't be confident of resolving the disagreement there. Failure at any of these levels implies failure at the meta-levels above it, but the higher-order failures might not be *clear*.\n\n[]{#AI-FOOM-Debatech53.html#likesection.73}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://lesswrong.com/lw/wo/the_mechanics_of_disagreement/pjf): Of course if you knew that your disputant would only disagree with you when one of these three conditions clearly held, you would take their persistent disagreement as showing one of these conditions held, and then back off and stop disagreeing. So to apply these conditions you need the additional implicit condition that they do not believe that you could only disagree under one of these conditions.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/wo/the_mechanics_of_disagreement/) for all comments.\n:::\n\n[]{#AI-FOOM-Debatepa3.html}\n\n# []{#AI-FOOM-Debatepa3.html#x58-57000III}[Part III ]{.titlemark}Conclusion {.partHead}\n\n``{=html}\n\n{.dink}\n\n[]{#AI-FOOM-Debatech54.html}\n\n## []{#AI-FOOM-Debatech54.html#x59-}[Chapter 53]{.titlemark} What Core Argument? {.chapterHead}\n\n{.dink}\n\n### [Robin Hanson]{.chapterAuthor} [10 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n``{=html}\n\nPeople keep asking me to return to the core of the argument, but, well, there's just not much there. Let's review, again. Eliezer suggests someone soon may come up with a seed AI architecture allowing a single AI to within roughly a week grow from unimportant to strong enough to take over the world. 
I'd guess we are talking over twenty orders of magnitude growth in its capability, or sixty doublings.\n\nThis amazing growth rate sustained over such a large magnitude range is far beyond what the vast majority of AI researchers, growth economists, or most any other specialists would estimate. It is also far beyond estimates suggested by the usual choices of historical analogs or trends. Eliezer says the right reference set has two other elements, the origin of life and the origin of human minds, but why should we accept this reference? He also has a math story to suggest this high average growth, but [I've said](../Text/AI-FOOM-Debatech47.html#x51-):\n\n> I also find Eliezer's [growth math](../Text/AI-FOOM-Debatech34.html#x38-) unpersuasive. Usually dozens of relevant factors are coevolving, with several loops of all else equal X growth speeds Y growth speeds etc. Yet usually it all adds up to exponential growth, with rare jumps to faster growth rates. Sure, if you pick two things that plausibly speed each other and leave everything else out including diminishing returns, your math can suggest accelerating growth to infinity, but for a real foom that loop needs to be real strong, much stronger than contrary muting effects.\n\nEliezer has some story about how chimp vs. human brain sizes shows that mind design doesn't suffer diminishing returns or low-hanging-fruit-first slowdowns, but I have yet to comprehend this argument. Eliezer says it is a myth that chip developers need the latest chips to improve chips as fast as they do, so there aren't really diminishing returns there, but chip expert Jed Harris [seems to disagree](http://lesswrong.com/lw/wi/sustained_strong_recursion/per).\n\nMonday Eliezer [said](../Text/AI-FOOM-Debatech48.html#x52-):\n\n> Yesterday I exhausted myself . . . asking . . . \"What do you think you know, and why do you think you know it?\" with respect to, \"How much of the AI problem compresses to large insights, and how much of it is unavoidable nitty-gritty?\"\n\nHis [answer](../Text/AI-FOOM-Debatech48.html#x52-):\n\n> The human brain is a haphazard thing, thrown together by [idiot evolution](http://lesswrong.com/lw/kt/evolutions_are_stupid_but_work_anyway/). . . . If there were any *smaller* modification of a chimpanzee that spontaneously gave rise to a technological civilization, we would be having this conversation at that lower level of intelligence instead.\n>\n> Human neurons run at less than a millionth the speed of transistors. . . . There's no reason to think that the brain's software is any closer to the limits of the possible than its hardware. . . . \\[Consider\\] the manifold known ways in which our high-level thought processes fumble even the simplest problems. Most of these are not deep, inherent flaws of intelligence. . . .\n>\n> We haven't yet *begun* to see the shape of the era of intelligence. Most of the universe is far more extreme than this gentle place, Earth's cradle. . . . Most possible intelligences are not balanced, like these first humans, in that strange small region of temperate weather between an amoeba and a Jupiter Brain. . . . I suppose that to a human a \"week\" sounds like a temporal constant describing a \"short period of time,\" but it's actually 10^49^ Planck intervals.\n\nI feel like the woman in Monty Python's \"Can we have your liver?\" sketch, cowed into giving her liver after hearing how vast is the universe. Sure, evolution being stupid suggests there are substantial architectural improvements to be found. 
*But that says nothing about the relative contribution of architecture and content in minds, nor does it say anything about how easy it will be to quickly find a larger number of powerful architectural improvements!*\n\n[]{#AI-FOOM-Debatech54.html#likesection.74}\n\n------------------------------------------------------------------------\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/what-core-argument.html#comment-518246972): The question \"How compressible is it?\" is not related to the paragraph you quote. It is simply what I actually happened to be doing that day.\n>\n> Twenty orders of magnitude in a week doesn't sound right, unless you're talking about the tail end *after* the AI gets nanotechnology. Figure more like some number of years to push the AI up to a critical point, two to six orders of magnitude improvement from there to nanotech, then some more orders of magnitude after that.\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/what-core-argument.html#comment-518247001): Also, the notion is not that mind design never runs into diminishing returns. Just that you don't hit that point up to human intelligence. The main easily accessible arguments for why you don't hit diminishing returns for some time *after* human intelligence have to do with the idea that there's (a) nothing privileged about human intelligence and (b) lots of visible flaws in it.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/what-core-argument.html#comment-518247067): I don't understand why visible flaws imply a lack of diminishing returns near the human level.\n\n> [Eliezer Yudkowsky](http://www.overcomingbias.com/2008/12/what-core-argument.html#comment-518247151): It means you can go on past human *just* by correcting the flaws. If you look at the actual amount of cognitive work that we devote to the key insights in science, as opposed to chasing red herrings, clinging to silly ideas, or going to the bathroom, then there's at least three orders of magnitude speedup right there, I'd say, on the cognitive part of the process.\n\n> [Robin Hanson](http://www.overcomingbias.com/2008/12/what-core-argument.html#comment-518247177): I'm talking orders of magnitude in total capacity to do things, something like economic product, because that seems the simplest overall metric. If the world has ten orders of magnitude of humans, then something that can take over the world is roughly that much bigger than a human. And presumably this AI starts as far less capable than a human. If this scenario happens in an em world, there'd be lots more stronger creatures to beat.\n>\n> Eliezer, I don't see how that follows *at all*. Just because I can tell that a car's bumper is too heavy doesn't mean I have any idea how to make a car. You need to make a direct and clear argument. . . 
.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://www.overcomingbias.com/2008/12/what-core-argument.html) for all comments.\n:::\n\n[]{#AI-FOOM-Debatech55.html}\n\n## []{#AI-FOOM-Debatech55.html#x60-}[Chapter 54]{.titlemark} What I Think, If Not Why {.chapterHead}\n\n{.dink}\n\n### [Eliezer Yudkowsky]{.chapterAuthor} [11 December 2008]{.chapterDate} {.chapterSubHead .sigil_not_in_toc}\n\n**Reply to:** [Two Visions of Heritage](../Text/AI-FOOM-Debatech52.html#x56-)\n\nThough it really goes tremendously against my grain---it feels like sticking my neck out over a cliff (or something)---I guess I have no choice here but to try and make a list of *just* my positions, without justifying them. We can only talk justification, I guess, after we get straight what my positions *are*. I will also [leave off many disclaimers](http://www.overcomingbias.com/2008/06/against-disclai.html) to present the points *compactly* enough to be remembered.\n\n- A well-designed mind should be ***much more efficient* than a human**, capable of doing more with [less sensory data](http://lesswrong.com/lw/qk/that_alien_message/) and [fewer computing operations](http://lesswrong.com/lw/q9/the_failures_of_eld_science/). It is not *infinitely efficient* and **does not use *zero* data**. But it does use little enough that *local pipelines* such as a small pool of programmer-teachers, and later a huge pool of e-data, are sufficient.\n- An AI that reaches a certain point in its own development becomes able to ([sustainably, strongly](../Text/AI-FOOM-Debatech42.html#x46-)) improve itself. At this point, **[recursive](../Text/AI-FOOM-Debatech23.html#x27-) [cascades](../Text/AI-FOOM-Debatech21.html#x25-) slam over many internal growth curves to near the limits of their current hardware**, and the AI undergoes a vast increase in capability. This point is at, or probably considerably before, a minimally transhuman mind capable of writing its own AI theory textbooks---an upper bound beyond which it could swallow and improve its *entire* design chain.\n- It is *likely* that this capability increase or \"FOOM\" has an intrinsic maximum velocity that a human would regard as \"fast\" if it happens at all. A human week is \~10^15^ serial operations for a population of 2 GHz cores, and a century is \~10^19^ serial operations; this whole range is a narrow window. However, the core argument does not require one-week speed, and a FOOM that takes two years (\~10^17^ serial ops) will still carry the weight of the argument.\n- **The *default* case of FOOM is an unFriendly AI, built by researchers with shallow insights**. This AI becomes able to improve itself in a haphazard way, makes various changes that are net improvements but may introduce value drift, and then gets smart enough to do guaranteed self-improvement, at which point its values freeze (forever).\n- **The *desired* case of FOOM is a Friendly AI**, built using deep insight, so that the AI never makes any changes to itself that potentially change its internal values; all such changes are guaranteed using [strong techniques](http://lesswrong.com/lw/vt/the_nature_of_logic/) that allow for a billion sequential self-modifications without losing the guarantee. 
The guarantee is written over the AI's *internal search criterion* for actions, rather than *external consequences*.\n- The **good guys do *not* write** an AI which values **a bag of things that the programmers think are good ideas**, like libertarianism or socialism or making people happy or whatever. There were multiple *Less Wrong* sequences about this *one point*, like the [Fake Utility Function sequence](http://lesswrong.com/lw/lp/fake_fake_utility_functions/) and the sequence on metaethics. It is dealt with at length in the document [Coherent Extrapolated Volition](http://intelligence.org/files/CEV.pdf). It is the first thing, the last thing, and the middle thing that I say about Friendly AI. I have said it over and over. I truly do not understand how anyone can pay *any* attention to *anything* I have said on this subject and come away with the impression that I think programmers are supposed to directly impress their nonmeta personal philosophies onto a Friendly AI.\n- **The good guys do not directly impress their personal values onto a Friendly AI.**\n- Actually setting up a Friendly AI's values is **an extremely *meta* operation,** less \"make the AI want to make people happy\" and more like \"[**superpose** the possible **reflective equilibria** of the **whole human species**, and **output new code** that overwrites the current AI and has the **most coherent** support within that superposition](http://intelligence.org/files/CEV.pdf).\"^[1](#AI-FOOM-Debatech55.html#enz.64)^[]{#AI-FOOM-Debatech55.html#enz.64.backref} This actually seems to be something of a *pons asinorum* in FAI---the ability to understand and endorse metaethical concepts that do not *directly* sound like amazing wonderful happy ideas. **Describing this as declaring total war on the rest of humanity does not seem [fair](http://lesswrong.com/lw/ru/the_bedrock_of_fairness/)** (or accurate).\n- **I myself am strongly individualistic:** The most painful memories in my life have been when other people thought they knew better than me, and tried to do things on my behalf. It is also a known principle of hedonic psychology that people are happier when they're steering their own lives and doing their own interesting work. When I try myself to visualize what a beneficial superintelligence ought to do, it consists of **setting up a world that works by better rules, and then fading into the background,** silent as the laws of Nature once were, and finally folding up and vanishing when it is no longer needed. 
But this is only the thought of my mind that is merely human, and **I am barred from programming any such consideration *directly* into a Friendly AI,** for the reasons given above.\n- Nonetheless, it does seem to me that this particular scenario **could not be justly described as \"a God to rule over us all,\"** unless the current fact that humans age and die is \"a malevolent God to rule us all.\" So either Robin has a very different idea about what human reflective equilibrium values are likely to look like; or Robin believes that the Friendly AI project is bound to *fail* in such a way as to create a paternalistic God; or---and this seems more likely to me---Robin didn't read all the way through all the blog posts in which I tried to explain all the ways that this is not how Friendly AI works.\n- **Friendly AI is technically difficult and requires an [extra-ordinary](http://lesswrong.com/lw/uo/make_an_extraordinary_effort/) effort on multiple levels.** [English sentences](http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/) like \"make people happy\" cannot describe the values of a Friendly AI. [Testing is not sufficient to guarantee that values have been successfully transmitted](http://lesswrong.com/lw/td/magical_categories/).\n- White-hat AI researchers are distinguished by the degree to which **they understand that a single misstep could be fatal, and can discriminate strong and weak assurances.** Good intentions are not only common, they're cheap. The story isn't about good versus evil, it's about people trying to [do the impossible](http://lesswrong.com/lw/up/shut_up_and_do_the_impossible/) versus [others](http://lesswrong.com/lw/uc/aboveaverage_ai_scientists/) who . . . aren't.\n- Intelligence is about being able to **learn lots of things, not about knowing lots of things.** Intelligence is especially not about tape-recording lots of parsed English sentences à la Cyc. Old AI work was poorly focused due to inability to introspectively see the first and higher *derivatives* of knowledge; human beings have an easier time reciting sentences than reciting their ability to learn.\n- **Intelligence is mostly about architecture,** or \"knowledge\" along the lines of knowing to look for causal structure (Bayes-net type stuff) in the environment; this kind of knowledge will usually be expressed procedurally as well as declaratively. **Architecture is mostly about deep insights.** This point has not yet been addressed (much) on *Overcoming Bias*, but Bayes nets can be considered as an archetypal example of \"architecture\" and \"deep insight.\" Also, ask yourself how lawful intelligence seemed to you before you started reading this blog, how lawful it seems to you now, then extrapolate outward from that.\n\n[]{#AI-FOOM-Debatech55.html#likesection.75}\n\n------------------------------------------------------------------------\n\n> [Robin Hanson](http://lesswrong.com/lw/wp/what_i_think_if_not_why/pjt): I understand there are various levels on which one can express one's loves. One can love Suzy, or kind pretty funny women, or the woman selected by a panel of judges, or the one selected by a judging process designed by a certain AI strategy, etc. But even very meta loves are loves. You want an AI that loves the choices made by a certain meta process that considers the wants of many, and that may well be a superior love. But it is still a love, your love, and the love you want to give the AI. 
You might think the world should be grateful to be placed under the control of such a superior love, but many of them will not see it that way; they will see your attempt to create an AI to take over the world as an act of war against them.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/wp/what_i_think_if_not_why/pjy): Robin, using the word \"love\" sounds to me distinctly like something intended to evoke object-level valuation. \"Love\" is an archetype of direct valuation, not an archetype of metaethics.\n>\n> And I'm not so much of a mutant that, rather than liking cookies, I like everyone having their reflective equilibria implemented. Taking that step is *the substance of my attempt to be fair*. In the same way that someone voluntarily splitting up a pie into three shares is not on the same moral level as someone who seizes the whole pie for themselves---even if, *by volunteering to do the fair thing rather than some other thing*, they have shown themselves to value fairness.\n>\n> My take on this was given in \"[The Bedrock of Fairness](http://lesswrong.com/lw/ru/the_bedrock_of_fairness/)\".^[2](#AI-FOOM-Debatech55.html#enz.65)^[]{#AI-FOOM-Debatech55.html#enz.65.backref}\n>\n> But you might as well say, \"George Washington gave in to his desire to be a tyrant; he was just a tyrant who wanted democracy.\" Or, \"Martin Luther King declared total war on the rest of the US, since what he wanted was a nonviolent resolution.\"\n>\n> Similarly with \"I choose not to control you\" being a form of controlling.\n\n> [Robin Hanson](http://lesswrong.com/lw/wp/what_i_think_if_not_why/pk5): In a foom that took two years, if the AI was visible after one year, that might give the world a year to destroy it.\n\n> [Eliezer Yudkowsky](http://lesswrong.com/lw/wp/what_i_think_if_not_why/pk7): Robin, we're still talking about a local foom. Keeping security for two years may be difficult but is hardly unheard-of.\n\n------------------------------------------------------------------------\n\n::: {.center}\nSee [original post](http://lesswrong.com/lw/wp/what_i_think_if_not_why/) for all comments.\n:::\n\n------------------------------------------------------------------------\n\n[]{#AI-FOOM-Debatech55.html#enz.64} [1](#AI-FOOM-Debatech55.html#enz.64.backref). []{#AI-FOOM-Debatech55.html#cite.0.Yudkowsky.2004}Eliezer Yudkowsky, *Coherent Extrapolated Volition* (The Singularity Institute, San Francisco, CA, May 2004),