{"text": "AI Impacts Quarterly Newsletter, Jan-Mar 2023\n\nHarlan Stewart, 17 April 2023\nNews\nAI Impacts blog\nWe moved our blog to Substack! We think this platform has many advantages, and we’re excited for the blog to live here. You can now easily subscribe to the blog to receive regular newsletters as well as various thoughts and observations related to AI.\nAI Impacts wiki\nAll AI Impacts research pages now reside on the AI Impacts Wiki. The wiki aims to document what we know so far about decision-relevant questions about the future of AI. Our pages have always been wiki-like: updatable reference pages organized by topic. We hope that making it an actual wiki will make it clearer to everyone what’s going on, as well as better to use for this purpose, for both us and readers. We are actively looking for ways to make the wiki even better, and you can help with this by sharing your thoughts in our feedback form or in the comments of this blog post!\nNew office\nWe recently moved to a new office that we are sharing with FAR AI and other partner organizations. We’re extremely grateful to the team at FAR for organizing this office space, as well as to the Lightcone team for hosting us over the last year and a half.\nKatja Grace talks about forecasting AI risk at EA Global\nAt EA Global Bay Area 2023, Katja gave a talk titled Will AI end everything? A guide to guessing in which she outlined a way to roughly estimate the extent of AI risk.\nAI Impacts in the Media\n\nAI Impacts’ 2022 Expert Survey on Progress in AI was cited in an NBC Nightly News segment, an op-ed in Bloomberg, an op-ed in The New York Times, an article in Our World in Data, and an interview with Kelsey Piper.\nEzra Klein quoted Katja and separately cited the survey in his New York Times op-ed This Changes Everything.\nSigal Samuel interviewed Katja for the Vox article The case for slowing down AI.\n\nResearch and writing highlights\nAI Strategy\n\n“Let’s think about slowing down AI” argues that those who are concerned about existential risks from AI should think about strategies that could slow the progress of AI (Katja)\n“Framing AI strategy” discusses ten frameworks for thinking about AI strategy. (Zach)\n“Product safety is a poor model for AI governance” argues that a common type of policy proposal is inadequate to address the risks of AI. (Rick)\n“Alexander Fleming and Antibiotic Resistance” is a research report about early efforts to prevent antibiotic resistance and relevant lessons for AI risk. (Harlan)\n\nResisted technological temptations: how much economic value has been forgone for safety and ethics in past technologies?\n\n“What we’ve learned so far from our technological temptations project” is a blog post that summarizes the Technological Temptations project and some possible takeaways. (Rick)\nGeoengineering, nuclear power, and vaccine challenge trials were evaluated for the amount of value that may have been forgone by not using them. (Jeffrey)\n\nPublic awareness and opinions about AI\n\n“The public supports regulating AI for safety” summarizes the results from a survey of the American public about AI. (Zach)\n“How popular is ChatGPT?”: Part 1 looks at trends in AI-related search volume, and Part 2 refutes a widespread claim about the growth of ChatGPT. (Harlan and Rick)\n\nThe state of AI today: funding, hardware, and capabilities\n\n“Recent trends in funding for AI companies” analyzes data about the amount of funding AI companies have received. 
(Rick)\n“How much computing capacity exists in GPUs and TPUs in Q1 2023?” uses a back-of-the-envelope calculation to estimate the total amount of compute that exists on all GPUs and TPUs. (Harlan)\n“Capabilities of state-of-the-art AI, 2023” is a list of some noteworthy things that state-of-the-art AI can do. (Harlan and Zach)\n\nArguments for AI risk\n\nStill in progress, “Is AI an existential risk to humanity?” is a partially complete page summarizing various arguments for concern about existential risk from AI. A couple of specific arguments are examined more closely in “Argument for AI x-risk from competent malign agents” and “Argument for AI x-risk from large impacts” (Katja)\n\nChaos theory and what it means for AI safety\n\n“AI Safety Arguments Affected by Chaos” reasons about ways in which chaos theory could be relevant to predictions about AI, and “Chaos in Humans” explores the theoretical limits to predicting human behavior. The report “Chaos and Intrinsic Unpredictability” provides background, and a blog post summarizes the project. (Jeffrey and Aysja)\n\nMiscellany\n\n“How bad a future do ML researchers expect?” compares experts’ answers in 2016 and 2022 to the question “How positive or negative will the impacts of high-level machine intelligence on humanity be in the long run?” (Katja)\n“We don’t trade with ants” (crosspost) disputes the common claim that advanced AI systems won’t trade with humans for the same reason that humans don’t trade with ants. (Katja)\n\nFunding\nWe’re actively seeking financial support to continue our research and operations for the rest of the year. Previous funding allowed us to expand our research team and hold a summer internship program.\nIf you want to talk to us about why we should be funded or hear more details about what we would do with money, please write to Elizabeth, Rick, or Katja at [firstname]@aiimpacts.org.\nIf you’d like to donate to AI Impacts, you can do so here. (And we thank you!)Image credit: Midjourney", "url": "https://aiimpacts.org/ai-impacts-quarterly-newsletter-jan-mar-2023/", "title": "AI Impacts Quarterly Newsletter, Jan-Mar 2023", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2023-04-17T22:02:42+00:00", "paged_url": "https://aiimpacts.org/feed?paged=1", "authors": ["Harlan Stewart"], "id": "5767e4b5e1aabc3dcd39fadc893f40f7", "summary": []} {"text": "What we’ve learned so far from our technological temptations project\n\nRick Korzekwa, 11 April 2023, updated 13 April 2023\nAt AI Impacts, we’ve been looking into how people, institutions, and society approach novel, powerful technologies. One part of this is our technological temptations project, in which we are looking into cases where some actors had a strong incentive to develop or deploy a technology, but chose not to or showed hesitation or caution in their approach. Our researcher Jeffrey Heninger has recently finished some case studies on this topic, covering geoengineering, nuclear power, and human challenge trials.\nThis document summarizes the lessons I think we can take from these case studies. Much of it is borrowed directly from Jeffrey’s written analysis or conversations I had with him, some of it is my independent take, and some of it is a mix of the two, which Jeffrey may or may not agree with. All of it relies heavily on his research.\nThe writing is somewhat more confident than my beliefs. 
Some of this is very speculative, though I tried to flag the most speculative parts as such.\nSummary\nJeffrey Heninger investigated three cases of technologies that create substantial value, but were not pursued or pursued more slowly\nThe overall scale of value at stake was very large for these cases, on the order of hundreds of billions to trillions of dollars. But it’s not clear who could capture that value, so it’s not clear whether the temptation was closer to $10B or $1T.\nSocial norms can generate strong disincentives for pursuing a technology, especially when combined with enforceable regulation.\nScientific communities and individuals within those communities seem to have particularly high leverage in steering technological development at early stages.\nInhibiting deployment can inhibit development for a technology over the long term, at least by slowing cost reductions.\nSome of these lessons are transferable to AI, at least enough to be worth keeping in mind.\nOverview of cases\n\nGeoengineering could feasibly provide benefits of $1-10 trillion per year through global warming mitigation, at a cost of $1-10 billion per year, but actors who stand to gain the most have not pursued it, citing a lack of research into its feasibility and safety. Research has been effectively prevented by climate scientists and social activist groups.\nNuclear power has proliferated globally since the 1950s, but many countries have prevented or inhibited the construction of nuclear power plants, sometimes at an annual cost of tens of billions of dollars and thousands of lives. This is primarily done through legislation, like Italy’s ban on all nuclear power, or through costly regulations, like safety oversight in the US that has increased the cost of plant construction in the US by a factor of ten.\nHuman challenge trials may have accelerated deployment of covid vaccines by more than a month, saving many thousands of lives and billions or trillions of dollars. Despite this, the first challenge trial for a covid vaccine was not performed until after several vaccines had been tested and approved using traditional methods. This is consistent with the historical rarity of challenge trials, which seems to be driven by ethical concerns and enforced by institutional review boards.\n\nScale\nThe first thing to notice about these cases is the scale of value at stake. Mitigating climate change could be worth hundreds of billions or trillions of dollars per year, and deploying covid vaccines a month sooner could have saved many thousands of lives. While these numbers do not represent a major fraction of the global economy or the overall burden of disease, they are large compared to many relevant scales for AI risk. The world’s most valuable companies have market caps of a few trillion dollars, and the entire world spends around two trillion dollars per year on defense. In comparison, annual funding for AI is on the order of $100B.1\nComparison between the potential gains from mitigating global warming and deploying covid vaccines faster. 
These items were somewhat arbitrarily chosen, and most of the numbers were not carefully researched, but they should be in the right ballpark.\nSetting aside for the moment who could capture the value from a technology and whether the reasons for delaying or forgoing its development are rational or justified, I think it is worth recognizing that the potential upsides are large enough to create strong incentives.\nSocial norms\nMy read on these cases is that a strong determinant for whether a technology will be pursued is social attitudes toward the technology and its regulation. I’m not sure what would have happened if Pfizer had, in defiance of FDA standards and medical ethics norms, infected volunteers with covid as part of their vaccine testing, but I imagine it would have been more severe than fines or difficulty obtaining FDA approval. They would have lost standing in the medical community and possibly been unable to continue existing as a company. This goes similarly for other technologies and actors. Building nuclear power plants without adhering to safety standards is so far outside the range of acceptable actions that even suggesting it as a strategy for running a business or addressing climate change is a serious risk to reputation for a CEO or public official. An oil company executive who finances a project to disperse aerosols into the upper atmosphere to reduce global warming and protect his business sounds like a Bond movie villain.\n\nThis is not to suggest that social norms are infinitely strong or that they are always well-aligned with society’s interests. Governments and corporations will do things that are widely viewed as unethical if they think they can get away with it, for example, by doing it in secret.2 And I think that public support for our current nuclear safety regime is gravely mistaken. But strong social norms, either against a technology or against breaking regulations do seem able, at least in some cases, to create incentives strong enough to constrain valuable technologies.\nThe public\nThe public plays a major role in defining and enforcing the range of acceptable paths for technology. Public backlash in response to early challenge trials set the stage for our current ethics standards, and nuclear power faces crippling safety regulations in large part because of public outcry in response to a perceived lack of acceptable safety standards. In both of these cases, the result was not just the creation of regulations, but strong buy-in and a souring of public opinion on a broad category of technologies.3\nAlthough public opposition can be a powerful force in expelling things from the Overton window, it does not seem easy to predict or steer. The Chernobyl disaster made a strong case for designing reactors in a responsible way, but it was instead viewed by much of the public as a demonstration that nuclear power should be abolished entirely. I do not have a strong take on how hard this problem is in general, but I do think it is important and should be investigated further.\nThe scientific community\nThe precise boundaries of acceptable technology are defined in part by the scientific community, especially when technologies are very early in development. Policy makers and the public tend to defer to what they understand to be the official, legible scientific view when deciding what is or is not okay. 
This does not always match with actual views of scientists.\nGeoengineering as an approach to reducing global warming has not been recommended by the IPCC, and a minority of climate scientists support research into geoengineering. Presumably the advocacy groups opposing geoengineering experiments would have faced a tougher battle if the official stance from the climate science community were in favor of geoengineering.\nOne interesting aspect of this is that scientific communities are small and heavily influenced by individual prestigious scientists. The taboo on geoengineering research was broken by the editor of a major climate journal, after which the number of papers on the topic increased by more than a factor of 20 after two years.4\nScientific papers published on solar radiation management by year. Paul Crutzen, an influential climate scientist, published a highly-cited paper on the use of aerosols to mitigate global warming in 2006. Oldham, et al 2014.\nI suspect the public and policymakers are not always able to tell the difference between the official stance of regulatory bodies and the consensus of scientific communities. My impression is that scientific consensus is not in favor of radiation health models used by the Nuclear Regulatory Commission, but many people nonetheless believe that such models are sound science.\nWarning shots\nPast incidents like the Fukushima disaster and the Tuskegee syphilis study are frequently cited by opponents of nuclear power and human challenge trials. I think this may be significant, because it suggests that these “warning shots” have done a lot to shape perception of these technologies, even decades later. One interpretation of this is that, regardless of why someone is opposed to something, they benefit from citing memorable events when making their case. Another, non-competing interpretation is that these events are causally important in the trajectory of these technologies’ development and the public’s perception of them.\nI’m not sure how to untangle the relative contribution of these effects, but either way, it suggests that such incidents are important for shaping and preserving norms around the deployment of technology.\nLocality\nIn general, social norms are local. Building power plants is much more acceptable in France than it is in Italy. Even if two countries allow the construction of nuclear power plants and have similarly strong norms against breaking nuclear safety regulations, those safety regulations may be different enough to create a large difference in plant construction between countries, as seen with the US and France.\nBecause scientific communities have members and influence across international borders, they may have more sway over what happens globally (as we’ve seen with geoengineering), but this may be limited by local differences in the acceptability of going against scientific consensus.\nDevelopment trajectories\nA common feature of these cases is that preventing or limiting deployment of the technology inhibited its development. Because less developed technologies are less useful and harder to trust, this seems to have helped reduce deployment.\nNormally, things become cheaper to make as we make more of them in a somewhat predictable way. The cost goes down with the total amount that has been produced, following a power law. This is what has been happening with solar and wind power.5\nLevelized cost of energy for wind and solar power, as a function of total capacity built. 
Levelized cost includes cost building, operating, and maintaining wind and solar farms. Bolinger 2022\nInitially, building nuclear power plants seems to have become cheaper in the usual way for new technology—doubling the total capacity of nuclear power plants reduced the cost per kilowatt by a constant fraction. Starting around 1970, regulations and public opposition to building plants did more than increase construction costs in the near term. By reducing the number of plants built and inhibiting small-scale design experiments, it slowed the development of the technology, and correspondingly reduced the rate at which we learned to build plants cheaply and safely.6 Absent reductions in cost, they continue to be uncompetitive with other power generating technologies in many contexts.\nNuclear power in France and the US followed typical cost reduction curves until roughly 1970, after which they showed the opposite behavior. However, France showed a much more gradual increase. Lang 2017.\nBecause solar radiation management acts on a scale of months-to-years and the costs of global warming are not yet very high, I am not surprised that we have still not deployed it. But this does not explain the lack of research, and one of the reasons given for opposition to experiments is that it has not been shown to be safe. But the reason we lack evidence on safety is because research has been opposed, even at small scales.\nIt is less clear to me how much the relative lack of human challenge trials in the past7 has made us less able to do them well now. I’m also not sure how much a stronger past record of challenge trials would cause them to be viewed more positively. Still, absent evidence that medical research methodology does not improve in the usual way with quantity of research, I expect we are at least somewhat less effective at performing human challenge trials than we otherwise would be.\nSeparating safety decisions from gains of deployment\nI think it’s impressive that regulatory bodies are able to prevent use of technology even when the cost of doing so is on the scale of many billions, plausibly trillions of dollars. One of the reasons this works seems to be that regulators will be blamed if they approve something and it goes poorly, but they will not receive much credit if things go well. Similarly, they will not be held accountable for failing to approve something good. This creates strong incentives for avoiding negative outcomes while creating little incentive to seek positive outcomes. I’m not sure if this asymmetry was deliberately built into the system or if it is a side effect of other incentive structures (e.g, at the level of politics, there is more benefit from placing blame than there is from giving credit), but it is a force to be reckoned with, especially in contexts where there is a strong social norm against disregarding the judgment of regulators.\nWho stands to gain\nIt is hard to assess which actors are actually tempted by a technology. While society at large could benefit from building more nuclear power plants, much of the benefit would be dispersed as public health gains, and it is difficult for any particular actor to capture that value. 
Similarly, while many deaths could have been prevented if the covid vaccines had been available two months earlier, it is not clear if this value could have been captured by Pfizer or Moderna–demand for vaccines was not changing that quickly.\nOn the other hand, not all the benefits are external–switching from coal to nuclear power in the US could save tens of billions of dollars a year, and drug companies pay billions of dollars per year for trials. Some government institutions and officials have the stated goal of creating benefits like public health, in addition to economic and reputational stakes in outcomes like the quick deployment of vaccines during a pandemic. These institutions pay costs and make decisions on the basis of economic and health gains from technology (for example, subsidizing photovoltaics and obesity research), suggesting they have incentive to create that value.\nOverall, I think this lack of clarity around incentives and capture of value is the biggest reason for doubt that these cases demonstrate strong resistance to technological temptation.\nWhat this means for AI\nHow well these cases generalize to AI will depend on facts about AI that are not yet known. For example, if powerful AI requires large facilities and easily-trackable equipment, I think we can expect lessons from nuclear power to be more transferable than if it can be done at a smaller scale with commonly-available materials. Still, I think some of what we’ve seen in these cases will transfer to AI, either because of similarity with AI or because they reflect more general principles.\nSocial norms\nThe main thing I expect to generalize is the power of social norms to constrain technological development. While it is far from guaranteed to prevent irresponsible AI development, especially if building dangerous AI is not seen as a major transgression everywhere that AI is being developed, it does seem like the world is much safer if building AI in defiance of regulations is seen as similarly villainous to building nuclear reactors or infecting study participants without authorization. We are not at that point, but the public does seem prepared to support concrete limits on AI development.\nI do think there are reasons for pessimism about norms constraining AI. For geoengineering, the norms worked by tabooing a particular topic in a research community, but I’m not sure if this will work with a technology that is no longer in such an early stage. AI already has a large body of research and many people who have already invested their careers in it. For medical and nuclear technology, the norms are powerful because they enforce adherence to regulations, and those regulations define the constraints. But it can be hard to build regulations that create the right boundaries around technology, especially something as imprecisely defined as AI. If someone starts building a nuclear power plant in the US, it will become clear relatively early on that this is what they are doing, but a datacenter training an AI and a datacenter updating a search engine may be difficult to tell apart.\nAnother reason for pessimism is tolerance for failure. Past technologies have mostly carried risks that scaled with how much of the technology was built. For example, if you’re worried about nuclear waste, you probably think two power plants are about twice as bad as one. While risk from AI may turn out this way, it may be that a single powerful system poses a global risk. 
If this does turn out to be the case, then even if strong norms combine with strong regulation to achieve the same level of success as for nuclear power, it still will not be adequate.\nDevelopment gains from deployment\nI’m very uncertain how much development of dangerous AI will be hindered by constraints on deployment. I think approximately all technologies face some limitations like this, in some cases very severe limitations, as we’ve seen with nuclear power. But we’re mainly interested in the gains to development toward dangerous systems, which may be possible to advance with little deployment. Adding to the uncertainty, there is ambiguity where the line is drawn between testing and deployment or whether allowing the deployment of verifiably safe systems will provide the gains needed to create dangerous systems.\nSeparating safety decisions from gains\nI do not see any particular reason to think that asymmetric justice will operate differently with AI, but I am uncertain whether regulatory systems around AI, if created, will have such incentives. I think it is worth thinking about IRB-like models for AI safety.\nCapture of value\nIt is obvious there are actors who believe they can capture substantial value from AI (for example Microsoft recently invested $10B in OpenAI), but I’m not sure how this will go as AI advances. By default, I expect the value created by AI to be more straightforwardly capturable than for nuclear power or geoengineering, but I’m not sure how it differs from drug development.\nSocial preview image: German anti-nuclear power protesters in 2012. Used under Creative Commons license from Bündnis 90/Die Grünen Baden-Württemberg Flickr\nNotes", "url": "https://aiimpacts.org/what-weve-learned-so-far-from-our-technological-temptations-project/", "title": "What we’ve learned so far from our technological temptations project", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2023-04-14T00:04:40+00:00", "paged_url": "https://aiimpacts.org/feed?paged=1", "authors": ["richardkorzekwa"], "id": "d5b6f9ff6c59d13928835333066af156", "summary": []} {"text": "Superintelligence Is Not Omniscience\n\nJeffrey Heninger and Aysja Johnson, 7 April 2023\nThe Power of Intelligence\nIt is often implicitly assumed that the power of a superintelligence will be practically unbounded. There seems like there could be “ample headroom” above humans, i.e. that a superintelligence will be able to vastly outperform us across virtually all domains.\nBy “superintelligence,” I mean something which has arbitrarily high cognitive ability, or an arbitrarily large amount of compute, memory, bandwidth, etc., but which is bound by the physical laws of our universe.1 There are other notions of “superintelligence” which are weaker than this. Limitations of the abilities of this superintelligence would also apply to anything less intelligent.\nThere are some reasons to believe this assumption. For one, it seems a bit suspicious to assume that humans have close to the maximal possible intelligence. Secondly, AI systems already outperform us in some tasks,2 so why not suspect that they will be able to outperform us in almost all of them? 
Finally, there is a more fundamental notion about the predictability of the world, described most famously by Laplace in 1814:\n\nGiven for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it – an intelligence sufficiently vast to submit this data to analysis – it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present in its eyes.3\n\nWe are very far from completely understanding, and being able to manipulate, everything we care about. But if the world is as predictable as Laplace suggests, then we should expect that a sufficiently intelligent agent would be able to take advantage of that regularity and use it to excel at any domain.\nThis investigation questions that assumption. Is it actually the case that a superintelligence has practically unbounded intelligence, or are there “ceilings” on what intelligence is capable of? To foreshadow a bit, there are ceilings in some domains that we care about, for instance, in predictions about the behavior of the human brain. Even unbounded cognitive ability does not imply unbounded skill when interacting with the world. For this investigation, I focus on cognitive skills, especially predicting the future. This seems like a realm where a superintelligence would have an unusually large advantage (compared to e.g. skills requiring dexterity), so restrictions on its skill here are more surprising.\nThere are two ways for there to be only a small amount of headroom above human intelligence. The first is that the task is so easy that humans can do it almost perfectly, like playing tic-tac-toe. The second is that the task is so hard that there is a “low ceiling”: even a superintelligence is incapable of being very good at it. This investigation focuses on the second.\nThere are undoubtedly many tasks where there is still ample headroom above humans. But there are also some tasks for which we can prove that there is a low ceiling. These tasks provide some limitations on what is possible, even with arbitrarily high intelligence.\nChaos Theory\nThe main tool used in this investigation is chaos theory. Chaotic systems are things for which uncertainty grows exponentially in time. Most of the information measured initially is lost after a finite amount of time, so reliable predictions about its future behavior are impossible.\nA classic example of chaos is the weather. Weather is fairly predictable for a few days. Large simulations of the atmosphere have gotten consistently better for these short-time predictions.4\n After about 10 days, these simulations become useless. The predictions from the simulations are worse than guessing what the weather might be using historical climate data from that location.\nChaos theory provides a response to Laplace. Even if it were possible to exactly predict the future given exact initial conditions and equations of motion,5 chaos makes it impossible to approximately predict the future using approximate initial conditions and equations of motion. Reliable predictions can only be made for a short period of time, but not once the uncertainty has grown large enough.\nThere is always some small uncertainty. Normally, we do not care: approximations are good enough. But when there is chaos, the small uncertainties matter. 
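To make “uncertainty grows exponentially in time” concrete, here is a minimal numerical sketch in Python. It is an illustration added here, not part of the original argument: it uses the logistic map, a standard textbook example of chaos, and the 10⁻¹⁰ starting error and the 0.1 threshold are arbitrary choices.

# Minimal sketch (illustrative): a tiny difference between two trajectories of a
# chaotic system grows roughly exponentially until it is macroscopic.
def logistic(x):
    return 4.0 * x * (1.0 - x)   # the logistic map at r = 4, a standard chaotic example

x = 0.4            # one trajectory
y = 0.4 + 1e-10    # a second trajectory, initially differing by one part in 10^10

for step in range(1, 100):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.1:         # the separation now spans a tenth of the state space
        print(f"An initial error of 1e-10 became macroscopic after {step} steps")
        break

For this map the separation roughly doubles per step on average, so the loop stops after about 30 iterations; past that point the two trajectories are effectively unrelated, which is the sense in which the initial information has been lost.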
There are many ways small uncertainties can arise: Every measuring device has a finite precision.6 Every theory should only be trusted in the regimes where it has been tested. Every algorithm for evaluating the solution has some numerical error. There are external forces you are not considering that the system is not fully isolated from. At small enough scales, thermal noise and quantum effects provide their own uncertainties. Some of this uncertainty could be reduced, allowing reliable predictions to be made for a bit longer.7 Other sources of this uncertainty cannot be reduced. Once these microscopic uncertainties have grown to a macroscopic scale, the motion of the chaos is inherently unpredictable.\nCompletely eliminating the uncertainty would require making measurements with perfect precision, which does not seem to be possible in our universe. We can prove that fundamental sources of uncertainty make it impossible to know important things about the future, even with arbitrarily high intelligence. Atomic scale uncertainty, which is guaranteed to exist by Heisenberg’s Uncertainty Principle, can make macroscopic motion unpredictable in a surprisingly short amount of time. Superintelligence is not omniscience.\nChaos theory thus allows us to rigorously show that there are ceilings on some particular abilities. If we can prove that a system is chaotic, then we can conclude that the system offers diminishing returns to intelligence. Most predictions of the future of a chaotic system are impossible to make reliably. Without the ability to make better predictions, and plan on the basis of these predictions, intelligence becomes much less useful.\nThis does not mean that intelligence becomes useless, or that there is nothing about chaos which can be reliably predicted. \nFor relatively simple chaotic systems, even when what in particular will happen is unpredictable, it is possible to reliably predict the statistics of the motion.8 We have learned sophisticated ways of predicting the statistics of chaotic motion,9 and a superintelligence could be better at this than we are. It is also relatively easy to sample from this distribution to emulate behavior which is qualitatively similar to the motion of the original chaotic system.\nBut chaos can also be more complicated than this. The chaos might be non-stationary, which means that the statistical distribution and qualitative description of the motion themselves change unpredictably in time. The chaos might be multistable, which means that it can do statistically and qualitatively different things depending on how it starts. In these cases, it is also impossible to reliably predict the statistics of the motion, or to emulate a typical example of a distribution which is itself changing chaotically. Even in these cases, there are sometimes still patterns in the chaos which allow a few predictions to be made, like the energy spectra of fluids.10 These patterns are hard to find, and it is possible that a superintelligence could find patterns that we have missed. But it is not possible for the superintelligence to recover the vast amount of information rendered unpredictable by the chaos.\nThis Investigation\nThis blog post is the introduction to an investigation which explores these points in more detail. I will describe what chaos is, how humanity has learned to deal with chaos, and where chaos appears in things we care about – including in the human brain itself. 
Links to the other pages, blog posts, and report that constitute this investigation can be found below.\nMost of the systems we care about are considerably messier than the simple examples we use to explain chaos. It is more difficult to prove claims about the inherent unpredictability of these systems, although it is still possible to make some arguments about how chaos affects them.\nFor example, I will show that individual neurons, small networks of neurons, and in vivo neurons in sense organs can behave chaotically.11 Each of these can also behave non-chaotically in other circumstances. But we are more interested in the human brain as a whole. Is the brain mostly chaotic or mostly non-chaotic? Does the chaos in the brain amplify uncertainty all the way from the atomic scale to the macroscopic, or is the chain of amplifying uncertainty broken at some non-chaotic mesoscale? How does chaos in the brain actually impact human behavior? Are there some things that brains do for which chaos is essential?\nThese are hard questions to answer, and they are, at least in part, currently unsolved. They are worth investigating nevertheless. For instance, it seems likely to me that the chaos in the brain does render some important aspects of human behavior inherently unpredictable and plausible that chaotic amplification of atomic-level uncertainty is essential for some of the things humans are capable of doing.\nThis has implications for how humans might interact with a superintelligence and for how difficult it might be to build artificial general intelligence.\nIf some aspects of human behavior are inherently unpredictable, that might make it harder for a superintelligence to manipulate us. Manipulation is easier if it is possible to predict how a human will respond to anything you show or say to them. If even a superintelligence cannot predict how a human will respond in some circumstances, then it is harder for the superintelligence to hack the human and gain precise, long-term control over them.\nSo far, I have been considering the possibility that a superintelligence will exist and asking what limitations there are on its abilities.12 But chaos theory might also change our estimates of the difficulty of making artificial general intelligence (AGI) that leads to superintelligence. Chaos in the brain makes whole brain emulation on a classical computer wildly more difficult – or perhaps even impossible.\nWhen making a model of a brain, you want to coarse-grain it at some scale, perhaps at the scale of individual neurons. The coarse-grained model of a neuron should be much simpler than a real neuron, involving only a few variables, while still being good enough to capture the behavior relevant for the larger scale motion. If a neuron is behaving chaotically itself, especially if it is non-stationary or multistable, then no good enough coarse-grained model will exist. The neuron needs to be resolved at a finer scale, perhaps at the scale of proteins. If a protein itself amplifies smaller uncertainties, then you would have to resolve it at a finer scale, which might require a quantum mechanical calculation of atomic behavior. \nWhole brain emulation provides an upper bound on the difficulty of AGI. If this upper bound ends up being farther away than you expected, then that suggests that there should be more probability mass associated with AGI being extremely hard.\nLinks\nI will explore these arguments, and others, in the remainder of this investigation. 
Currently, this investigation consists of one report, two Wiki pages, and three blog posts.\nReport:\n\nChaos and Intrinsic Unpredictability. Background reading for the investigation. An explanation of what chaos is, some other ways something can be intrinsically unpredictable, different varieties of chaos, and how humanity has learned to deal with chaos.\n\nWiki Pages:\n\nChaos in Humans. Some of the most interesting things to try to predict are other humans. I discuss whether humans are chaotic, from the scale of a single neuron to society as a whole.\n\n\nAI Safety Arguments Affected by Chaos. A list of the arguments I have seen within the AI safety community which our understanding of chaos might affect.\n\nBlog Posts:\n\nSuperintelligence Is Not Omniscience. This post.\n\n\nYou Can’t Predict a Game of Pinball. A simple and familiar example which I describe in detail to help build intuition for the rest of the investigation.\n\n\nWhole Bird Emulation Requires Quantum Mechanics. A humorous discussion of one example of a quantum mechanical effect being relevant for an animal’s behavior.\n\nOther Resources\nIf you want to learn more about chaos theory in general, outside of this investigation, here are some sources that I endorse:\n\nUndergraduate Level Textbook:S. Strogatz. Nonlinear Dynamics And Chaos: With Applications To Physics, Biology, Chemistry, and Engineering. (CRC Press, 2000).\n\n\nGraduate Level Textbook:P. Cvitanović, R. Artuso, R. Mainieri, G. Tanner and G. Vattay, Chaos: Classical and Quantum. ChaosBook.org. (Niels Bohr Institute, Copenhagen 2020).\n\n\nWikipedia has a good introductory article on chaos. Scholarpedia also has multiple good articles, although no one obvious place to start.\n\n\nWhat is Chaos? sequence of blog posts by The Chaostician.\n\n\nNotes", "url": "https://aiimpacts.org/superintelligence-is-not-omniscience/", "title": "Superintelligence Is Not Omniscience", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2023-04-07T16:25:58+00:00", "paged_url": "https://aiimpacts.org/feed?paged=1", "authors": ["Jeffrey Heninger"], "id": "c84b6e040d2ca6201a330b8ce5023c9d", "summary": []} {"text": "A policy guaranteed to increase AI timelines\n\nRick Korzekwa, April 1, 2023\nThe number of years until the creation of powerful AI is a major input to our thinking about risk from AI and which approaches are most promising for mitigating that risk. While there are downsides to transformative AI arriving many years from now, rather than few years from now, most people seem to agree that it is safer for AI to arrive in 2060 than in 2030. Given this, there is a lot of discussion about what we can do to increase the number of years until we see such powerful systems. While existing proposals have their merits, none of them can ensure that AI will arrive later than 2030, much less 2060.\nThere is a policy that is guaranteed to increase the number of years between now and the arrival of transformative AI. The General Conference on Weights and Measures defines one second to be 9,192,631,770 cycles of the optical radiation emitted during a hyperfine transition in the ground state of a cesium 133 atom. Redefining the second to instead be 919,263,177 cycles of this radiation will increase the number of years between now and transformative AI by a factor of ten. 
The reason this policy works is the same reason that defining a time standard works–the microscopic behavior of atoms and photons is ultimately governed by the same physical laws as everything else, including computers, AI labs, and financial markets, and those laws are unaffected by our time standards. Thus fewer cycles of cesium radiation per year implies proportionately fewer other things happening per year.\nMaking such a change might not sound politically tractable, but there is already precedent for making radical changes to the definition of a second. Previously it was defined in terms of Earth’s solar orbit, and before that in terms of Earth’s rotation. These physical processes and their implementations as time standards bear little resemblance to the present-day quantum mechanical standard. In contrast, a change that preserves nearly the entire standard, including all significant figures in the relevant numerical definition, is straightforward.\nOne possible objection to this policy is that our time standards are not entirely causally disconnected from the rest of the world. For example, redefining the time standard might create a sense of urgency among AI labs and the people investing in them. It’s not hard to imagine that the leaders and researchers within companies advancing the state of the art in AI might increase their efforts after noticing it is taking ten times as long to generate the same amount of research. While this is a reasonable concern, it seems unlikely that AI labs can increase their rate of progress by a full order of magnitude. Why would they currently be leaving so much on the table if they could? Furthermore, there are similar effects that might push in the other direction. Once politicians and executives realize they will live to be hundreds of years old, they may take risks to the long-term future more seriously.\nStill, it does seem that the policy might have undesirable side effects. Changing all of our textbooks, clocks, software, calendars, and habits is costly. One solution to this challenge is to change the standard either in secret or in a way that allows most people to continue using the old “unofficial” standard. After all, what matters is the actual number of years required to create AI, not the number of years as measured by some deprecated standard.\nIn conclusion, while there are many policies for increasing the number of years before the arrival of advanced artificial intelligence, until now, none of them has guaranteed a large increase in this number. This policy, if implemented promptly and thoughtfully, is essentially guaranteed to cause a large increase in the number of years before we see systems capable of posing a serious risk to humanity.", "url": "https://aiimpacts.org/a-policy-guaranteed-to-increase-ai-timelines/", "title": "A policy guaranteed to increase AI timelines", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2023-04-01T20:41:43+00:00", "paged_url": "https://aiimpacts.org/feed?paged=1", "authors": ["richardkorzekwa"], "id": "a323345784c8f65655edd4519d522d7b", "summary": []} {"text": "You Can’t Predict a Game of Pinball\n\nJeffrey Heninger, 29 March 2023\nIntroduction\nWhen thinking about a new idea, it helps to have a particular example to use to gain intuition and to clarify your thoughts. Games are particularly helpful for this, because they have well defined rules and goals. 
Many of the most impressive abilities of current AI systems can be found in games.1\nTo demonstrate how chaos theory imposes some limits on the skill of an arbitrary intelligence, I will also look at a game: pinball.\nIn this page, I will show that the uncertainty in the location of the pinball grows by a factor of about 5 every time the ball collides with one of the disks. After 12 bounces, an initial uncertainty in position the size of an atom grows to be as large as the disks themselves. Since you cannot launch a pinball with more than atom-scale precision, or even measure its position that precisely, you cannot make the ball bounce between the disks for more than 12 bounces.\nThe challenge is not that we have not figured out the rules that determine the ball’s motion. The rules are simple; the ball’s trajectory is determined by simple geometry. The challenge is that the chaotic motion of the ball amplifies microscopic uncertainties. This is not a problem that is solvable by applying more cognitive effort.\nThe Game\nLet’s consider a physicist’s game of pinball.\nForget about most of the board and focus on the three disks at the top. Each disk is a perfect circle of radius R. The disks are arranged in an equilateral triangle. The minimum distance between any two disks is L. See Figure 1 for a picture of this setup.\nFigure 1: An idealization of the three disks near the top of a pinball table. Drawn by Jeffrey Heninger.\nThe board is frictionless and flat, not sloped like in a real pinball machine. Collisions between the pinball and the disks are perfectly elastic, with no pop bumpers that come out of the disk and hit the ball. The pinball moves at a constant speed all of the time and only changes direction when it collides with a disk.\nThe goal of the game is to get the pinball to bounce between the disks for as long as possible. As long as it is between the disks, it will not be able to get past your flippers and leave the board.\nA real game of pinball is more complicated than this – and correspondingly, harder to predict. If we can establish that the physicist’s game of pinball is impossible to predict, then a real game of pinball will be impossible to predict too. \nCollisions\nWhen the ball approaches a disk, it will not typically be aimed directly at the center of the disk. How far off center it is can be described by the impact parameter, b, which is the distance between the trajectory of the ball and a parallel line which passes through the center of the disk. Figure 2 shows the trajectory of the ball as it collides with a disk.\nThe surface of the disk is at an angle relative to the ball’s trajectory. Call this angle θ. This is also the angle of the position of the collision on the disk relative to the line through the center parallel to the ball’s initial trajectory. This can be seen in Figure 2 because they are corresponding angles on a transversal.\nFigure 2: A single bounce of the pinball off of one of the disks. Drawn by Jeffrey Heninger.\nAt the collision, the angle of incidence equals the angle of reflection. The total change in the direction of the ball’s motion is 2θ. \nWe cannot aim the ball with perfect precision. We can calculate the effect of this imperfect precision by following two slightly different trajectories through the collision instead of one.\nThe two trajectories have slightly different initial locations. The second trajectory has impact parameter b + db, with db ≪ b. Call db the uncertainty in the impact parameter. 
We assume that the two trajectories have exactly the same initial velocity. If we were to also include uncertainty in the velocity, it would further decrease our ability to predict the motion of the pinball. A diagram of the two trajectories near a collision is shown in Figure 3.\nFigure 3: Two nearby possible trajectories of a pinball bounce off of one of the disks. Drawn by Jeffrey Heninger.\nThe impact parameter, radius of the disk, and angle are trigonometrically related: b = R sinθ (see Figure 2). We can use this to determine the relationship between the uncertainty in the impact parameter and the uncertainty in the angle: db = R cosθ dθ.\nAfter the collision, the two trajectories will no longer be parallel. The angle between them is now 2 dθ. The two trajectories will separate as they travel away from the disk. They will have to travel a distance of at least L before colliding with the next disk. The distance between the two trajectories will then be at least L 2 dθ.\nIteration\nWe can now iterate this. How does the uncertainty in the impact parameter change as the pinball bounces around between the disks?\nStart with an initial uncertainty in the impact parameter, db₀. After one collision, the two trajectories will be farther apart. We can use the new distance between them as the uncertainty in the impact parameter for the second collision.2 The new uncertainty in the impact parameter is related to the old one according to:\ndb_1 \\geq L 2 d\\theta = 2 L \\frac{db_0}{R \\cos\\theta} \\geq \\frac{2L}{R} db_0 \\,.\nWe also used 1 / cos θ > 1 for –π/2 < θ < π/2, which are the angles that could be involved in a collision. The ball will not pass through the disk and collide with the interior of the far side.\nRepeat this calculation to see that after two collisions: \ndb_2 \\geq \\frac{2L}{R} db_1 \\geq \\left(\\frac{2L}{R}\\right)^2 db_0 \\,,\nand after N collisions, \ndb_N \\geq \\left(\\frac{2L}{R}\\right)^N db_0 \\, .\nPlugging in realistic numbers, R = 2 cm and L = 5 cm, we see that \ndb_N \\geq 5^N \\, db_0 \\,.\nThe uncertainty grows exponentially with the number of collisions.\nSuppose we had started with an initial uncertainty about the size of the diameter of an atom, or 10⁻¹⁰ m. After 12 collisions, the uncertainty would grow by a factor of 5¹², to 2.4 cm. The uncertainty is larger than the radius of the disk, so if one of the trajectories struck the center of the disk, the other trajectory would miss the disk entirely.\nThe exponential growth amplifies atomic-scale uncertainty to become macroscopically relevant in a surprisingly short amount of time. If you wanted to predict the path the pinball would follow, having an uncertainty of 2 cm would be unacceptable.\nIn practice, there are many uncertainties that are much larger than 10⁻¹⁰ m, in the production of the pinball, disks, & board and in the mechanics of the launch system. If you managed to solve all of these engineering challenges, you would eventually run into the fundamental limit imposed by quantum mechanics. \nIt is in principle impossible to prepare the initial location of the pinball with a precision of less than the diameter of an atom. Heisenberg’s Uncertainty Principle is relevant at these scales. 
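As a quick check on the arithmetic above, here is a minimal sketch in Python (added for illustration, not part of the original post). It simply iterates the bound db_N ≥ (2L/R)^N db₀ with the same numbers used in the text: R = 2 cm, L = 5 cm, and an atomic-scale initial uncertainty of 10⁻¹⁰ m.

# Numerical check of the bound derived above: each collision multiplies the
# uncertainty in the impact parameter by at least 2L/R = 5.
R = 0.02      # disk radius, in meters (2 cm)
L = 0.05      # minimum distance between disks, in meters (5 cm)
db = 1e-10    # initial uncertainty, roughly the diameter of an atom

for bounce in range(1, 20):
    db *= 2 * L / R
    if db > R:
        print(f"After {bounce} bounces the uncertainty ({100 * db:.1f} cm) exceeds the disk radius")
        break

The loop stops at 12 bounces with an uncertainty of about 2.4 cm, matching the estimate above.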
If you tried to prepare the initial position of a pinball with more precision than an atomic diameter, it would cause the uncertainty in the initial velocity to increase, which would again make the motion unpredictable after a similar number of collisions.\nWhen I mention Heisenberg’s Uncertainty Principle, I expect that there will be some people who want to see the quantum version of the argument. I do not think that it is essential to this investigation, but if you are interested, you can find a discussion of Quantum Pinball in the appendix.\nPredictions\nYou cannot prepare the initial position of the pinball with better than atomic precision, and atomic precision only allows you to predict the motion of the pinball between the disks with centimeter precision for less than 12 bounces. It is impossible to predict a game of pinball for more than 12 bounces in the future. This is true for an arbitrary intelligence, with an arbitrarily precise simulation of the pinball machine, and arbitrarily good manufacturing & launch systems.\nThis behavior is not unique to pinball. It is a common feature of chaotic systems.\nIf you had infinite precision, you could exactly predict the future. The equations describing the motion are deterministic. In this example, following the trajectory is a simple geometry problem, solvable using a straightedge and compass. \nBut we never have infinite precision. Every measuring device only provides a finite number of accurate digits. Every theory has only been tested within a certain regime, and we do not have good reason to expect it will work outside of the regime it has been tested in.\nChaos quickly amplifies whatever uncertainty and randomness exists at microscopic scales to the macroscopic scales we care about. The microscopic world is full of thermal noise and quantum effects, making macroscopic chaotic motion impossible to predict as well.\nConclusion\nIt is in principle impossible to predict the motion of a pinball as it moves between the top three disks for more than 12 bounces. A superintelligence might be better than us at making predictions after 8 bounces, if it can design higher resolution cameras or more precise ball and board machining. But it too will run into the low prediction ceiling I have shown here.\nPerhaps you think that this argument proves too much. Pinball is not completely a game of chance. How do some people get much better at pinball than others?\nIf you watch a few games of professional pinball, the answer becomes clear. The strategy typically is to catch the ball with the flippers, then to carefully hit the ball so that it takes a particular ramp which scores a lot of points and then returns the ball to the flippers. Professional pinball players try to avoid the parts of the board where the motion is chaotic. This is a good strategy because, if you cannot predict the motion of the ball, you cannot guarantee that it will not fall directly between the flippers where you cannot save it. Instead, professional pinball players score points mostly from the non-chaotic regions where it is possible to predict the motion of the pinball.3\nPinball is typical for a chaotic system. The sensitive dependence on initial conditions renders long-term predictions impossible. If you cannot predict what will happen, you cannot plan a strategy that allows you to perform consistently well. There is a ceiling on your abilities because of the interactions with the chaotic system. 
In order to improve your performance you often try to avoid the chaos and focus on developing your skill in places where the world is more predictable.\n\nAppendix: Quantum Pinball\nIf quantum uncertainty actually is important to pinball, maybe we should be solving the problem using quantum mechanics. This is significantly more complicated, so I will not work out the calculation in detail. I will explain why this does not give you a better prediction for where the pinball will be in the future.\nModel the disks as infinite potential walls, so the wave function reflects off of them and does not tunnel through them. If the pinball does have a chance of tunneling through the disks, that would mean that there are even more places the quantum pinball could be.\nStart with a wave function with minimum uncertainty: a wave packet with Δx Δp =ℏ/2 in each direction. It could be a Gaussian wave packet, or it could be a wavelet. This wave packet is centered around some position and velocity.\nAs long as the wave function is still a localized wave packet, the center of the wave packet follows the classical trajectory. This can be seen either by looking at the time evolution of the average position and momentum, or by considering the semiclassical limit. In order for classical mechanics to be a good approximation to quantum mechanics at macroscopic scales, a zoomed out view of the wave packet has to follow the classical trajectory.\nWhat happens to the width of the wave packet? Just like how the collisions in the classical problem caused nearby trajectories to separate, the reflection off the disk causes the wave packet to spread out. This can be most easily seen using the ray tracing method to solve Schrödinger’s equation in the WKB approximation.4 This method converts the PDE for the wave function into a collection of (infinitely many) ODEs for the path each ‘ray’ follows and for the value of the wavefunction along each ray. The paths the rays follow reflect like classical particles, which means that the region where the wave function is nonzero spreads out in the same way as the classical uncertainty would.\nThis is a common result in quantum chaos. If you start with a minimum uncertainty wave packet, the center of the wave packet will follow the classical trajectory and the width of the wave packet will grow with the classical Lyapunov exponent.5\nAfter 12 collisions, the width of the wave packet would be several centimeters. After another collision or two, the wave function is no longer a wave packet with a well defined center and width. Instead, it has spread out so it has a nontrivial amplitude across the entire pinball machine. There might be some interesting interference patterns or quantum scarring,6 but the wave function will not be localized to any particular place. Since the magnitude of the wave function squared tells you the probability of finding the pinball in that location, this tells us that there is a chance of finding the pinball almost anywhere.\nA quantum mechanical model of the motion of the pinball will not tell you the location of the pinball after many bounces. The result of the quantum mechanical model is a macroscopic wavefunction, with nontrivial probability of being at almost any location across the pinball machine. We do not observe a macroscopic wavefunction. Instead, we observe the pinball at a particular location. Which of these locations you will actually observe the pinball in is determined by wavefunction collapse. 
Alternatively, I could say that there are Everett branches with the pinball at almost any location on the board.\nSolving wave function collapse, or determining which Everett branch you should expect to find yourself on, is an unsolved and probably unsolvable problem – even for a superintelligence.\n\nNotes", "url": "https://aiimpacts.org/you-cant-predict-a-game-of-pinball/", "title": "You Can’t Predict a Game of Pinball", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2023-03-30T00:39:34+00:00", "paged_url": "https://aiimpacts.org/feed?paged=1", "authors": ["Jeffrey Heninger"], "id": "db6f4906d5d9fc4313ed1d0a1251d19a", "summary": []} {"text": "How bad a future do ML researchers expect?\n\nKatja Grace, 8 March 2023\nIn our survey last year, we asked publishing machine learning researchers how they would divide probability over the future impacts of high-level machine intelligence between five buckets ranging from ‘extremely good (e.g. rapid growth in human flourishing)’ to ‘extremely bad (e.g. human extinction).1 The median respondent put 5% on the worst bucket. But what does the whole distribution look like? Here is every person’s answer, lined up in order of probability on that worst bucket:\n(Column widths may be distorted or columns may be missing due to limitation of chosen software.)\nAnd here’s basically that again from the 2016 survey (though it looks like sorted slightly differently when optimism was equal), so you can see how things have changed:\nDistribution from 2016 survey. (Column widths may be distorted or columns may be missing due to limitation of chosen software.)\nThe most notable change to me is the new big black bar of doom at the end: people who think extremely bad outcomes are at least 50% have gone from 3% of the population to 9% in six years.\nHere are the overall areas dedicated to different scenarios in the 2022 graph (equivalent to averages):\n\nExtremely good: 24%\nOn balance good: 26%\nMore or less neutral: 18%\nOn balance bad: 17%\nExtremely bad: 14%\n\nThat is, between them, these researchers put 31% of their credence on AI making the world markedly worse. \nSome things to keep in mind in looking at these:\n\nIf you hear ‘median 5%’ thrown around, that refers to how the researcher right in the middle of the opinion spectrum thinks there’s a 5% chance of extremely bad outcomes. (It does not mean, ‘about 5% of people expect extremely bad outcomes’, which would be much less alarming.) Nearly half of people are at ten percent or more.\nThe question illustrated above doesn’t ask about human extinction specifically, so you might wonder if ‘extremely bad’ includes a lot of scenarios less bad than human extinction. To check, we added two more questions in 2022 explicitly about ‘human extinction or similarly permanent and severe disempowerment of the human species’. For these, the median researcher also gave 5% and 10% answers. So my guess is that a lot of the extremely bad bucket in this question is pointing at human extinction levels of disaster. \nYou might wonder whether the respondents were selected for being worried about AI risk. We tried to mitigate that possibility by usually offering money for completing the survey ($50 for those in the final round, after some experimentation), and describing the topic in very broad terms in the invitation (e.g. not mentioning AI risk). 
Last survey we checked in more detail—see ‘Was our sample representative?’ in the paper on the 2016 survey.\n\nHere’s the 2022 data again, but ordered by overall optimism-to-pessimism rather than probability of extremely bad outcomes specifically:\n(Column widths may be distorted or columns may be missing due to limitation of chosen software.)\nFor more survey takeaways, see this blog post. For all the data we have put up on it so far, see this page.\nSee here for more details. \nThanks to Harlan Stewart for helping make these 2022 figures, Zach Stein-Perlman for generally getting this data in order, and Nathan Young for pointing out that figures like this would be good.\nNotes", "url": "https://aiimpacts.org/how-bad-a-future-do-ml-researchers-expect/", "title": "How bad a future do ML researchers expect?", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2023-03-09T04:49:24+00:00", "paged_url": "https://aiimpacts.org/feed?paged=1", "authors": ["Katja Grace"], "id": "decd72d7cafb816bc2a2c0965547b98c", "summary": []} {"text": "How popular is ChatGPT? Part 2: slower growth than Pokémon GO\n\nRick Korzekwa, March 3, 2023\nA major theme in reporting on ChatGPT is the rapid growth of its user base. A commonly stated claim is that it broke records, with over 1 million users in less than a week and 100 million users in less than two months. It seems not to have broken the record, though I do think ChatGPT’s growth is an outlier.\nChecking the claims\nChatGPT growth\nFrom what I can tell, the only source for the claim that ChatGPT had 1 million users in less than a week comes from this tweet by Sam Altman, the CEO of OpenAI:\n\nI don’t see any reason to strongly doubt this is accurate, but keep in mind it is an imprecise statement from a single person with an incentive to promote a product, so it could be wrong or misleading.\nThe claim that it reached 100 million users within two months has been reported by many news outlets, which all seem to bottom out in data from Similarweb. I was not able to find a detailed report, but it looks like they have more data behind a paywall. I think it’s reasonable to accept this claim for now, but, again, it might be different in some way from what the media is reporting1.\nSetting records and growth of other apps\nClaims of record setting\nI saw people sharing graphs that showed the number of users over time for various apps and services. Here is a rather hyperbolic example:\n\nThat’s an impressive curve and it reflects a notable event. But it’s missing some important data and context.\nThe claim that this set a record seems to originate from a comment by an analyst at investment bank UBS, who said “We cannot remember an app scaling at this pace”, which strikes me as a reasonable, hedged thing to say. The stronger claim that it set an outright record seems to be misreporting.\nData on other apps\nI found data on monthly users for all of these apps except Spotify2. I also searched lists of very popular apps for good leads on something with faster user growth. You can see the full set of data, with sources, here.3 I give more details on the data and my methods in the appendix.\nFrom what I can tell, that graph is reasonably accurate, but it’s missing Pokémon GO, which was substantially faster. It’s also missing the Android release of Instagram, which is arguably a new app release, and surpassed 1M within the first day. 
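Some of the days-to-milestone figures summarized below are exponentially interpolated from sparse user-count datapoints (see the appendix). As a minimal sketch of what that interpolation amounts to, with made-up numbers rather than the actual data:

```python
import math

def days_to_milestone(day_a, users_a, day_b, users_b, milestone):
    """Assume exponential growth between two (day, user count) datapoints
    and return the estimated day on which `milestone` users was reached."""
    growth_rate = (math.log(users_b) - math.log(users_a)) / (day_b - day_a)
    return day_a + (math.log(milestone) - math.log(users_a)) / growth_rate

# Made-up example: 2M users on day 100 and 40M users on day 400
print(days_to_milestone(100, 2e6, 400, 40e6, 10e6))  # ~261 days to 10M
```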
Here’s a table summarizing the numbers I was able to find, listed in chronological order:\nService | Date launched | Days to 1M | Days to 10M | Days to 100M\nNetflix subscribers (all) | 1997-08-29 | 3669 | 4185 | 7337\nFacebook | 2004-02-04 | 331 | 950 | 1608\nTwitter | 2006-07-15 | 670 | 955 | 1903\nNetflix subscribers (streaming) | 2007-01-15 | 1889 | 2351 | 3910\nInstagram (all) | 2010-10-06 | 61 | 362 | 854\nInstagram (Android) | 2012-04-03 | 1 | - | -\nPokemon Go (downloads) | 2016-07-05 | - | 7 | 27\nChatGPT | 2022-11-30 | 4 | - | 61\nNumber of days to reach 1 million, 10 million, and 100 million users, for several apps. Some of the figures are exponentially interpolated, due to a lack of datapoints at the desired values.\nIt’s a little hard to compare early numbers for ChatGPT and Pokémon GO, since I couldn’t find the days to 1M for Pokémon GO or the days to 10M for ChatGPT, but it seems unlikely that ChatGPT was faster for either.\nAnalysis\nScaling by population of Internet users\nThe total number of people with access to the Internet has been growing rapidly over the last few decades. Additionally, the growth of social networking sites makes it easier for people to share apps with each other. Both of these should make it easier for an app to spread. With that in mind, here’s a graph showing the fraction of all Internet users who are using each app over time (note the logarithmic vertical axis):\nNumber of monthly users over time for several applications. The vertical axis is on a log scale.\nIn general, it looks like these curves have initial slopes that are increasing with time, suggesting that how quickly an app can spread is influenced by more than just an increase in the number of people with access to the Internet. But Pokémon GO and ChatGPT just look like vertical lines of different heights, so here’s another graph, showing the (logarithmic) time since launch for each app:\nFraction of total global population with access to the Internet who are using the service vs days since the service launched. The number of users is set somewhat arbitrarily to 1 at t=1 minute\nThis shows pretty clearly that, while ChatGPT is an outlier, it was nonetheless substantially slower than Pokémon GO4.\nAdditional comparisons\nOne more comparison we can make is to other products and services that have a very fast uptake with users and how their reach increases over time:\n\nYouTube views within 24 hours for newly posted videos gives us a reference point for how quickly a link to something on the Internet can spread and get engagement. The lower barrier to watching a video, compared to making an account for ChatGPT, might give videos an advantage. Additionally, there is presumably more than one view per person. I do not know how big this effect is, but it may be large.\nPay-per-view sales for live events, in this case for combat sports, are a reference point for something that people are willing to pay for to use at home in a short timeframe. The payment is a higher barrier than making an account, but marketing and sales can happen ahead of time.\nVideo game sales within 24 hours, in some cases digital downloads, are similar to pay-per-view, but seem more directly comparable to a service on a website. I would guess that video games benefit from a longer period of marketing and pre-sales than PPV, but I’m not sure.\n\nHere is a graph of records for these things over time, with data taken from Wikipedia5, which is included in the data spreadsheet.
Each dot is a separate video, PPV event, or game, and I’m only including those that set 24 hour records:\nRecords for most sales, views, and users within the first 24 hours for video games, PPV bouts, YouTube videos, and apps, plus a few points for users during first week for apps (shown as blue diamonds). Each data point represents one event, game, video, or app. Only those setting records in their particular category are included.\n\n\n\nIt would appear that very popular apps are not as popular as very popular video games or videos. I don’t see a strong conclusion to be drawn from this, but I do think it is helpful context.\nAdditional considerations\nI suspect the marketing advantage for Pokémon GO and other videogames is substantial. I do not remember seeing ads for Pokémon GO before its release, but I did a brief search for news articles about it before it was released and found lots of hype going back months. I did not find any news articles mentioning ChatGPT before launch. This does not change the overall conclusion, that the claim about ChatGPT setting an outright record is false, but it should change how we think about it. \nThat ChatGPT was able to beat out most other services without any marketing seems like a big deal. I think it’s hard to sell people on what’s cool about it without lots of user engagement, but the next generation of AI products might not need that, now that people are aware of how far the technology has come. Given this (and the hype around Bing Chat and Bard), I would weakly predict that marketing will play a larger role in future releases.\nAppendix – methods and caveats\nMost of the numbers I found were for monthly users or, in some cases, monthly active users. I wasn’t always sure what the difference was between these two things. In some cases, all I was able to find was monthly app downloads or annual downloads, both of which I would naively expect to be strictly larger than monthly users. But the annual user numbers reflected longer-term growth anyway, so they shouldn’t affect the conclusions.\nSome of the numbers for days to particular user milestones were interpolated, assuming exponential growth. By and large, I do not think this affects the overall story too much, but if you need to know precise numbers, you should check my interpolations or find more direct measurements. None of the numbers is extrapolated.\nWhen searching for data, I tried to use either official sources like SEC filings and company announcements, or measurements from third-party services that seem reputable and have paying customers. But sometimes those were hard to find and I had to use less reliable sources like news reports with dubious citations or studies with incomplete data.\nI did not approach this with the intent to produce very reliable data in a very careful way. Overall, this took about 1-2 researcher-days of effort. Given this, it seems likely I made some mistakes, but hopefully not any that undermine the conclusions.\nThanks to Jeffrey Heninger and Harlan Stewart for their help with research on this. Thanks to the two of them and Daniel Filan for helpful comments.", "url": "https://aiimpacts.org/how-popular-is-chatgpt-part-2-slower-growth-than-pokemon-go/", "title": "How popular is ChatGPT? 
Part 2: slower growth than Pokémon GO", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2023-03-03T23:36:46+00:00", "paged_url": "https://aiimpacts.org/feed?paged=1", "authors": ["richardkorzekwa"], "id": "43e420b83c9efba4e95c72537cabb56a", "summary": []} {"text": "Scoring forecasts from the 2016 “Expert Survey on Progress in AI”\n\nPatrick Levermore, 1 March 2023\nSummary\nThis document looks at the predictions made by AI experts in The 2016 Expert Survey on Progress in AI, analyses the predictions on ‘Narrow tasks’, and gives a Brier score to the median of the experts’ predictions. \nMy analysis suggests that the experts did a fairly good job of forecasting (Brier score = 0.21), and would have been less accurate if they had predicted each development in AI to generally come, by a factor of 1.5, later (Brier score = 0.26) or sooner (Brier score = 0.29) than they actually predicted.\nI judge that the experts expected 9 milestones to have happened by now – and that 10 milestones have now happened.\nBut there are important caveats to this, such as:\n\nI have only analysed whether milestones have been publicly met. AI labs may have achieved more milestones in private this year without disclosing them. This means my analysis of how many milestones have been met is probably conservative.\nI have taken the point probabilities given, rather than estimating probability distributions for each milestone, meaning I often round down, which skews the expert forecasts towards being more conservative and unfairly penalises their forecasts for low precision.\nIt’s not apparent that forecasting accuracy on these nearer-term questions is very predictive of forecasting accuracy on the longer-term questions.\nMy judgements regarding which forecasting questions have resolved positively vs negatively were somewhat subjective (justifications for each question in the separate appendix).\n\nIntroduction\nIn 2016, AI Impacts published The Expert Survey on Progress in AI: a survey of machine learning researchers, asking for their predictions about when various AI developments will occur. The results have been used to inform general and expert opinions on AI timelines.\nThe survey largely focused on timelines for general/human-level artificial intelligence (median forecast of 2056). However, included in this survey were a collection of questions about shorter-term milestones in AI. Some of these forecasts are now resolvable. Measuring how accurate these shorter-term forecasts have been is probably somewhat informative of how accurate the longer-term forecasts are. More broadly, the accuracy of these shorter-term forecasts seems somewhat informative of how accurate ML researchers’ views are in general. So, how have the experts done so far? \nFindings\nI analysed the 32 ‘Narrow tasks’ to which the following question was asked:\n\nHow many years until you think the following AI tasks will be feasible with:\n\na small chance (10%)?\nan even chance (50%)?\na high chance (90%)?\n\nLet a task be ‘feasible’ if one of the best resourced labs could implement it in less than a year if they chose to. Ignore the question of whether they would choose to.1\n\nI interpret ‘feasible’ as whether, in ‘less than a year’ before now, any AI models had passed these milestones, and this was disclosed publicly. 
Since it is now (February 2023) 6.5 years since this survey, I am therefore looking at any forecasts for events happening within 5.5 years of the survey.\nAcross these milestones, I judge that 10 have now happened and 22 have not happened. My 90% confidence interval is that 7-15 of them have now happened. A full description of milestones, and justification of my judgments, are in the appendix (separate doc).\nThe experts forecast that:\n\n4 milestones had a <10% chance of happening by now, \n20 had a 10-49% chance,\n7 had a 50-89% chance, \n1 had a >90% chance. \n\nSo they expected 6-17 of these milestones to have happened by now. By eyeballing the forecasts for each milestone, my estimate is that they expected ~9 to have happened.2 I did not estimate the implied probability distributions for each milestone, which would make this more accurate.\nUsing the 10, 50, and 90% point probabilities, we get the following calibration curve:\n\nBut, firstly, the data here is small (there are 7 data points at the 50% mark and 1 at the 90% mark). Secondly, my methodology for this graph, and in the below Brier calculations, is based on rounding down to the nearest given forecast. For example, if a 10% chance was given at 3 years, and a 50% chance at 10 years, the forecast was taken to be 10%, rather than estimating a full probability distribution and finding the 5.5 years point. This skews the expert forecasts towards being more conservative and unfairly penalises a lack of precision. \nBrier scores\nOverall, across every forecast made, the experts come out with a Brier score of 0.21.3 The score breakdown and explanation of the method is here.4\nFor reference, a lower Brier score is better. 0 would mean absolute confidence in everything that eventually happened, 0.25 would mean a series of 50% hedged guesses on anything happening, and randomly guessing from 0% to 100% for every question would yield a Brier score of 0.33.5\nAlso interesting is the Brier score relative to others who forecast the same events. We don’t have that when looking at the median of our experts – but we could simulate a few other versions:\nBearish6 – if the experts all thought each milestone would take 1.5 times longer than they actually thought, they would’ve gotten a Brier score of 0.27.\nSlightly Bearish – if the experts all thought each milestone would take 1.2 times longer than they actually thought, they would’ve gotten a Brier score of 0.25.\nActual forecasts – a Brier score of 0.21.\nSlightly Bullish – if the experts all thought each milestone would take 1.2 times less than they actually thought, they would’ve gotten a Brier score of 0.24.\nBullish – if the experts all thought each milestone would take 1.5 times less than they actually thought, they would’ve gotten a Brier score of 0.29.\n\nSo, the experts were in general pretty accurate and would have been less so if they had been more or less bullish on the speed of AI development (with the same relative expectations between each milestone).
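For readers who want the mechanics, here is a minimal sketch of how a Brier score is computed over a set of probability forecasts. The probabilities and outcomes below are illustrative, not the survey’s actual forecast set, and the real calculation also applies the rounding-down convention described above.

```python
# Brier score: mean squared difference between the forecast probability and
# the outcome (1 if the milestone happened, 0 if it did not).
# Illustrative (probability, outcome) pairs, not the actual survey forecasts.
forecasts = [(0.10, False), (0.50, True), (0.90, True), (0.10, False), (0.50, False)]

brier = sum((p - (1.0 if happened else 0.0)) ** 2
            for p, happened in forecasts) / len(forecasts)
print(round(brier, 2))  # lower is better; 0.25 corresponds to always guessing 50%
```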
\nTaken together, I think this should slightly update us towards the expert forecasts being useful in as yet unresolved cases, and away from the usefulness of estimates which fall outside of 1.5 times further or closer than the expert forecasts.\nRandomised – if the experts’ forecast for each specific milestone were randomly assigned to any forecasted date for a different milestone in the collection, they would’ve gotten a Brier score of 0.31 (in the random assignment I received from a random number generator).\nI think this should update us slightly towards the surveyed experts generally being accurate on which areas of AI would progress fastest. My assessment is that, compared to the experts’ predictions, AI has progressed more quickly in text generation and coding and more slowly in game playing and robotics. It is not clear now whether this trend will continue, or whether other areas in AI will unexpectedly progress more quickly in the next 5 year period.\nSummary of milestones and forecasts\nIn the below table, the numbers in the cells are the median expert response to “Years after the (2016) survey for which there is a 10, 50 and 90% probability of the milestone being feasible”. The final column is my judgement of whether the milestone was in fact feasible after 5.5 years. Orange shading shows forecasts falling within the 5.5 years between the survey and today. A full description of milestones, and justification of my judgments, are in the appendix.\nMilestone / Confidence of AI reaching the milestone within X years | 10 percent | 50 percent | 90 percent | True by Feb 2023? (5.5 + 1 years)\nTranslate a new-to-humanity language | 10 | 20 | 50 | FALSE\nTranslate a new-to-it language | 5 | 10 | 15 | FALSE\nTranslate as well as bilingual humans | 3 | 7 | 15 | FALSE\nPhone bank as well as humans | 3 | 6 | 10 | FALSE\nCorrectly group unseen objects | 2 | 4.5 | 6.5 | TRUE\nOne-shot image labeling | 4.5 | 8 | 20 | FALSE\nGenerate video from a photograph | 5 | 10 | 20 | TRUE\nTranscribe as well as humans | 5 | 10 | 20 | TRUE\nRead aloud better than humans | 5 | 10 | 15 | FALSE\nProve and generate top theorems | 10 | 50 | 90 | FALSE\nWin Putnam competition | 15 | 35 | 55 | FALSE\nWin Go with less gametime | 3.5 | 8.5 | 19.5 | FALSE\nWin Starcraft | 2 | 5 | 10 | FALSE\nWin any random computer game | 5 | 10 | 15 | FALSE\nWin angry birds | 2 | 4 | 6 | FALSE\nBeat professionals at all Atari games | 5 | 10 | 15 | FALSE\nWin Atari with 20 minutes training | 2 | 5 | 10 | FALSE\nFold laundry as well as humans | 2 | 5.5 | 10 | FALSE\nBeat a human in a 5km race | 5 | 10 | 20 | FALSE\nAssemble any LEGO | 5 | 10 | 15 | FALSE\nEfficiently sort very large lists | 3 | 5 | 10 | TRUE\nWrite good Python code | 3 | 10 | 20 | TRUE\nAnswers factoids better than experts | 3 | 5 | 10 | TRUE\nAnswer open-ended questions well | 5 | 10 | 15 | TRUE\nAnswer unanswered questions well | 4 | 10 | 17.5 | TRUE\nHigh marks for a high school essay | 2 | 7 | 15 | FALSE\nCreate a top forty song | 5 | 10 | 20 | FALSE\nProduce a Taylor Swift song | 5 | 10 | 20 | FALSE\nWrite a NYT bestseller | 10 | 30 | 50 | FALSE\nConcisely explain its game play | 5 | 10 | 15 | TRUE\nWin World Series of Poker | 1 | 3 | 5.5 | TRUE\nOutput laws of physics of virtual world | 5 | 10 | 20 | FALSE\nCaveats:\nMy judgements of which forecasts have turned out true or false are a little subjective. This was made harder by the survey question asking which tasks were ‘feasible’, where feasible meant ‘if one of the best resourced labs could implement it in less than a year if they chose to. Ignore the question of whether they would choose to.’ I have interpreted this as, one year after the forecasted date, have AI labs achieved these milestones, and disclosed this publicly?
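For concreteness, here is a minimal sketch of how each row of the table above can be reduced to a single forecast at the 5.5-year evaluation point, following the rounding-down convention described earlier. The rows are taken from the table; the function itself is my illustration, not the author’s spreadsheet.

```python
def forecast_at(years_elapsed, y10, y50, y90):
    """Return the largest quoted probability whose forecast year has passed,
    rounding up to the 10% mark if even the 10% year is still in the future
    (the rounding-down convention described in the post)."""
    if y90 <= years_elapsed:
        return 0.90
    if y50 <= years_elapsed:
        return 0.50
    return 0.10

# A few rows from the table above: (milestone, 10%-, 50%-, 90%-year, happened?)
rows = [
    ("Correctly group unseen objects", 2, 4.5, 6.5, True),
    ("Write good Python code", 3, 10, 20, True),
    ("Win angry birds", 2, 4, 6, False),
]
for name, y10, y50, y90, happened in rows:
    print(name, forecast_at(5.5, y10, y50, y90), happened)
```

Applying this to all 32 rows reproduces the bucket tallies reported earlier: 4 milestones below the 10% mark, 20 in the 10-49% bucket, 7 in the 50-89% bucket, and 1 above 90%.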
\nGiven (a) ‘has happened’ implies ‘feasible’, but ‘feasible’ does not imply ‘has happened’ and (b) labs may have achieved some of these milestones but not disclosed it, I am probably being conservative in the overall number of tasks which have been completed by labs. I have not attempted to offset this conservatism by using my judgement of what labs can probably achieve in private. If you disagree or have insider knowledge of capabilities, you may be interested in editing my working here. Please reach out if you want an explanation of the method, or to privately share updates – patrick at rethinkpriorities dot org.\nIt’s not obvious that forecasting accuracy on these nearer-term questions is very predictive of forecasting accuracy on the longer-term questions. Dillon (2021) notes “There is some evidence that forecasting skill generalises across topics (see Superforecasting, Tetlock, 2015 and for a brief overview see here) and this might inform a prior that good forecasters in the short term will also be good over the long term, but there may be specific adjustments which are worth emphasising when forecasting in different temporal domains.” I have not found any evidence either way on whether good forecasters in the short term will also be good over the long term, but this does seem possible to analyse from the data that Dillon and niplav collect.8\nFinally, there are caveats in the original survey worth noting here, too. For example, how the question is framed makes a difference to forecasts, even when the meaning is the same. To illustrate this, the authors note \n\n“People consistently give later forecasts if you ask them for the probability in N years instead of the year that the probability is M. We saw this in the straightforward HLMI (high-level machine intelligence) question and most of the tasks and occupations, and also in most of these things when we tested them on mturk people earlier. For HLMI for instance, if you ask when there will be a 50% chance of HLMI you get a median answer of 40 years, yet if you ask what the probability of HLMI is in 40 years, you get a median answer of 30%.” \n\nThis is commonly true of the ‘Narrow tasks’ forecasts (although I disagree with the authors that it is consistently so).9 For example, when asked when there is a 50% chance AI can write a top forty hit, respondents gave a median of 10 years. Yet when asked about the probability of this milestone being reached in 10 years, respondents gave a median of 27.5%. \nWhat does this all mean for us?\nMaybe not a huge amount at this point. It is probably a little too early to get a good picture of the experts’ accuracy, and there are a few important caveats. But this should update you slightly towards the experts’ timelines if you were sceptical of their forecasts. Within another five years, we will have ~twice the data and a good sense of how the experts performed across their 50% estimates.\nIt is also limiting to have only one comprehensive survey of AI experts which includes both long-term and shorter-term timelines. What would be excellent for assessing accuracy is detailed forecasts from various different groups, including political pundits, technical experts, and professional forecasters, with which we can compare accuracy between groups. It would be easier to analyse the forecasting accuracy of the questions focused on what developments have happened, rather than what developments are feasible. 
We could try closer to home, maybe the average EA would be better at forecasting developments than the average AI expert – it seems worth testing now to give us some more data in ten years!\n\nThis is a blog post, not a research report, meaning it was produced quickly and is not to our typical standards of substantiveness and careful checking for accuracy. I’m grateful to Alex Lintz, Amanda El-Dakhakhni, Ben Cottier, Charlie Harrison, Oliver Guest, Michael Aird, Rick Korzekwa, and Zach Stein-Perlman for comments on an earlier draft.\nIf you are interested in RP’s work, please visit our research database and subscribe to our newsletter. \nCross-posted to EA Forum, Lesswrong, and this google doc.\nFootnotes\n\nI only analysed this ‘fixed probabilities’ question and not the alternative ‘fixed years’ question, which asked:“How likely do you think it is that the following AI tasks will be feasible within the next:– 10 years?– 20 years?– 50 years?”We are not yet at any of these dates, so the analysis would be much more unclear.\n9 =  4*5% + 14*15% + 6*30% + 5*55% + 2*80% + 1*90%\n A precise number as a Brier score does not imply an accurate assessment of forecasting ability – ideally, we could work with a larger dataset (i.e. more surveys, with more questions) to get more accuracy.\nMy methodology for the Brier score calculations is based on rounding down to the nearest given forecast, or rounding up to the 10% mark. For example, if a 10% chance was given at 3 years, and a 50% chance at 10 years, the forecast was taken to be 10%, rather than estimating a full probability distribution and finding the 5.5 years point. This skews the expert forecasts towards being more conservative and unfairly penalises them. If the experts gave a 10% chance of X happening in 3 years, I didn’t check whether it had happened in 3 years, but instead checked if it had happened by now. I estimate these two factors (the first skewing the forecasts to be more begives a roughly balance 5-10% increase to the Brier score, given most milestones included a probability at the 5 year mark. A better analysis would estimate the probability distributions implied by each 10, 50, 90% point probability, then assess the probability implied at 5.5 years. \nFor more detail, see Brier score – Wikipedia.\nBy ‘bearish’ and ‘bullish’ I mean expecting AI milestones to be met later or sooner, respectively.\nThe score breakdown and method for these calculations is also here.\nThis seems valuable, and I’m not sure why it hasn’t been analysed yet.Somewhat relevant sources:https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizonshttps://www.lesswrong.com/posts/MquvZCGWyYinsN49c/range-and-forecasting-accuracyhttps://www.openphilanthropy.org/research/how-feasible-is-long-range-forecasting/https://forum.effectivealtruism.org/topics/long-range-forecasting\nI sampled ten forecasts where probabilities were given on a 10 year timescale, and five of them (Subtitles, Transcribe, Top forty, Random game, Explain) gave later forecasts when asked with a ‘probability in N years’ framing rather than a ‘year that the probability is M’ framing, three of them (Video scene, Read aloud, Atari) gave the same forecasts, and two of them (Rosetta, Taylor) gave an earlier forecast. 
This is why I disagree it leads to consistently later forecasts.\n", "url": "https://aiimpacts.org/scoring-forecasts-from-the-2016-expert-survey-on-progress-in-ai/", "title": "Scoring forecasts from the 2016 “Expert Survey on Progress in AI”", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2023-03-02T00:37:23+00:00", "paged_url": "https://aiimpacts.org/feed?paged=1", "authors": ["Harlan Stewart"], "id": "b05ca6e9831f901b0370a4c1982ced7d", "summary": []} {"text": "How popular is ChatGPT? Part 1: more popular than Taylor Swift\n\nHarlan Stewart, 23 February 2023\nIntroduction\nPublic attention toward AI seems much higher after the release of ChatGPT at the end of November. But how much higher is it? To better understand this, I looked at search data from Google Trends about ChatGPT, OpenAI, AI, and AI Alignment. Unfortunately, Google Trends only shares relative search volumes instead of the number of searches made for a term or topic. I compared these relative search volumes to other non-AI topics, such as Taylor Swift, to make them more useful. This is similar to adding a familiar “for scale” object in a product photo.\nMagnetic-core memory with a quarter for scale, by Jud McCranie\nHow to read these graphs\n\nIn the first graph, the data is about searches for the terms in quotation marks, which are exact search terms. In the others, the data is about search “topics,” which are collections of various search terms related to a topic, as defined by Google Trends.\nThe vertical axes of these graphs are relative search volume, defined as the percentage of the peak search volume in that graph.\n\nData\nChatGPT is mainstream\n\nFor the time that ChatGPT has been publicly available since November 30 2022, US searches for it outnumbered US searches for Taylor Swift or Drake. However, there were only around a third as many searches for ChatGPT as searches for Wordle, and Wordle itself had only around a third of the search volume that it did in Spring 2022.\nAmericans suddenly know about OpenAI\n\nFor the time that OpenAI has existed, since December 10 2015, Americans usually searched for it less than for Blockbuster Video, a retailer that closed in 2014. In the months since ChatGPT was announced, American searches for OpenAI have increased by around 15x to a volume similar to that for Samsung.\nInterest in AI evolved from dinosaurs to birds\n\nFor most of the last decade, there has been a similar number of global searches about AI as about dinosaurs. In the time since DALL-E 2’s beta was announced less than a year ago, global searches about AI have roughly tripled, rising to a volume of global searches similar to that about birds.\nAlignment interest is at an all-time high but still pretty low\n\nOver the last 10 years, global searches about AI alignment have risen from “digital scent technology” level to “colonization of the moon” level and possibly beyond. Searches about AI alignment seem to have roughly quadrupled in the last two years. Eyeballing this graph, it’s unclear to me whether the announcements of DALL-E 2 or ChatGPT had any significant effect on search volume.\nDiscussion\nChatGPT is receiving mainstream attention. Although I have not done any statistical analysis of these trends, it appears to me that the popularity of ChatGPT is also driving interest in both OpenAI as a company and AI in general. 
Interest in alignment is also on the rise but still about as obscure an interest as colonization of the moon.\nIt’s unclear whether interest in AI will continue to grow, plateau, or drop back to previous levels. This will likely depend on what near-term future progress in AI will look like. If you expect that AI-related news as interesting as ChatGPT will be rare, you might expect interest to decline as the hype fizzles out. If you expect that the pace of interesting AI advancements will continue at its current fast rate, you might expect interest in AI to continue to grow, perhaps becoming even more popular than birds.", "url": "https://aiimpacts.org/how-popular-is-chatgpt-part-1-more-popular-than-taylor-swift/", "title": "How popular is ChatGPT? Part 1: more popular than Taylor Swift", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2023-02-24T02:47:04+00:00", "paged_url": "https://aiimpacts.org/feed?paged=1", "authors": ["Harlan Stewart"], "id": "9cc16377337120549c6c6b1d5610ed67", "summary": []} {"text": "The public supports regulating AI for safety\n\nZach Stein-Perlman, 16 February 2023\nA high-quality American public survey on AI, Artificial Intelligence Use Prompts Concerns, was released yesterday by Monmouth. Some notable results:\n\n9% say AI1 would do more good than harm vs 41% more harm than good (similar to responses to a similar survey in 2015)\n55% say AI could eventually pose an existential threat (up from 44% in 2015)\n55% favor “having a federal agency regulate the use of artificial intelligence similar to how the FDA regulates the approval of drugs and medical devices”\n60% say they have “heard about A.I. products – such as ChatGPT – that can have conversations with you and write entire essays based on just a few prompts from humans”\n\nWorries about safety and support of regulation echoes other surveys:\n\n71% of Americans agree that there should be national regulations on AI (Morning Consult 2017)\nThe public is concerned about some AI policy issues, especially privacy, surveillance, and cyberattacks (GovAI 2019)\nThe public is concerned about various negative consequences of AI, including loss of privacy, misuse, and loss of jobs (Stevens / Morning Consult 2021)\n\nSurveys match the anecdotal evidence from talking to Uber drivers: Americans are worried about AI safety and would support regulation on AI. Perhaps there is an opportunity to improve the public’s beliefs, attitudes, and memes and frames for making sense of AI; perhaps better public opinion would enable better policy responses to AI or actions from AI labs or researchers.\nPublic desire for safety and regulation is far from sufficient for a good government response to AI. But it does mean that the main challenge for improving government response is helping relevant actors believe what’s true, developing good affordances for them, and helping them take good actions— not making people care enough about AI to act at all.", "url": "https://aiimpacts.org/the-public-supports-regulating-ai-for-safety/", "title": "The public supports regulating AI for safety", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2023-02-17T04:00:53+00:00", "paged_url": "https://aiimpacts.org/feed?paged=1", "authors": ["Zach Stein-Perlman"], "id": "4bc8bb5caa344f940cf6c23567418909", "summary": []} {"text": "Whole Bird Emulation requires Quantum Mechanics\n\nJeffrey Heninger, 14 February 2023\nEpistemic status: Written for engagement. 
More sober analysis coming soon.\n\nBird navigation is surprisingly cruxy for the future of AI.\n – Zach Stein-Perlman\n\nThis seems pretty wrong.\n – Richard Korzekwa\nBirds are astonishingly good at navigating, even over thousands of miles. The longest migration routes, of the arctic term, are only limited by the size of the globe. Homing pigeons can return home after being released 1800 km (1100 mi) away. White-crowned sparrows have been able to migrate to their wintering grounds after being displaced 3700 km (2300 mi) shortly before they began migration.\nHow they do this is not entirely understood. There seem to be multiple cues they respond to, which combine to give them an accurate ‘map’ and ‘compass’. Which cues are most important might be different for different species. Some of these cues include watching the stars & sun, low frequency sounds, long-range smells, and detecting the earth’s magnetic field. This last one is the most interesting. Birds can detect magnetic fields, and there is increasing consensus that the detection mechanism involves quantum mechanics (See Appendix for details).\nThe result is a precise detector of the magnetic field. It is located in the retina and transferred up the optical nerve to the brain, so birds can ‘see’ magnetic fields. Leaving aside questions like “What is it like to be a [Bird]?”, this result has implications for the difficulty of Whole Bird Emulation (WBE).\nWBE is important for understanding the future development of artificial intelligence. If we can put an upper bound on the difficulty of WBE, we have an upper bound on the difficulty of making AI that can do everything a bird can do. And birds can do lots of cool things: they know how to fly, they sing pretty songs, and they even drop nuts in front of cars ! \nIn order to put bounds on WBE, we need to determine how much resolution is needed in order to emulate everything a bird can do. Is it good enough to model a bird at the cellular level? Or at the protein level? Or do you need an even finer resolution?\nIn order to model the navigational ability of a bird, you need a quantum mechanical description of the spin state of a pair of electrons. This is extremely high resolution.\nA few caveats:\n\nNot all parts of a bird require quantum mechanics to describe their macroscopic behavior. You can likely get away with coarse-graining most of the bird at a much higher level.\nThis is a simple quantum system, so it’s not hard to figure out the wave function over the singlet and triplet states.\nWhat you need to know to determine the behavior of the bird is the concentration of the two final products as a function of the external magnetic field. Once this (quantum mechanical) calculation is done, you likely don’t need to model the subsequent evolution of the bird using quantum mechanics.\n\nOn the other hand:\n\nBirds are extremely complicated things, so it is always somewhat surprising when we understand anything in detail about them.\nIf quantum mechanics is necessary to understand the macroscopic behavior of some part of a bird, we should think that it is more likely that quantum mechanics is necessary to understand the macroscopic behavior of other parts of a bird too.\nIf there are other parts of a bird which depend on quantum mechanics in a more complicated way, or if the macroscopic response cannot be well modeled using classical probabilities, we almost certainly would not have discovered it. Getting good empirical evidence for even simple models of biological systems is hard. 
Getting good empirical evidence for complex models of biological systems is much harder.\n\nWBE requires a quantum mechanical calculation in order to describe at least one macroscopic behavior of birds. This dramatically increases the resolution needed for at least parts of WBE and the overall expected difficulty of WBE. If your understanding of artificial intelligence would have predicted that Whole Bird Emulation would be much simpler than this, you should update accordingly.\nUnless, of course, Birds Aren’t Real.\nFurther Reading\n\nLambert et al. Quantum Biology. Nature Physics 9. (2013) https://quantum.ch.ntu.edu.tw/ycclab/wp-content/uploads/2015/01/Nat-Phys-2013-Lambert.pdf.\nHolland. True navigation in birds: from quantum physics to global migration. Journal of Zoology 293. (2014) https://zslpublications.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jzo.12107.\nRitz. Quantum effects in biology: Bird navigation. Procedia Chemistry 3. (2011) https://www.sciencedirect.com/science/article/pii/S1876619611000738.\n\nAppendix\nHere is a brief description of how a bird’s magnetic sense seems to work:\nA bird’s retina contains some pigments called cryptochromes. When blue or green light (<570 nm) is absorbed by the pigment, an electron is transferred from one molecule to another. This electron had previously been paired with a different electron, so after the transfer, there is now an excited radical pair. Initially, the spins of the two electrons are anti-parallel (they initially are in the singlet state). An external magnetic magnetic field can cause one of the electrons to flip so they become parallel (they transition to a triplet state). Transitions can also occur due to interactions with the nuclear spins, so it is better to think of the external magnetic field as changing the rate at which transitions happen instead of introducing entirely new behavior. The excited singlet state decays back to the original state of the cryptochrome, while the excited triplet state decays into a different product. Neurons in the retina can detect the change in the relative concentration of these two products, providing a measurement of the magnetic field.\nThis model has made several successful predictions. (1) Cryptochromes were originally known from elsewhere in biology. This theory predicted that they, or another pigment which produces radical pairs, would be found in birds’ eyes. (2) Low amplitude oscillating magnetic fields with a frequency of between 1-100 MHz should also affect the transition between the singlet and triplet states. Exposing birds to these fields disrupts their ability to navigate.", "url": "https://aiimpacts.org/whole-bird-emulation-requires-quantum-mechanics/", "title": "Whole Bird Emulation requires Quantum Mechanics", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2023-02-14T23:47:25+00:00", "paged_url": "https://aiimpacts.org/feed?paged=2", "authors": ["Jeffrey Heninger"], "id": "239ffb9c72a41cae9ec0f4f90f4a969f", "summary": []} {"text": "Framing AI strategy\n\nZach Stein-Perlman, 6 February 2023\nStrategy is the activity or project of doing research to inform interventions to achieve a particular goal.1 AI strategy is strategy from the perspective that AI is important, focused on interventions to make AI go better. 
An analytic frame is a conceptual orientation that makes salient some aspects of an issue, including cues for what needs to be understood, how to approach the issue, what your goals and responsibilities are, what roles to see yourself as having, what to pay attention to, and what to ignore.\nThis post discusses ten strategy frames, focusing on AI strategy. Some frames are comprehensive approaches to strategy; some are components of strategy or prompts for thinking about an aspect of strategy. This post focuses on meta-level exploration of frames, but the second and last sections have some object-level thoughts within a frame.\nSections are overlapping but independent; focus on sections that aren’t already in your toolbox of approaches to strategy.\nEpistemic status: exploratory, brainstormy.\nMake a plan\nSee Jade Leung’s Priorities in AGI governance research (2022) and How can we see the impact of AI strategy research? (2019).\nOne output of strategy is a plan describing relevant (kinds of) actors’ behavior. More generally, we can aim for a playbook– something like a function from (sets of observations about) world-states to plans. A plan is good insofar as it improves important decisions in the counterfactual where you try to implement it, in expectation.\nTo make a plan or playbook, identify (kinds of) actors that might be affectable, then figure out\n\nwhat they could do,\nwhat it would be good for them to do,\nwhat their incentives are (if relevant), and then\nhow to cause them to act better.\n\nIt is also possible to focus on decisions rather than actors: determine what decisions you want to affect (presumably because they’re important and affecting them seems tractable) and how you can affect them.\nFor AI, relevant actors include AI labs, states (particularly America), non-researching non-governmental organizations (particularly standard-setters), compute providers, and the AI risk and EA communities.2\nInsofar as an agent (not necessarily an actor that can take directly important actions) has distinctive abilities and is likely to try to execute good ideas you have, it can be helpful to focus on what the agent can do or how to leverage the agent’s distinctive abilities rather than backchain from what would be good.3\nAffordances\nAs in the previous section, a natural way to improve the future is to identify relevant actors, determine what it would be good for them to do, and cause them to do those things. “Affordances” in strategy are “possible partial future actions that could be communicated to relevant actors, such that they would take similar actions.”4 The motivation for searching for and improving affordances is that there probably exist actions that would be great and relevant actors would be happy to take, but that they wouldn’t devise or recognize by default. Finding great affordances is aided by a deep understanding of how an actor thinks and its incentives, as well as a deep external understanding of the actor, to focus on its blind spots and identify feasible actions.5 Separately, the actor’s participation would sometimes be vital.\nAffordances are relevant not just to cohesive actors but also to non-structured groups. For example, for AI strategy, discovering affordances for ML researchers (as individuals or for collective action) could be valuable. 
Perhaps there also exist great possible affordances that don’t depend much on the actor– generally helpful actions that people just aren’t aware of.\nFor AI, two relevant kinds of actors are states (particularly America) and AI labs. One way to discover affordances is to brainstorm the kinds of actions particular actors can take, then find creative new plans within that list. Going less meta, I made lists of the kinds of actions states and labs can take that may be strategically significant, since such lists seem worthwhile and I haven’t seen anything like them.\nKinds of things states can do that may be strategically relevant (or consequences or characteristics of possible actions):\n\nRegulate (and enforce regulation in their jurisdiction and investigate possible violations)\nExpropriate property and nationalize companies (in their territory)\nPerform or fund research (notably including through Manhattan/Apollo-style projects)\nAcquire capabilities (notably including military and cyber capabilities)\nSupport particular people, companies, or states\nDisrupt or attack particular people, companies, or states (outside their territory)\nAffect what other actors believe on the object level\n\nShare information\nMake information salient in a way that predictably affects beliefs\nExpress attitudes that others will follow\n\n\nNegotiate with other actors, or affect other actors’ incentives or meta-level beliefs\nMake agreements with other actors (notably including contracts and treaties)\nEstablish standards, norms, or principles\nMake unilateral declarations (as an international legal commitment) [less important]\n\nKinds of things AI labs6 can do—or choose not to do—that may be strategically relevant (or consequences or characteristics of possible actions):\n\nDeploy an AI system\nPursue capabilities\n\nPursue risky (and more or less alignable systems) systems\nPursue systems that enable risky (and more or less alignable) systems\nPursue weak AI that’s mostly orthogonal to progress in risky stuff for a specific (strategically significant) task or goal\n\nThis could enable or abate catastrophic risks besides unaligned AI\n\n\n\n\nDo alignment (and related) research (or: decrease the alignment tax by doing technical research)\n\nIncluding interpretability and work on solving or avoiding alignment-adjacent problems like decision theory and strategic interaction and maybe delegation involving multiple humans or multiple AI systems\n\n\nAdvance global capabilities\n\nPublish capabilities research\nCause investment or spending in big AI projects to increase\n\n\nAdvance alignment (or: decrease the alignment tax) in ways other than doing technical research\n\nSupport and coordinate with external alignment researchers\n\n\nAttempt to align a particular system (or: try to pay the alignment tax)\nInteract with other labs7\n\nCoordinate with other labs (notably including coordinating to avoid risky systems)\n\nMake themselves transparent to each other\nMake themselves transparent to an external auditor\nMerge\nEffectively commit to share upsides\nEffectively commit to stop and assist\n\n\nAffect what other labs believe on the object level (about AI capabilities or risk in general, or regarding particular memes)\n\nPractice selective information sharing\nDemonstrate AI risk (or provide evidence about it)\n\n\nNegotiate with other labs, or affect other labs’ incentives or meta-level beliefs\n\n\nAffect public opinion, media, and politics\n\nPublish research\nMake demos or public statements\nRelease or 
deploy AI systems\n\n\nImprove their culture or operational adequacy\n\nImprove operational security\nAffect attitudes of effective leadership\nAffect attitudes of researchers\nMake a plan for alignment (e.g., OpenAI’s); share it; update and improve it; and coordinate with capabilities researchers, alignment researchers, or other labs if relevant\nMake a plan for what to do with powerful AI (e.g., CEV or some specification of long reflection), share it, update and improve it, and coordinate with other actors if relevant\nImprove their ability to make themselves (selectively) transparent\n\n\nTry to better understand the future, the strategic landscape, risks, and possible actions\nAcquire resources\n\nE.g., money, hardware, talent, influence over states, status/prestige/trust\nCapture scarce resources\n\nE.g., language data from language model users\n\n\n\n\nAffect other actors’ resources\n\nAffect the flow of talent between labs or between projects\n\n\nPlan, execute, or participate in pivotal acts or processes\n\n(These lists also exist on the AI Impacts wiki, where they may be improved in the future: Affordances for states and Affordances for AI labs. These lists are written from an alignment-focused and misuse-aware perspective, but prosaic risks may be important too.)\nMaybe making or reading lists like these can help you notice good tactics. But innovative affordances are necessarily not things that are already part of an actor’s behavior.\nMaybe making lists of relevant things similar actors have done in the past would illustrate possible actions, build intuition, or aid communication.\nThis frame seems like a potentially useful complement to the standard approach backchain from goals to actions of relevant actors. And it seems good to understand actions that should be items on lists like these—both like understanding these list-items well and expanding or reframing these lists—so you can notice opportunities.\nIntermediate goals\nNo great sources are public, but illustrating this frame see “Catalysts for success” and “Scenario variables” in Marius Hobhannon et al.’s What success looks like (2022). On goals for AI labs, see Holden Karnofsky’s Nearcast-based “deployment problem” analysis (2022).\nAn intermediate/instrumental goal is a goal that is valuable because it promotes one or more final/terminal goals. (“Goal” sounds discrete and binary, like “there exists a treaty to prevent risky AI development,” but often should be continuous, like “gain resources and influence.”) Intermediate goals are useful because we often need more specific and actionable goals than “make the future go better” or “make AI go better.”\nKnowing what specifically would be good for people to do is a bottleneck on people doing useful things. If the AI strategy community had better strategic clarity, in terms of knowledge about the future and particularly intermediate goals, it could better utilize people’s labor, influence, and resources. Perhaps an overlapping strategy framing is finding or unlocking effective opportunities to spend money. 
See Luke Muehlhauser’s A personal take on longtermist AI governance (2021).8\nIt is also sometimes useful to consider goals about particular actors.\nThreat modeling\nIllustrating threat modeling for the technical component of AI misalignment, see the DeepMind safety team’s Threat Model Literature Review and Clarifying AI X-risk (2022), Sam Clarke and Sammy Martin’s Distinguishing AI takeover scenarios (2021), and GovAI’s Survey on AI existential risk scenarios (2021).\nThe goal of threat modeling is deeply understanding one or more risks for the purpose of informing interventions. A great causal model of a threat (or class of possible failures) can let you identify points of intervention and determine what countering the threat would require.\nA related project involves assessing all threats (in a certain class) rather than a particular one, to help account for and prioritize between different threats.\nTechnical AI safety research informs AI strategy through threat modeling. A causal model of (part of) AI risk can generate a model of AI risk abstracted for strategy, with relevant features made salient and irrelevant details black-boxed. This abstracted model gives us information including necessary and sufficient conditions or intermediate goals for averting the relevant threats. These in turn can inform affordances, tactics, policies, plans, influence-seeking, and more.\nTheories of victory\nI am not aware of great sources, but illustrating this frame see Marius Hobhannon et al.’s What success looks like (2022).\nConsidering theories of victory is another natural frame for strategy: consider scenarios where the future goes well, then find interventions to nudge our world toward those worlds. (Insofar as it’s not clear what the future going well means, this approach also involves clarifying that.) To find interventions to make our world like a victorious scenario, I sometimes try to find necessary and sufficient conditions for the victory-making aspect of that scenario, then consider how to cause those conditions to hold.9\nGreat threat-model analysis can be an excellent input to theory-of-victory analysis, to clarify the threats and what their solutions must look like. And it could be useful to consider scenarios in which the future goes well and scenarios where it doesn’t, then examine the differences between those worlds.\nTactics and policy development\nCollecting progress on possible government policies, see GovAI’s AI Policy Levers (2021) and GCRI’s Policy ideas database.\nGiven a model of the world and high-level goals, we must figure out how to achieve those goals in the messy real world. For a goal, what would cause success, which of those possibilities are tractable, and how could they become more likely to occur? For a goal, what are necessary and sufficient conditions for achievement and how could those occur in the real world?\nMemes & frames\nI am not aware of great sources on memes & frames in strategy, but see Jade Leung’s How can we see the impact of AI strategy research? (2019). See also the academic literature on framing, e.g. Robert Entman’s Framing (1993).\n(“Frames” in this context refers to the lenses through which people interpret the world, not the analytic, research-y frames discussed in this post.)\nIf certain actors held certain attitudes, they would make better decisions. One way to affect attitudes is to spread memes. 
A meme could be explicit agreement with a specific proposition; the attitude that certain organizations, projects, or goals are (seen as) shameful; the attitude that certain ideas are sensible and respectable or not; or merely a tendency to pay more attention to something. The goal of meme research is finding good memes—memes that would improve decisions if widely accepted (or accepted by a particular set of actors10) and are tractable to spread—and figuring out how to spread them. Meme research is complemented by work actually causing those memes to spread.\nFor example, potential good memes in AI safety include things like “AI is powerful but not robust, and in particular [specification gaming or Goodhart or distributional shift or adversarial attack] is a big deal.” Perhaps “misalignment as catastrophic accidents” is easier to understand than “misalignment as power-seeking agents,” or vice versa. And perhaps misuse risk is easy to understand and unlikely to be catastrophically misunderstood, but less valuable-if-spread.\nA frame tells people what to notice and how to make sense of an aspect of the world. Frames can be internalized by a person or contained in a text. Frames for AI might include frames related to consciousness, Silicon Valley, AI racism, national security, or specific kinds of applications such as chatbots or weapons.\nHigher-level research could also be valuable. This would involve topics like how to communicate ideas about AI safety or even how to communicate ideas and how groups form beliefs.\nThis approach to strategy could also involve researching how to stifle harmful memes, like perhaps “powerful actors are incentivized to race for highly capable AI” or “we need a Manhattan Project for AI.”\nExploration, world-modeling, and forecasting\nSometimes strategy greatly depends on particular questions about the world and the future.\nMore generally, you can reasonably expect that increasing clarity about important-seeming aspects of the world and the future will inform strategy and interventions, even without thinking about specific goals, actors, or interventions. For AI strategy, exploration includes central questions about the future of AI and relevant actors, understanding the effects of possible actions, and perhaps also topics like decision theory, acausal trade, digital minds, and anthropics.\nConstructing a map is part of many different approaches to strategy. This roughly involves understanding the landscape and discovering analytically useful concepts, like reframing “victory means causing AI systems to be aligned” to “it’s necessary and sufficient to cause the alignment tax to be paid,” so it’s necessary and sufficient to reduce the alignment tax and increase the amount-of-tax-that-would-be-paid such that the latter is greater.\nOne exploratory, world-model-y goal is a high-level understanding of the strategic landscape. One possible approach to this goal is creating a map of relevant possible events, phenomena, actions, propositions, uncertainties, variables, and/or analytic nodes.\nNearcasting\nDiscussing nearcasting, see Holden Karnofsky’s AI strategy nearcasting (2022). Illustrating nearcasting, see Karnofsky’s Nearcast-based “deployment problem” analysis (2022).\nHolden Karnofsky defines “AI strategy nearcasting” as\ntrying to answer key strategic questions about transformative AI, under the assumption that key events (e.g., the development of transformative AI) will happen in a world that is otherwise relatively similar to today’s.
One (but not the only) version of this assumption would be “Transformative AI will be developed soon, using methods like what AI labs focus on today.”\nWhen I think about AI strategy nearcasting, I ask:\n\nWhat would a near future where powerful AI could be developed look like?\nIn this possible world, what goals should we have?\nIn this possible world, what important actions could relevant actors take?\n\nAnd what facts about the world make those actions possible? (For example, some actions would require that a lab has certain AI capabilities, or most people believe a certain thing about AI capabilities, or all major labs believe in AI risk.)\n\n\nIn this possible world, what interventions are available?\nRelative to this possible world, how should we expect the real world to be different?11\n\n\nAnd how do those differences affect the goals we should have, and the interventions that are available to us?\n\nNearcasting seems to be a useful tool for\n\npredicting relevant events concretely and\nforcing you to notice how you think the world will be different in the future and how that matters.\n\nLeverage\nI’m not aware of other public writeups on leverage. See also Daniel Kokotajlo’s What considerations influence whether I have more influence over short or long timelines? (2020). Related concept: crunch time.\nWhen doing strategy and planning interventions, what should you focus on?\nA major subquestion is: how should you prioritize focus between possible worlds?12 Ideally you would prioritize working on the worlds that working on has highest expected value, or something like the worlds that have the greatest product of probability and how much better they would go if you worked on them. But how can you guess which worlds are high-leverage for you to work on? There are various reasons to prioritize certain possible worlds, both for reasoning about strategy and for evaluating possible interventions. For example, it seems higher-leverage to work on making AI go well conditional on human-level AI appearing in 2050 than in 3000: the former is more foreseeable, more affectable, and more neglected.\nWe currently lack a good account of leverage, so (going less meta) I’ll begin one for AI strategy here. 
Given a baseline of weighting possible worlds by their probability, all else equal, you should generally:\n\nUpweight worlds that you have more control over and that you can better plan for\n\nUpweight worlds with short-ish timelines (since others will exert more influence over AI in long-timelines worlds, and since we have more clarity about the nearer future, and since we can revise strategies in long-timelines worlds)\nTake into account future strategy research\n\nFor example, if you focus on the world in 2030 (or assume that human-level AI is developed in 2030) you can be deferring, not neglecting, some work on 2040\nFor example, if you focus on worlds in which important events happen without much advance warning or clearsightedness, you can be deferring, not neglecting, some work on worlds in which important events happen foreseeably\n\nFocus on what you can better plan for and influence; for AI, perhaps this means:\n\nShort timelines\nThe deep learning paradigm continues\nPowerful AI is resource-intensive\nMaybe some propositions about risk awareness, warning shots, and world-craziness\n\nUpweight worlds where the probability of victory is relatively close to 50%13\n\nUpweight more neglected worlds (think on the margin)\n\nUpweight short-timelines worlds insofar as there is more non-AI existential risk in long-timelines worlds\nUpweight analysis that better generalizes to or improves other worlds\nNotice the possibility that you live in a simulation (if that is decision-relevant; unfortunately, the practical implications of living in a simulation are currently unclear)\nUpweight worlds that you have better personal fit for analyzing\n\nUpweight worlds where you have more influence, if relevant\n\nConsider side effects of doing strategy, including what you gain knowledge about, testing fit, and gaining credible signals of fit\n\nIn practice, I tentatively think the biggest (analytically useful) considerations for weighting worlds beyond probability are generally:\n\nShort timelines\n\nMore foreseeable14\nMore affectable\nMore neglected (by the AI strategy community)\n\nFuture people can work on the further future\n\nThe AI strategy field is likely to be bigger in the future\n\nLess planning or influence exerted from outside the AI strategy community\n\nFast takeoff15\n\nShorter, less foreseeable a certain time in advance, and less salient to the world in advance\n\nMore neglected by the AI strategy community; the community would have a longer clear-sighted period to work on slow takeoff\nLess planning or influence exerted from outside the AI strategy community\n\n(But there are presumably diminishing returns to focusing on particular worlds, at least at the community level, so the community should diversify the worlds it analyzes.)
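As a toy illustration of the heuristic above—weighting worlds by roughly the product of their probability and how much better they would go if you worked on them—here is a minimal sketch. It is my own illustration, not from the post, and the world names and numbers are invented:

```python
# Toy prioritization over possible worlds: score each world by
# (probability) x (marginal improvement if you focus on it).
# All inputs below are made up for illustration.

worlds = {
    # name: (probability, improvement if you focus on this world)
    "human-level AI by 2050, deep learning paradigm": (0.30, 0.020),
    "human-level AI by 2050, new paradigm":           (0.10, 0.010),
    "human-level AI around 3000":                     (0.05, 0.001),  # less foreseeable, affectable, neglected
}

scores = {name: p * delta for name, (p, delta) in worlds.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:.4f}  {name}")
```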
And I’m most confused about\n\nUpweighting worlds where probability of victory is closer to 50% (I’m confused about what the probability of victory is in various possible worlds),\nHow leverage relates to variables like total influence exerted to affect AI (the rest of the world exerting influence means that you have less relative influence insofar as you’re pulling the rope along similar axes, but some interventions are amplified by something like greater attention on AI) (and related variables like attention on AI and general craziness due to AI), and\nThe probability and implications of living in a simulation.\n\nA background assumption or approximation in this section is that you allocate research toward a world and the research is effective just if that world obtains. This assumption is somewhat crude: the impact of most research isn’t so binary, being fully effective in some possible futures and totally ineffective in the rest.16 And thinking in terms of influence over a world is crude: influence depends on the person and on the intervention. Nevertheless, reasoning about leverage in terms of worlds to allocate research toward might sometimes be useful for prioritization. And we might discover a better account of leverage.\nLeverage considerations should include not just prioritizing between possible worlds but also prioritizing within a world. For example, it seems high-leverage to focus on important actors’ blind spots and on certain important decisions or “crunchy” periods. And for AI strategy, it might be high-leverage to focus on the first few deployments of powerful AI systems.\n\nStrategy work is complemented by\n\nactually executing interventions, especially causing actors to make better decisions,\ngaining resources to better execute interventions and improve strategy, and\nfield-building to better execute interventions and improve strategy.\n\nAn individual’s strategy work is complemented by informing the relevant community of their findings (e.g., for AI strategy, the AI strategy community).\nIn this post, I don’t try to make an ontology of AI strategy frames, or do comparative analysis of frames, or argue about the AI strategy community’s prioritization between frames.17 But these all seem like reasonable things for someone to do.\nRelated sources are linked above as relevant; see also Sam Clarke’s The longtermist AI governance landscape (2022), Allan Dafoe’s AI Governance: Opportunity and Theory of Impact (2020), and Matthijs Maas’s Strategic Perspectives on Long-term AI Governance (2022).\nIf I wrote a post on “Framing AI governance,” it would substantially overlap with this list, and it would substantially draw on The longtermist AI governance landscape. See also Allan Dafoe’s AI Governance: A Research Agenda (2018) and hanadulset and Caroline Jeanmaire’s A Map to Navigate AI Governance (2022). I don’t know whether an analogous “Framing technical AI safety” would make sense; if so, I would be excited about such a post.\nMany thanks to Alex Gray. 
Thanks also to Linch Zhang for discussion of leverage and to Katja Grace, Eli Lifland, Rick Korzekwa, and Jeffrey Heninger for comments on a draft.\nFootnotes", "url": "https://aiimpacts.org/framing-ai-strategy/", "title": "Framing AI strategy", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2023-02-06T19:00:27+00:00", "paged_url": "https://aiimpacts.org/feed?paged=2", "authors": ["Zach Stein-Perlman"], "id": "ffb07b10b91a6d5898376886a3cb68c6", "summary": []} {"text": "Product safety is a poor model for AI governance\n\nRick Korzekwa, February 1, 2023\nNote: This post is intended to be accessible to readers with relatively little background in AI safety. People with a firm understanding of AI safety may find it too basic, though they may be interested in knowing which kinds of policies I have been encountering or telling me if I’ve gotten something wrong.\nI have recently encountered many proposals for policies and regulations intended to reduce risk from advanced AI. These proposals are highly varied and most of them seem to be well-thought-out, at least on some axes. But many of them fail to confront the technical and strategic realities of safely creating powerful AI, and they often fail in similar ways. In this post, I will describe a common type of proposal1 and give a basic overview of the reasons why it is inadequate. I will not address any issues related to the feasibility of implementing such a policy.\nCaveats\n\nIt is likely that I have misunderstood some proposals and that they are already addressing my concerns. Moreover, I do not recommend dismissing a proposal because it pattern-matches to product safety on the surface level.\nThese approaches may be good when combined with others. It is plausible to me that an effective and comprehensive approach to governing AI will include product-safety-like regulations, especially early in the process when we’re still learning and setting the groundwork for more mature policies.\n\nThe product safety model of AI governance\nA common AI policy structure, which I will call the ‘product safety model of AI governance’2, seems to be built on the assumption that, while the processes involved in creating powerful AI may need to be regulated, the harm from a failure to ensure safety occurs predominantly when the AI has been deployed into the world. Under this model, the primary feedback loop for ensuring safety is based on the behavior of the model after it has been built. The product is developed, evaluated for safety, and either sent back for more development or allowed to be deployed, depending on the evaluation. For a typical example, here is a diagram from a US Department of Defense report on responsible AI:3\n\nThe system in this diagram is not formally evaluated for safety or performance until after “Acquisition/Development”.4\nI do not find it surprising that this model is so common. Most of the time when we are concerned about risk from technology, we are worried about what happens when the technology has been released into the world. A faulty brake line on a car is not much of a concern to the public until the car is on public roads, and the facebook feed algorithm cannot be a threat to society until it is used to control what large numbers of people see on their screens. I also think it is reasonable to start with regulations that are already well-understood, and work from there. 
But this model, on its own, is inadequate for AI, for reasons I will explain in the next section.\nThis model fails when applied to advanced AI\nThis approach cannot be relied on to prevent the major risks associated with advanced AI. I will give two reasons for this, though there may be others.\nSafety cannot (yet) be evaluated reliably in a fully-developed AI\nA fundamental difficulty of ensuring that AI will behave as intended is that examining an AI or observing its behavior during development and testing is not a reliable indicator of how it will behave once deployed. This is a difficult, unsolved problem that resists simple solutions.5 I will not explain this problem in detail, but I will list some aspects of it here, along with real-world examples, and you can read more about it elsewhere.67\nAI may learn approaches to problems or, if sufficiently advanced, acquire goals that result in behavior that seems to work in training but will fail in deployment.\n\nA system trained to classify images performed extremely well, but only because it had learned to detect images based on their storage location.8\nA simulated robot was trained using human feedback. It was supposed to grasp an object, but instead it learned to place its hand between the object and the camera, giving the appearance of grasping it.9\n\nAI can find solutions to problems that cannot be reliably evaluated by humans.\n\nAn evolutionary algorithm for designing circuits on a real circuit board created a circuit that produced the desired output, but the researchers were unable to determine how it worked.10\n\nAI with goals that are at odds with human interests will have an incentive to avoid behavior that reveals its goals.\n\nCreatures undergoing simulated evolution learned to identify when they were in an evaluation environment and when they were in the “real” environment, and used this to hide their unwanted behavior.11\n\nUnsafe AI can cause substantial harm before deployment\nUnlike most technologies, AI can pose a serious risk even during development and testing. As AI capabilities approach and surpass human capabilities, the capacity for AI to cause harm will increase and it will become more difficult to ensure that AI is unable to cause harm outside its development and testing environment. I will not go into lots of detail on this, but I will outline the basic reasons for concern.\nAI is likely to seek influence outside its intended domain\nWhether a given AI system will attempt to gain influence outside the environment in which it is trained and tested is difficult to determine, but we have ample reason to be cautious. Learning to create and execute plans toward a particular goal is something we should expect by default within a broad range of AI applications as capabilities increase12, and for a system to do well on real-world tasks, it will need to make use of information about the wider world. This is something that current language models are already doing on their own.13 There are also examples of AI finding vulnerabilities that allow it to manipulate things it was not intended to have access to, such as a game-playing AI that exploited a glitch in the game to increase its score directly.14\nIt is difficult to constrain advanced AI to a particular domain\nAI with access to the Internet may create copies of itself, gain access to influential systems such as critical infrastructure, or manipulate people into taking actions that are harmful or increase its influence. 
AI without direct access to the outside world may be able to gain access by manipulating people involved in its development or by exploiting hardware vulnerabilities.15\nExecuting such an escape from a datacenter may not require very high capability. Surprisingly simple social engineering techniques16 can be successful at getting a person to do something that is not in their interest. Malware that can propagate between computers and control them has existed for a long time17 and does not require the software to have any high-level decision making capability at all.\nImportantly, detecting such escape attempts may be very difficult. Almost any AI will have a strong incentive not to be observed taking dangerous, unintended actions, and current systems sometimes find unintended solutions that are invisible to humans.18 Additionally, AI in training is subject to the evaluation difficulties explained in the first section.\nAI breakout could be catastrophic\nOnce AI has gained influence outside its intended domain, it could cause immense harm. How likely this is to happen and how much harm it may cause is a big topic19 that I will not try to cover here, but there are a few things worth pointing out in this context:\n\nMost of the arguments around catastrophic AI risk are agnostic to whether AI gains influence before or after deployment.\nThe basic premise behind catastrophic AI risk is not just that it may use whatever influence it is granted to cause harm, but that it will seek additional influence. This is at odds with the basic idea behind the product safety model, which is that we prevent harm by only granting influence to AI that has been verified as safe.\nThe overall stakes are much higher than those associated with things we normally apply the product safety model to, and unlike most risks from technology, the costs are wide reaching and almost entirely external.\n\nAdditional comments\nI am reluctant to offer alternative models, in part because I want to keep the scope of this post narrow, but also because I’m not sure which approaches are viable. But I can at least provide some guiding principles:\n\nPolicies intended to mitigate risk from advanced AI must recognize that verifying the safety of highly capable AI systems is an unsolved problem that may remain unsolved for the foreseeable future.\nThese policies must also recognize that public risk from AI begins during development, not at the time of deployment.\nMore broadly, it is important to view AI risk as distinct from most other technologies. It is poorly understood, difficult to control in a reliable and verifiable way, and may become very dangerous before it becomes well understood.\n\nFinally, I think it is important to emphasize that, while it is tempting to wait until we see the warning signs that AI is becoming dangerous before enacting such policies, capabilities are advancing rapidly and AI may acquire the ability to seek power and conceal its actions in a sophisticated way with relatively little warning.\nAcknowledgements\nThanks to Harlan Stewart for giving feedback and helping with citations, Irina Gueorguiev for productive conversations on this topic, and Aysja Johnson, Jeffrey Heninger, and Zach Stein-Perlman for feedback on an earlier draft. 
All mistakes are my own.\nNotes", "url": "https://aiimpacts.org/product-safety-is-a-poor-model-for-ai-governance/", "title": "Product safety is a poor model for AI governance", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2023-02-01T22:38:03+00:00", "paged_url": "https://aiimpacts.org/feed?paged=2", "authors": ["richardkorzekwa"], "id": "dfd79fdf4f9276f4b065e2410dd07a4f", "summary": []} {"text": "We don’t trade with ants\n\n(Crossposted from world spirit sock puppet)\nKatja Grace, 10 January 2023\nWhen discussing advanced AI, sometimes the following exchanges happens:\n“Perhaps advanced AI won’t kill us. Perhaps it will trade with us”\n“We don’t trade with ants”\nI think it’s interesting to get clear on exactly why we don’t trade with ants, and whether it is relevant to the AI situation.\nWhen a person says “we don’t trade with ants”, I think the implicit explanation is that humans are so big, powerful and smart compared to ants that we don’t need to trade with them because they have nothing of value and if they did we could just take it; anything they can do we can do better, and we can just walk all over them. Why negotiate when you can steal?\nI think this is broadly wrong, and that it is also an interesting case of the classic cognitive error of imagining that trade is about swapping fixed-value objects, rather than creating new value from a confluence of one’s needs and the other’s affordances. It’s only in the imaginary zero-sum world that you can generally replace trade with stealing the other party’s stuff, if the other party is weak enough.\nAnts, with their skills, could do a lot that we would plausibly find worth paying for. Some ideas:\n\nCleaning things that are hard for humans to reach (crevices, buildup in pipes, outsides of tall buildings)\nChasing away other insects, including in agriculture\nSurveillance and spying\nBuilding, sculpting, moving, and mending things in hard to reach places and at small scales (e.g. dig tunnels, deliver adhesives to cracks)\nGetting out of our houses before we are driven to expend effort killing them, and similarly for all the other places ants conflict with humans (stinging, eating crops, ..)\n(For an extended list, see ‘Appendix: potentially valuable things things ants can do’)\n\nWe can’t take almost any of this by force, we can at best kill them and take their dirt and the minuscule mouthfuls of our foods they were eating.\nCould we pay them for all this?\nA single ant eats about 2mg per day according to a random website, so you could support a colony of a million ants with 2kg of food per day. Supposing they accepted pay in sugar, or something similarly expensive, 2kg costs around $3. Perhaps you would need to pay them more than subsistence to attract them away from foraging freely, since apparently food-gathering ants usually collect more than they eat, to support others in their colony. So let’s guess $5.\nMy guess is that a million ants could do well over $5 of the above labors in a day. For instance, a colony of meat ants takes ‘weeks’ to remove the meat from an entire carcass of an animal. Supposing somewhat conservatively that this is three weeks, and the animal is a 1.5kg bandicoot, the colony is moving 70g/day. Guesstimating the mass of crumbs falling on the floor of a small cafeteria in a day, I imagine that it’s less than that produced by tearing up a single bread roll and spreading it around, which the internet says is about 50g. 
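To keep the back-of-the-envelope numbers above in one place, here is a minimal sketch that just grinds through them. The inputs are the rough guesses from the text (2 mg of food per ant per day, about $3 for 2 kg of sugar, a 1.5 kg carcass stripped over roughly three weeks, about 50 g of daily crumbs), and the variable names are mine:

```python
# Back-of-the-envelope check of the ant-colony economics sketched above.
# All inputs are the rough guesses from the text, not measurements.

ants_in_colony = 1_000_000
food_per_ant_g = 0.002        # ~2 mg of food per ant per day
sugar_price_per_kg = 3 / 2    # ~$3 for 2 kg of sugar
guessed_daily_pay = 5.0       # above-subsistence "wage" guessed in the text

food_needed_kg = ants_in_colony * food_per_ant_g / 1000
subsistence_cost = food_needed_kg * sugar_price_per_kg

# Work rate: a meat-ant colony stripping a ~1.5 kg carcass over ~3 weeks.
carcass_g, days = 1500, 21
moving_rate_g_per_day = carcass_g / days

cafeteria_crumbs_g = 50       # guessed daily crumbs on a small cafeteria floor

print(f"Colony food bill: ~${subsistence_cost:.2f}/day (pay guessed at ${guessed_daily_pay:.0f})")
print(f"Material moved: ~{moving_rate_g_per_day:.0f} g/day vs ~{cafeteria_crumbs_g} g of crumbs")
```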
So my guess is that an ant colony could clean the floor of a small cafeteria for around $5/day, which I imagine is cheaper than human sweeping (this site says ‘light cleaning’ costs around $35/h on average in the US). And this is one of the tasks where the ants have least advantages over humans. Cleaning the outside of skyscrapers or the inside of pipes is presumably much harder for humans than cleaning a cafeteria floor, and I expect is fairly similar for ants.\nSo at a basic level, it seems like there should be potential for trade with ants – they can do a lot of things that we want done, and could live well at the prices we would pay for those tasks being done.\nSo why don’t we trade with ants?\nI claim that we don’t trade with ants because we can’t communicate with them. We can’t tell them what we’d like them to do, and can’t have them recognize that we would pay them if they did it. Which might be more than the language barrier. There might be a conceptual poverty. There might also be a lack of the memory and consistent identity that allows an ant to uphold commitments it made with me five minutes ago.\nTo get basic trade going, you might not need much of these things though. If we could only communicate that their all leaving our house immediately would prompt us to put a plate of honey in the garden for them and/or not slaughter them, then we would already be gaining from trade.\nSo it looks like the the AI-human relationship is importantly disanalogous to the human-ant relationship, because the big reason we don’t trade with ants will not apply to AI systems potentially trading with us: we can’t communicate with ants, AI can communicate with us.\n(You might think ‘but the AI will be so far above us that it will think of itself as unable to communicate with us, in the same way that we can’t with the ants – we will be unable to conceive of most of its concepts’. It seems unlikely to me that one needs anything like the full palette of concepts available to the smarter creature to make productive trade. With ants, ‘go over there and we won’t kill you’ would do a lot, and it doesn’t involve concepts at the foggy pinnacle of human meaning-construction. The issue with ants is that we can’t communicate almost at all.)\nBut also: ants can actually do heaps of things we can’t, whereas (arguably) at some point that won’t be true for us relative to AI systems. (When we get human-level AI, will that AI also be ant level? Or will AI want to trade with ants for longer than it wants to trade with us? It can probably better figure out how to talk to ants.) However just because at some point AI systems will probably do everything humans do, doesn’t mean that this will happen on any particular timeline, e.g. the same one on which AI becomes ‘very powerful’. If the situation turns out similar to us and ants, we might expect that we continue to have a bunch of niche uses for a while.\nIn sum, for AI systems to be to humans as we are to ants, would be for us to be able to do many tasks better than AI, and for the AI systems to be willing to pay us grandly for them, but for them to be unable to tell us this, or even to warn us to get out of the way. Is this what AI will be like? No. AI will be able to communicate with us, though at some point we will be less useful to AI systems than ants could be to us if they could communicate.\nBut, you might argue, being totally unable to communicate makes one useless, even if one has skills that could be good if accessible through communication. 
So being unable to communicate is just a kind of being useless, and how we treat ants is an apt case study in treatment of powerless and useless creatures, even if the uselessness has an unusual cause. This seems sort of right, but a) being unable to communicate probably makes a creature more absolutely useless than if it just lacks skills, because even an unskilled creature is sometimes in a position to add value e.g. by moving out of the way instead of having to be killed, b) the corner-ness of the case of ant uselessness might make general intuitive implications carry over poorly to other cases, c) the fact that the ant situation can definitely not apply to us relative to AIs seems interesting, and d) it just kind of worries me that when people are thinking about this analogy with ants, they are imagining it all wrong in the details, even if the conclusion should be the same.\nAlso, there’s a thought that AI being as much more powerful than us as we are than ants implies a uselessness that makes extermination almost guaranteed. But ants, while extremely powerless, are only useless to us by an accident of signaling systems. And we know that problem won’t apply in the case of AI. Perhaps we should not expect to so easily become useless to AI systems, even supposing they take all power from humans.\nAppendix: potentially valuable things things ants can do\n\nClean, especially small loose particles or detachable substances, especially in cases that are very hard for humans to reach (e.g. floors, crevices, sticky jars in the kitchen, buildup from pipes while water is off, the outsides of tall buildings)\nChase away other insects\nPest control in agriculture (they have already been used for this since about 400AD)\nSurveillance and spying\nInvestigating hard to reach situations, underground or in walls for instance – e.g. see whether a pipe is leaking, or whether the foundation of a house is rotting, or whether there is smoke inside a wall\nSurveil buildings for smoke\nDefend areas from invaders, e.g. buildings, cars (some plants have coordinated with ants in this way)\nSculpting/moving things at a very small scale\nBuilding house-size structures with intricate detailing.\nDigging tunnels (e.g. instead of digging up your garden to lay a pipe, maybe ants could dig the hole, then a flexible pipe could be pushed through it)\nBeing used in medication (this already happens, but might happen better if we could communicate with them)\nParticipating in war (attack, guerilla attack, sabotage, intelligence)\nMending things at a small scale, e.g. delivering adhesive material to a crack in a pipe while the water is off\nSurveillance of scents (including which direction a scent is coming from), e.g. drugs, explosives, diseases, people, microbes\nTending other small, useful organisms (‘Leafcutter ants (Atta and Acromyrmex) feed exclusively on a fungus that grows only within their colonies. They continually collect leaves which are taken to the colony, cut into tiny pieces and placed in fungal gardens.’Wikipedia: ‘Leaf cutter ants are sensitive enough to adapt to the fungi’s reaction to different plant material, apparently detecting chemical signals from the fungus. If a particular type of leaf is toxic to the fungus, the colony will no longer collect it…The fungi used by the higher attine ants no longer produce spores. 
These ants fully domesticated their fungal partner 15 million years ago, a process that took 30 million years to complete.[9] Their fungi produce nutritious and swollen hyphal tips (gongylidia) that grow in bundles called staphylae, to specifically feed the ants.’ ‘The ants in turn keep predators away from the aphids and will move them from one feeding location to another. When migrating to a new area, many colonies will take the aphids with them, to ensure a continued supply of honeydew.’ Wikipedia:’Myrmecophilous (ant-loving) caterpillars of the butterfly family Lycaenidae (e.g., blues, coppers, or hairstreaks) are herded by the ants, led to feeding areas in the daytime, and brought inside the ants’ nest at night. The caterpillars have a gland which secretes honeydew when the ants massage them.’’)\nMeasuring hard to access distances (they measure distance as they walk with an internal pedometer)\nKilling plants (lemon ants make ‘devil’s gardens’ by killing all plants other than ‘lemon ant trees’ in an area)\nProducing and delivering nitrogen to plants (‘Isotopic labelling studies suggest that plants also obtain nitrogen from the ants.’ – Wikipedia)\nGet out of our houses before we are driven to expend effort killing them, and similarly for all the other places ants conflict with humans (stinging, eating crops, ..)\n\n ", "url": "https://aiimpacts.org/we-dont-trade-with-ants/", "title": "We don’t trade with ants", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2023-01-10T23:52:13+00:00", "paged_url": "https://aiimpacts.org/feed?paged=2", "authors": ["Katja Grace"], "id": "50b1d43ad79cc0205bc740d806e4e3d6", "summary": []} {"text": "Let’s think about slowing down AI\n\nKatja Grace, 22 December 2022\nAverting doom by not building the doom machine\nIf you fear that someone will build a machine that will seize control of the world and annihilate humanity, then one kind of response is to try to build further machines that will seize control of the world even earlier without destroying it, forestalling the ruinous machine’s conquest. An alternative or complementary kind of response is to try to avert such machines being built at all, at least while the degree of their apocalyptic tendencies is ambiguous. \nThe latter approach seems to me  like the kind of basic and obvious thing worthy of at least consideration, and also in its favor, fits nicely in the genre ‘stuff that it isn’t that hard to imagine happening in the real world’. Yet my impression is that for people worried about extinction risk from artificial intelligence, strategies under the heading ‘actively slow down AI progress’ have historically been dismissed and ignored (though ‘don’t actively speed up AI progress’ is popular).\nThe conversation near me over the years has felt a bit like this: \n\nSome people: AI might kill everyone. We should design a godlike super-AI of perfect goodness to prevent that.\nOthers: wow that sounds extremely ambitious\nSome people: yeah but it’s very important and also we are extremely smart so idk it could work\n\n\n\n[Work on it for a decade and a half]\n\n\n\n\nSome people: ok that’s pretty hard, we give up\nOthers: oh huh shouldn’t we maybe try to stop the building of this dangerous AI? \nSome people: hmm, that would involve coordinating numerous people—we may be arrogant enough to think that we might build a god-machine that can take over the world and remake it as a paradise, but we aren’t delusional\n\nThis seems like an error to me. (And lately, to a bunch of other people.) 
\nI don’t have a strong view on whether anything in the space of ‘try to slow down some AI research’ should be done. But I think a) the naive first-pass guess should be a strong ‘probably’, and b) a decent amount of thinking should happen before writing off everything in this large space of interventions. Whereas customarily the tentative answer seems to be, ‘of course not’ and then the topic seems to be avoided for further thinking. (At least in my experience—the AI safety community is large, and for most things I say here, different experiences are probably had in different bits of it.)\nMaybe my strongest view is that one shouldn’t apply such different standards of ambition to these different classes of intervention. Like: yes, there appear to be substantial difficulties in slowing down AI progress to good effect. But in technical alignment, mountainous challenges are met with enthusiasm for mountainous efforts. And it is very non-obvious that the scale of difficulty here is much larger than that involved in designing acceptably safe versions of machines capable of taking over the world before anyone else in the world designs dangerous versions. \nI’ve been talking about this with people over the past many months, and have accumulated an abundance of reasons for not trying to slow down AI, most of which I’d like to argue about at least a bit. My impression is that arguing in real life has coincided with people moving toward my views.\nQuick clarifications\nFirst, to fend off misunderstanding—\n\nI take ‘slowing down dangerous AI’ to include any of: reducing the speed at which AI progress is made in general, e.g. as would occur if general funding for AI declined.shifting AI efforts from work leading more directly to risky outcomes to other work, e.g. as might occur if there was broadscale concern about very large AI models, and people and funding moved to other projects.Halting categories of work until strong confidence in its safety is possible, e.g. as would occur if AI researchers agreed that certain systems posed catastrophic risks and should not be developed until they did not. (This might mean a permanent end to some systems, if they were intrinsically unsafe.)(So in particular, I’m including both actions whose direct aim is slowness in general, and actions whose aim is requiring safety before specific developments, which implies slower progress.)\nI do think there is serious attention on some versions of these things, generally under other names. I see people thinking about ‘differential progress’ (b. above), and strategizing about coordination to slow down AI at some point in the future (e.g. at ‘deployment’). And I think a lot of consideration is given to avoiding actively speeding up AI progress. What I’m saying is missing are, a) consideration of actively working to slow down AI now, and b) shooting straightforwardly to ‘slow down AI’, rather than wincing from that and only considering examples of it that show up under another conceptualization (perhaps this is an unfair diagnosis).\nAI Safety is a big community, and I’ve only ever been seeing a one-person window into it, so maybe things are different e.g. in DC, or in different conversations in Berkeley. I’m just saying that for my corner of the world, the level of disinterest in this has been notable, and in my view misjudged.\n\nWhy not slow down AI? Why not consider it?\nOk, so if we tentatively suppose that this topic is worth even thinking about, what do we think? Is slowing down AI a good idea at all? 
Are there great reasons for dismissing it?\nScott Alexander wrote a post a little while back raising reasons to dislike the idea, roughly:\n\nDo you want to lose an arms race? If the AI safety community tries to slow things down, it will disproportionately slow down progress in the US, and then people elsewhere will go fast and get to be the ones whose competence determines whether the world is destroyed, and whose values determine the future if there is one. Similarly, if AI safety people criticize those contributing to AI progress, it will mostly discourage the most friendly and careful AI capabilities companies, and the reckless ones will get there first.\nOne might contemplate ‘coordination’ to avoid such morbid races. But coordinating anything with the whole world seems wildly tricky. For instance, some countries are large, scary, and hard to talk to.\nAgitating for slower AI progress is ‘defecting’ against the AI capabilities folks, who are good friends of the AI safety community, and their friendship is strategically valuable for ensuring that safety is taken seriously in AI labs (as well as being non-instrumentally lovely! Hi AI capabilities friends!). \n\nOther opinions I’ve heard, some of which I’ll address:\n\nSlowing AI progress is futile: for all your efforts you’ll probably just die a few years later\nCoordination based on convincing people that AI risk is a problem is absurdly ambitious. It’s practically impossible to convince AI professors of this, let alone any real fraction of humanity, and you’d need to convince a massive number of people.\nWhat are we going to do, build powerful AI never and die when the Earth is eaten by the sun?\nIt’s actually better for safety if AI progress moves fast. This might be because the faster AI capabilities work happens, the smoother AI progress will be, and this is more important than the duration of the period. Or speeding up progress now might force future progress to be correspondingly slower. Or because safety work is probably better when done just before building the relevantly risky AI, in which case the best strategy might be to get as close to dangerous AI as possible and then stop and do safety work. Or if safety work is very useless ahead of time, maybe delay is fine, but there is little to gain by it. \nSpecific routes to slowing down AI are not worth it. For instance, avoiding working on AI capabilities research is bad because it’s so helpful for learning on the path to working on alignment. And AI safety people working in AI capabilities can be a force for making safer choices at those companies.\nAdvanced AI will help enough with other existential risks as to represent a net lowering of existential risk overall.1\nRegulators are ignorant about the nature of advanced AI (partly because it doesn’t exist, so everyone is ignorant about it). Consequently they won’t be able to regulate it effectively, and bring about desired outcomes.\n\nMy impression is that there are also less endorsable or less altruistic or more silly motives floating around for this attention allocation. Some things that have come up at least once in talking to people about this, or that seem to be going on:\n\n\nAdvanced AI might bring manifold wonders, e.g. long lives of unabated thriving. Getting there a bit later is fine for posterity, but for our own generation it could mean dying as our ancestors did while on the cusp of a utopian eternity. Which would be pretty disappointing. 
For a person who really believes in this future, it can be tempting to shoot for the best scenario—humanity builds strong, safe AI in time to save this generation—rather than the scenario where our own lives are inevitably lost.\nSometimes people who have a heartfelt appreciation for the flourishing that technology has afforded so far can find it painful to be superficially on the side of Luddism here.\nFiguring out how minds work well enough to create new ones out of math is an incredibly deep and interesting intellectual project, which feels right to take part in. It can be hard to intuitively feel like one shouldn’t do it. (Illustration from a co-founder of modern computational reinforcement learning:)\n\nIt will be the greatest intellectual achievement of all time. An achievement of science, of engineering, and of the humanities, whose significance is beyond humanity, beyond life, beyond good and bad. — Richard Sutton (@RichardSSutton) September 29, 2022\n\nIt is uncomfortable to contemplate projects that would put you in conflict with other people. Advocating for slower AI feels like trying to impede someone else’s project, which feels adversarial and can feel like it has a higher burden of proof than just working on your own thing.\n‘Slow-down-AGI’ sends people’s minds to e.g. industrial sabotage or terrorism, rather than more boring courses, such as, ‘lobby for labs developing shared norms for when to pause deployment of models’. This understandably encourages dropping the thought as soon as possible.\nMy weak guess is that there’s a kind of bias at play in AI risk thinking in general, where any force that isn’t zero is taken to be arbitrarily intense. Like, if there is pressure for agents to exist, there will arbitrarily quickly be arbitrarily agentic things. If there is a feedback loop, it will be arbitrarily strong. Here, if stalling AI can’t be forever, then it’s essentially zero time. If a regulation won’t obstruct every dangerous project, then it is worthless. Any finite economic disincentive for dangerous AI is nothing in the face of the omnipotent economic incentives for AI. I think this is a bad mental habit: things in the real world often come down to actual finite quantities. This is very possibly an unfair diagnosis. (I’m not going to discuss this later; this is pretty much what I have to say.)\nI sense an assumption that slowing progress on a technology would be a radical and unheard-of move.\nI agree with lc that there seems to have been a quasi-taboo on the topic, which perhaps explains a lot of the non-discussion, though still calls for its own explanation. I think it suggests that concerns about uncooperativeness play a part, and the same for thinking of slowing down AI as centrally involving antisocial strategies.
But my sense is that many of the above reasons are at least somewhat wrong, and motives somewhat misguided, so I want to argue about a lot of them in turn, including both arguments and vague motivational themes.\nThe mundanity of the proposal\nRestraint is not radical\nThere seems to be a common thought that technology is a kind of inevitable path along which the world must tread, and that trying to slow down or avoid any part of it would be both futile and extreme.2 \nBut empirically, the world doesn’t pursue every technology—it barely pursues any technologies.\nSucky technologies\nFor a start, there are many machines that there is no pressure to make, because they have no value. Consider a machine that sprays shit in your eyes. We can technologically do that, but probably nobody has ever built that machine. \nThis might seem like a stupid example, because no serious ‘technology is inevitable’ conjecture is going to claim that totally pointless technologies are inevitable. But if you are sufficiently pessimistic about AI, I think this is the right comparison: if there are kinds of AI that would cause huge net costs to their creators if created, according to our best understanding, then they are at least as useless to make as the ‘spray shit in your eyes’ machine. We might accidentally make them due to error, but there is not some deep economic force pulling us to make them. If unaligned superintelligence destroys the world with high probability when you ask it to do a thing, then this is the category it is in, and it is not strange for its designs to just rot in the scrap-heap, with the machine that sprays shit in your eyes and the machine that spreads caviar on roads.\nOk, but maybe the relevant actors are very committed to being wrong about whether unaligned superintelligence would be a great thing to deploy. Or maybe you think the situation is less immediately dire and building existentially risky AI really would be good for the people making decisions (e.g. because the costs won’t arrive for a while, and the people care a lot about a shot at scientific success relative to a chunk of the future). If the apparent economic incentives are large, are technologies unavoidable?\nExtremely valuable technologies\nIt doesn’t look like it to me. Here are a few technologies which I’d guess have substantial economic value, where research progress or uptake appears to be drastically slower than it could be, for reasons of concern about safety or ethics3:\n\nHuge amounts of medical research, including really important medical research e.g. The FDA banned human trials of strep A vaccines from the 70s to the 2000s, in spite of 500,000 global deaths every year. A lot of people also died while covid vaccines went through all the proper trials. \nNuclear energy\nFracking\nVarious genetics things: genetic modification of foods, gene drives, early recombinant DNA researchers famously organized a moratorium and then ongoing research guidelines including prohibition of certain experiments (see the Asilomar Conference)\nNuclear, biological, and maybe chemical weapons (or maybe these just aren’t useful)\nVarious human reproductive innovation: cloning of humans, genetic manipulation of humans (a notable example of an economically valuable technology that is to my knowledge barely pursued across different countries, without explicit coordination between those countries, even though it would make those countries more competitive. 
Someone used CRISPR on babies in China, but was imprisoned for it.)\nRecreational drug development\nGeoengineering\nMuch of science about humans? I recently ran this survey, and was reminded how encumbering ethical rules are for even incredibly innocuous research. As far as I could tell the EU now makes it illegal to collect data in the EU unless you promise to delete the data from anywhere that it might have gotten to if the person who gave you the data wishes for that at some point. In all, dealing with this and IRB-related things added maybe more than half of the effort of the project. Plausibly I misunderstand the rules, but I doubt other researchers are radically better at figuring them out than I am.\nThere are probably examples from fields considered distasteful or embarrassing to associate with, but it’s hard as an outsider to tell which fields are genuinely hopeless versus erroneously considered so. If there are economically valuable health interventions among those considered wooish, I imagine they would be much slower to be identified and pursued by scientists with good reputations than a similarly promising technology not marred in that way. Scientific research into intelligence is more clearly slowed by stigma, but it is less clear to me what the economically valuable upshot would be.\n(I think there are many other things that could be in this list, but I don’t have time to review them at the moment. This page might collect more of them in future.)\n\nIt seems to me that intentionally slowing down progress in technologies to give time for even probably-excessive caution is commonplace. (And this is just looking at things slowed down over caution or ethics specifically—probably there are also other reasons things get slowed down.)\nFurthermore, among valuable technologies that nobody is especially trying to slow down, it seems common enough for progress to be massively slowed by relatively minor obstacles, which is further evidence for a lack of overpowering strength of the economic forces at play. For instance, Fleming first took notice of mold’s effect on bacteria in 1928, but nobody took a serious, high-effort shot at developing it as a drug until 1939.4 Furthermore, in the thousands of years preceding these events, various people noticed numerous times that mold, other fungi or plants inhibited bacterial growth, but didn’t exploit this observation even enough for it not to be considered a new discovery in the 1920s. Meanwhile, people dying of infection was quite a thing. In 1930 about 300,000 Americans died of bacterial illnesses per year (around 250/100k).\nMy guess is that people make real choices about technology, and they do so in the face of economic forces that are feebler than commonly thought. \nRestraint is not terrorism, usually\nI think people have historically imagined weird things when they think of ‘slowing down AI’. I posit that their central image is sometimes terrorism (which understandably they don’t want to think about for very long), and sometimes some sort of implausibly utopian global agreement.\nHere are some other things that ‘slow down AI capabilities’ could look like (where the best positioned person to carry out each one differs, but if you are not that person, you could e.g. talk to someone who is):\n\nDon’t actively forward AI progress, e.g. 
by devoting your life or millions of dollars to it (this one is often considered already)\nTry to convince researchers, funders, hardware manufacturers, institutions etc that they too should stop actively forwarding AI progress\nTry to get any of those people to stop actively forwarding AI progress even if they don’t agree with you: through negotiation, payments, public reproof, or other activistic means.\nTry to get the message to the world that AI is heading toward being seriously endangering. If AI progress is broadly condemned, this will trickle into myriad decisions: job choices, lab policies, national laws. To do this, for instance produce compelling demos of risk, agitate for stigmatization of risky actions, write science fiction illustrating the problems broadly and evocatively (I think this has actually been helpful repeatedly in the past), go on TV, write opinion pieces, help organize and empower the people who are already concerned, etc.\nHelp organize the researchers who think their work is potentially omnicidal into coordinated action on not doing it.\nMove AI resources from dangerous research to other research. Move investments from projects that lead to large but poorly understood capabilities, to projects that lead to understanding these things e.g. theory before scaling (see differential technological development in general5).\nFormulate specific precautions for AI researchers and labs to take in different well-defined future situations, Asilomar Conference style. These could include more intense vetting by particular parties or methods, modifying experiments, or pausing lines of inquiry entirely. Organize labs to coordinate on these.\nReduce available compute for AI, e.g. via regulation of production and trade, seller choices, purchasing compute, trade strategy.\nAt labs, choose policies that slow down other labs, e.g. reduce public helpful research outputs\nAlter the publishing system and incentives to reduce research dissemination. E.g. A journal verifies research results and releases the fact of their publication without any details, maintains records of research priority for later release, and distributes funding for participation. (This is how Szilárd and co. arranged the mitigation of 1940s nuclear research helping Germany, except I’m not sure if the compensatory funding idea was used.6)\nThe above actions would be taken through choices made by scientists, or funders, or legislators, or labs, or public observers, etc. Communicate with those parties, or help them act.\n\nCoordination is not miraculous world government, usually\nThe common image of coordination seems to be explicit, centralized, involving of every party in the world, and something like cooperating on a prisoners’ dilemma: incentives push every rational party toward defection at all times, yet maybe through deontological virtues or sophisticated decision theories or strong international treaties, everyone manages to not defect for enough teetering moments to find another solution.\nThat is a possible way coordination could be. (And I think one that shouldn’t be seen as so hopeless—the world has actually coordinated on some impressive things, e.g. nuclear non-proliferation.) But if what you want is for lots of people to coincide in doing one thing when they might have done another, then there are quite a few ways of achieving that. \nConsider some other case studies of coordinated behavior:\n\nNot eating sand. The whole world coordinates to barely eat any sand at all. How do they manage it? 
It is actually not in almost anyone’s interest to eat sand, so the mere maintenance of sufficient epistemological health to have this widely recognized does the job.\nEschewing bestiality: probably some people think bestiality is moral, but enough don’t that engaging in it would risk huge stigma. Thus the world coordinates fairly well on doing very little of it.\nNot wearing Victorian attire on the streets: this is similar but with no moral blame involved. Historic dress is arguably often more aesthetic than modern dress, but even people who strongly agree find it unthinkable to wear it in general, and assiduously avoid it except for when they have ‘excuses’ such as a special party. This is a very strong coordination against what appears to otherwise be a ubiquitous incentive (to be nicer to look at). As far as I can tell, it’s powered substantially by the fact that it is ‘not done’ and would now be weird to do otherwise. (Which is a very general-purpose mechanism.)\nPolitical correctness: public discourse has strong norms about what it is okay to say, which do not appear to derive from a vast majority of people agreeing about this (as with bestiality say). New ideas about what constitutes being politically correct sometimes spread widely. This coordinated behavior seems to be roughly due to decentralized application of social punishment, from both a core of proponents, and from people who fear punishment for not punishing others. Then maybe also from people who are concerned by non-adherence to what now appears to be the norm given the actions of the others. This differs from the above examples, because it seems like it could persist even with a very small set of people agreeing with the object-level reasons for a norm. If failing to advocate for the norm gets you publicly shamed by advocates, then you might tend to advocate for it, making the pressure stronger for everyone else. \n\nThese are all cases of very broadscale coordination of behavior, none of which involve prisoners’ dilemma type situations, or people making explicit agreements which they then have an incentive to break. They do not involve centralized organization of huge multilateral agreements. Coordinated behavior can come from everyone individually wanting to make a certain choice for correlated reasons, or from people wanting to do things that those around them are doing, or from distributed behavioral dynamics such as punishment of violations, or from collaboration in thinking about a topic.\nYou might think they are weird examples that aren’t very related to AI. I think, a) it’s important to remember the plethora of weird dynamics that actually arise in human group behavior and not get carried away theorizing about AI in a world drained of everything but prisoners’ dilemmas and binding commitments, and b) the above are actually all potentially relevant dynamics here.\nIf AI in fact poses a large existential risk within our lifetimes, such that it is net bad for any particular individual, then the situation in theory looks a lot like that in the ‘avoiding eating sand’ case. It’s an option that a rational person wouldn’t want to take if they were just alone and not facing any kind of multi-agent situation. If AI is that dangerous, then not taking this inferior option could largely come from a coordination mechanism as simple as distribution of good information. 
(You still need to deal with irrational people and people with unusual values.)
But even failing coordinated caution from ubiquitous insight into the situation, other models might work. For instance, if there came to be somewhat widespread concern that AI research is bad, that might substantially lessen participation in it, beyond the set of people who are concerned, via mechanisms similar to those described above. Or it might give rise to a wide crop of local regulation, enforcing whatever behavior is deemed acceptable. Such regulation need not be centrally organized across the world to serve the purpose of coordinating the world, as long as it grew up in different places similarly. Which might happen because different locales have similar interests (all rational governments should be similarly concerned about losing power to automated power-seeking systems with unverifiable goals), or because—as with individuals—there are social dynamics which support norms arising in a non-centralized way.
The arms race model and its alternatives
Ok, maybe in principle you might hope to coordinate to not do self-destructive things, but realistically, if the US tries to slow down, won’t China or Facebook or someone less cautious take over the world?
Let’s be more careful about the game we are playing, game-theoretically speaking.
The arms race
What is an arms race, game-theoretically? It’s an iterated prisoners’ dilemma, seems to me. Each round looks something like this:

Player 1 chooses a row, Player 2 chooses a column, and the resulting payoffs are listed in each cell, for {Player 1, Player 2}
In this example, building weapons costs one unit. If anyone ends the round with more weapons than anyone else, they take all of their stuff (ten units).
In a single round of the game it’s always better to build weapons than not (assuming your actions are devoid of implications about your opponent’s actions). And it’s always better to get the hell out of this game.
This is not much like what the current AI situation looks like, if you think AI poses a substantial risk of destroying the world.
The suicide race
A closer model: as above except if anyone chooses to build, everything is destroyed (everyone loses all their stuff—ten units of value—as well as one unit if they built).

This is importantly different from the classic ‘arms race’ in that pressing the ‘everyone loses now’ button isn’t an equilibrium strategy.
That is: for anyone who thinks powerful misaligned AI represents near-certain death, the existence of other possible AI builders is not any reason to ‘race’.
But few people are that pessimistic. How about a milder version where there’s a good chance that the players ‘align the AI’?
The safety-or-suicide race
Ok, let’s do a game like the last but where if anyone builds, everything is only maybe destroyed (minus ten to all), and in the case of survival, everyone returns to the original arms race fun of redistributing stuff based on who built more than whom (+10 to a builder and -10 to a non-builder if there is one of each). So if you build AI alone, and get lucky on the probabilistic apocalypse, you can still win big.
Let’s take 50% as the chance of doom if any building happens.
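To make the payoffs concrete, here is a minimal sketch in Python of my reading of these three games, using the made-up numbers above (building costs one unit; a lone surviving builder takes ten from the other player; doom costs everyone ten). It prints each player’s best response as the chance of doom varies:

```python
# Expected payoff to one player in a single round, for the toy "race" games
# described above. Assumptions: building costs 1 unit; if anyone builds, doom
# happens with probability p_doom and costs everyone 10; if the world survives
# and exactly one player built, the builder takes 10 from the non-builder.

def expected_payoff(i_build: bool, they_build: bool, p_doom: float) -> float:
    payoff = -1.0 if i_build else 0.0       # cost of building
    if i_build or they_build:               # any building risks doom
        payoff += p_doom * -10.0            # everyone loses their stuff
        if i_build and not they_build:      # survive and win the race
            payoff += (1 - p_doom) * 10.0
        elif they_build and not i_build:    # survive and lose the race
            payoff += (1 - p_doom) * -10.0
    return payoff

# p_doom = 0 is the arms race, 1 is the suicide race, 0.5 the game discussed next.
for p_doom in (0.0, 0.5, 1.0):
    print(f"p(doom) = {p_doom}")
    for they_build in (True, False):
        build = expected_payoff(True, they_build, p_doom)
        passing = expected_payoff(False, they_build, p_doom)
        best = "build" if build > passing else "pass"
        print(f"  opponent {'builds' if they_build else 'passes'}: "
              f"build {build:+.1f}, pass {passing:+.1f} -> best response: {best}")
```

With these numbers, a 0% chance of doom reproduces the arms race (build no matter what) and a 100% chance reproduces the suicide race (never build); the 50% case is the one considered next.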
With a 50% chance of doom, we have a game whose expected payoffs are halfway between those in the last two games:

(These are expected payoffs—the minus one unit return to building alone comes from the one unit cost of building, plus half a chance of losing ten in an extinction event and half a chance of taking ten from your opponent in a world takeover event.)
Now you want to do whatever the other player is doing: build if they’ll build, pass if they’ll pass.
If the odds of destroying the world were very low, this would become the original arms race, and you’d always want to build. If very high, it would become the suicide race, and you’d never want to build. What the probabilities would have to be in the real world to put you in each of these regimes is another matter, because all these parameters are made up (the downside of human extinction is not 10x the research costs of building powerful AI, for instance).
But my point stands: even in terms of simplish models, it’s very non-obvious that we are in or near an arms race. And therefore, very non-obvious that racing to build advanced AI faster is even promising at a first pass.
In less game-theoretic terms: if you don’t seem anywhere near solving alignment, then racing as hard as you can to be the one on whom it falls to have solved alignment—especially if that means having less time to do so, though I haven’t discussed that here—is probably unstrategic. Having more ideologically pro-safety AI designers win an ‘arms race’ against less concerned teams is futile if you don’t have a way for such people to implement enough safety to actually not die, which seems like a very live possibility. (Robby Bensinger and maybe Andrew Critch somewhere make similar points.)
Conversations with my friends on this kind of topic can go like this:

Me: There’s no real incentive to race if the prize is mutual death.
Them: Sure, but it isn’t—if there’s a sliver of hope of surviving unaligned AI, and if your side taking control in that case is a bit better in expectation, and if they are going to build powerful AI anyway, then it’s worth racing. The whole future is on the line!
Me: Wouldn’t you still be better off directing your own efforts to safety, since your safety efforts will also help everyone end up with a safe AI?
Them: It will probably only help them somewhat—you don’t know if the other side will use your safety research. But also, it’s not just that they have less safety research. Their values are probably worse, by your lights.
Me: If they succeed at alignment, are foreign values really worse than local ones? Probably any humans with vast intelligence at hand have a similar shot at creating a glorious human-ish utopia, no?
Them: No, even if you’re right that being similarly human gets you to similar values in the end, the other parties might be more foolish than our side, and lock in7 some poorly thought-through version of their values that they want at the moment, or even if all projects would be so foolish, our side might have better poorly thought-through values to lock in, as well as being more likely to use safety ideas at all.
Even if racing is very likely to lead to death, and survival is very likely to lead to squandering most of the value, in that sliver of happy worlds so much is at stake in whether it is us or someone else doing the squandering!
Me: Hmm, seems complicated, I’m going to need paper for this.

The complicated race/anti-race
Here is a spreadsheet of models you can make a copy of and play with.
The first model is like this:

Each player divides their effort between safety and capabilities
One player ‘wins’, i.e. builds ‘AGI’ (artificial general intelligence) first.
P(Alice wins) is a logistic function of Alice’s capabilities investment relative to Bob’s
Each player’s total safety is their own safety investment plus a fraction of the other’s safety investment.
For each player there is some distribution of outcomes if they achieve safety, and a set of outcomes if they do not, which takes into account e.g. their proclivities for enacting stupid near-term lock-ins.
The outcome is a distribution over winners and states of alignment, each of which is a distribution of worlds (e.g. utopia, near-term good lock-in, ...)
That all gives us a number of utils (Delicious utils!)

The second model is the same except that instead of dividing effort between safety and capabilities, you choose a speed, and the amount of alignment being done by each party is an exogenous parameter.
These models probably aren’t very good, but so far support a key claim I want to make here: it’s pretty non-obvious whether one should go faster or slower in this kind of scenario—it’s sensitive to a lot of different parameters in plausible ranges.
Furthermore, I don’t think the results of quantitative analysis match people’s intuitions here.
For example, here’s a situation which I think sounds intuitively like a you-should-race world, but where in the first model above, you should actually go as slowly as possible (this should be the one plugged into the spreadsheet now):

AI is pretty safe: unaligned AGI has a mere 7% chance of causing doom, plus a further 7% chance of causing short-term lock-in of something mediocre
Your opponent risks bad lock-in: If there’s a ‘lock-in’ of something mediocre, your opponent has a 5% chance of locking in something actively terrible, whereas you’ll always pick the good mediocre lock-in world (and mediocre lock-ins are either 5% as good as utopia or -5% as good)
Your opponent risks messing up utopia: In the event of aligned AGI, you will reliably achieve the best outcome, whereas your opponent has a 5% chance of ending up in a ‘mediocre bad’ scenario then too.
Safety investment obliterates your chance of getting to AGI first: moving from no safety at all to full safety means you go from a 50% chance of being first to a 0% chance
Your opponent is racing: Your opponent is investing everything in capabilities and nothing in safety
Safety work helps others at a steep discount: your safety work contributes 50% to the other player’s safety

Your best bet here (on this model) is still to maximize safety investment. Why? Because by aggressively pursuing safety, you can get the other side halfway to full safety, which is worth a lot more than the lost chance of winning.
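For readers who prefer code to spreadsheets, here is a rough sketch of the shape of that first model. The logistic steepness, the spillover fraction, and the outcome values are my own placeholder guesses rather than the spreadsheet’s numbers, and the outcome distributions are collapsed into single expected values, so treat it as an illustration of the structure, not a reproduction:

```python
import math

def expected_value(my_safety_frac: float, their_safety_frac: float,
                   spillover: float = 0.5) -> float:
    """Sketch of the first model: effort split -> win probability -> safety -> value.
    Outcome values (utopia = 1, doom = 0) are placeholder guesses."""
    my_cap, their_cap = 1 - my_safety_frac, 1 - their_safety_frac
    # Chance I build AGI first: logistic in relative capabilities investment.
    p_i_win = 1 / (1 + math.exp(-4 * (my_cap - their_cap)))
    # Each side's effective safety includes a fraction of the other's safety work.
    my_safety = min(1.0, my_safety_frac + spillover * their_safety_frac)
    their_safety = min(1.0, their_safety_frac + spillover * my_safety_frac)
    # Value if a given side wins: aligned -> near-utopia; unaligned -> a guessed
    # average over doom, mediocre lock-ins, and muddling through.
    my_value = my_safety * 1.0 + (1 - my_safety) * 0.3
    their_value = their_safety * 0.9 + (1 - their_safety) * 0.2
    return p_i_win * my_value + (1 - p_i_win) * their_value

# If the opponent races (all capabilities, no safety), how does my split matter?
for s in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"my safety fraction {s:.2f}: expected value {expected_value(s, 0.0):.3f}")
```

With these placeholder numbers, putting everything into safety still comes out best against an all-capabilities opponent, because half of your safety work transfers to whoever wins; other parameter settings change the answer, which is the sensitivity being claimed. Either way, in the example above the shared safety is worth more than the forfeited chance of winning.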
Especially since if you ‘win’, you do so without much safety, and your victory without safety is worse than your opponent’s victory with safety, even if that too is far from perfect.\nSo if you are in a situation in this space, and the other party is racing, it’s not obvious if it is even in your narrow interests within the game to go faster at the expense of safety, though it may be.\nThese models are flawed in many ways, but I think they are better than the intuitive models that support arms-racing. My guess is that the next better still models remain nuanced.\nOther equilibria and other games\nEven if it would be in your interests to race if the other person were racing, ‘(do nothing, do nothing)’ is often an equilibrium too in these games. At least for various settings of the parameters. It doesn’t necessarily make sense to do nothing in the hope of getting to that equilibrium if you know your opponent to be mistaken about that and racing anyway, but in conjunction with communicating with your ‘opponent’, it seems like a theoretically good strategy.\nThis has all been assuming the structure of the game. I think the traditional response to an arms race situation is to remember that you are in a more elaborate world with all kinds of unmodeled affordances, and try to get out of the arms race. \nBeing friends with risk-takers\nCaution is cooperative\nAnother big concern is that pushing for slower AI progress is ‘defecting’ against AI researchers who are friends of the AI safety community. \nFor instance Steven Byrnes:\n\n“I think that trying to slow down research towards AGI through regulation would fail, because everyone (politicians, voters, lobbyists, business, etc.) likes scientific research and technological development, it creates jobs, it cures diseases, etc. etc., and you’re saying we should have less of that. So I think the effort would fail, and also be massively counterproductive by making the community of AI researchers see the community of AGI safety / alignment people as their enemies, morons, weirdos, Luddites, whatever.”\n\n(Also a good example of the view criticized earlier, that regulation of things that create jobs and cure diseases just doesn’t happen.)\nOr Eliezer Yudkowsky, on worry that spreading fear about AI would alienate top AI labs:\n\nThis is the primary reason I didn't, and told others not to, earlier connect the point about human extinction from AGI with AI labs. Kerry has correctly characterized the position he is arguing against, IMO. I myself estimate the public will be toothless vs AGI lab heads.— Eliezer Yudkowsky (@ESYudkowsky) August 4, 2022\n\nI don’t think this is a natural or reasonable way to see things, because:\n\nThe researchers themselves probably don’t want to destroy the world. Many of them also actually agree that AI is a serious existential risk. So in two natural ways, pushing for caution is cooperative with many if not most AI researchers.\nAI researchers do not have a moral right to endanger the world, that someone would be stepping on by requiring that they move more cautiously. Like, why does ‘cooperation’ look like the safety people bowing to what the more reckless capabilities people want, to the point of fearing to represent their actual interests, while the capabilities people uphold their side of the ‘cooperation’ by going ahead and building dangerous AI? This situation might make sense as a natural consequence of different people’s power in the situation. 
But then don’t call it a ‘cooperation’, from which safety-oriented parties would be dishonorably ‘defecting’ were they to consider exercising any power they did have. \n\nIt could be that people in control of AI capabilities would respond negatively to AI safety people pushing for slower progress. But that should be called ‘we might get punished’ not ‘we shouldn’t defect’. ‘Defection’ has moral connotations that are not due. Calling one side pushing for their preferred outcome ‘defection’ unfairly disempowers them by wrongly setting commonsense morality against them.\nAt least if it is the safety side. If any of the available actions are ‘defection’ that the world in general should condemn, I claim that it is probably ‘building machines that will plausibly destroy the world, or standing by while it happens’. \n(This would be more complicated if the people involved were confident that they wouldn’t destroy the world and I merely disagreed with them. But about half of surveyed researchers are actually more pessimistic than me. And in a situation where the median AI researcher thinks the field has a 5-10% chance of causing human extinction, how confident can any responsible person be in their own judgment that it is safe?)  \nOn top of all that, I worry that highlighting the narrative that wanting more cautious progress is defection is further destructive, because it makes it more likely that AI capabilities people see AI safety people as thinking of themselves as betraying AI researchers, if anyone engages in any such efforts. Which makes the efforts more aggressive. Like, if every time you see friends, you refer to it as ‘cheating on my partner’, your partner may reasonably feel hurt by your continual desire to see friends, even though the activity itself is innocuous.\n‘We’ are not the US, ‘we’ are not the AI safety community\n“If ‘we’ try to slow down AI, then the other side might win.” “If ‘we’ ask for regulation, then it might harm ‘our’ relationships with AI capabilities companies.” Who are these ‘we’s? Why are people strategizing for those groups in particular? \nEven if slowing AI were uncooperative, and it were important for the AI Safety community to cooperate with the AI capabilities community, couldn’t one of the many people not in the AI Safety community work on it? \nI have a longstanding irritation with thoughtless talk about what ‘we’ should do, without regard for what collective one is speaking for. So I may be too sensitive about it here. But I think confusions arising from this have genuine consequences.\nI think when people say ‘we’ here, they generally imagine that they are strategizing on behalf of, a) the AI safety community, b) the USA, c) themselves or d) they and their readers. But those are a small subset of people, and not even obviously the ones the speaker can most influence (does the fact that you are sitting in the US really make the US more likely to listen to your advice than e.g. Estonia? Yeah probably on average, but not infinitely much.) If these naturally identified-with groups don’t have good options, that hardly means there are no options to be had, or to be communicated to other parties. Could the speaker speak to a different ‘we’? Maybe someone in the ‘we’ the speaker has in mind knows someone not in that group? 
If there is a strategy for anyone in the world, and you can talk, then there is probably a strategy for you.\nThe starkest appearance of error along these lines to me is in writing off the slowing of AI as inherently destructive of relations between the AI safety community and other AI researchers. If we grant that such activity would be seen as a betrayal (which seems unreasonable to me, but maybe), surely it could only be a betrayal if carried out by the AI safety community. There are quite a lot of people who aren’t in the AI safety community and have a stake in this, so maybe some of them could do something. It seems like a huge oversight to give up on all slowing of AI progress because you are only considering affordances available to the AI Safety Community. \nAnother example: if the world were in the basic arms race situation sometimes imagined, and the United States would be willing to make laws to mitigate AI risk, but could not because China would barge ahead, then that means China is in a great place to mitigate AI risk. Unlike the US, China could propose mutual slowing down, and the US would go along. Maybe it’s not impossible to communicate this to relevant people in China. \nAn oddity of this kind of discussion which feels related is the persistent assumption that one’s ability to act is restricted to the United States. Maybe I fail to understand the extent to which Asia is an alien and distant land where agency doesn’t apply, but for instance I just wrote to like a thousand machine learning researchers there, and maybe a hundred wrote back, and it was a lot like interacting with people in the US.\nI’m pretty ignorant about what interventions will work in any particular country, including the US, but I just think it’s weird to come to the table assuming that you can essentially only affect things in one country. Especially if the situation is that you believe you have unique knowledge about what is in the interests of people in other countries. Like, fair enough I would be deal-breaker-level pessimistic if you wanted to get an Asian government to elect you leader or something. But if you think advanced AI is highly likely to destroy the world, including other countries, then the situation is totally different. If you are right, then everyone’s incentives are basically aligned. \nI more weakly suspect some related mental shortcut is misshaping the discussion of arms races in general. The thought that something is a ‘race’ seems much stickier than alternatives, even if the true incentives don’t really make it a race. Like, against the laws of game theory, people sort of expect the enemy to try to believe falsehoods, because it will better contribute to their racing. And this feels like realism. The uncertain details of billions of people one barely knows about, with all manner of interests and relationships, just really wants to form itself into an ‘us’ and a ‘them’ in zero-sum battle. This is a mental shortcut that could really kill us.\nMy impression is that in practice, for many of the technologies slowed down for risk or ethics, mentioned in section ‘Extremely valuable technologies’ above, countries with fairly disparate cultures have converged on similar approaches to caution. 
I take this as evidence that none of ethical thought, social influence, political power, or rationality are actually very siloed by country, and in general the ‘countries in contest’ model of everything isn’t very good.
Notes on tractability
Convincing people doesn’t seem that hard
When I say that ‘coordination’ can just look like popular opinion punishing an activity, or that other countries don’t have much real incentive to build machines that will kill them, I think a common objection is that convincing people of the real situation is hopeless. The picture seems to be that the argument for AI risk is extremely sophisticated and only able to be appreciated by the most elite of intellectual elites—e.g. it’s hard enough to convince professors on Twitter, so surely the masses are beyond its reach, and foreign governments too.
This doesn’t match my overall experience on various fronts.
Some observations:

The median surveyed ML researcher seems to think AI will destroy humanity with 5-10% chance, as I mentioned
Often people are already intellectually convinced but haven’t integrated that into their behavior, and it isn’t hard to help them organize to act on their tentative beliefs
As noted by Scott, a lot of AI safety people have gone into AI capabilities including running AI capabilities orgs, so those people presumably consider AI to be risky already
I don’t remember ever having any trouble discussing AI risk with random strangers. Sometimes they are also fairly worried (e.g. a makeup artist at Sephora gave an extended rant about the dangers of advanced AI, and my driver in Santiago excitedly concurred and showed me Homo Deus open on his front seat). The forms of the concerns are probably a bit different from those of the AI Safety community, but I think broadly closer to ‘AI agents are going to kill us all’ than ‘algorithmic bias will be bad’. I can’t remember how many times I have tried this, but pre-pandemic I used to talk to Uber drivers a lot, due to having no idea how to avoid it. I explained AI risk to my therapist recently, as an aside regarding his sense that I might be catastrophizing, and I feel like it went okay, though we may need to discuss again.
My impression is that most people haven’t even come into contact with the arguments that might bring one to agree precisely with the AI safety community. For instance, my guess is that a lot of people assume that someone actually programmed modern AI systems, and if you told them that in fact they are random connections jiggled in a gainful direction unfathomably many times, just as mysterious to their makers, they might also fear misalignment.
Nick Bostrom, Eliezer Yudkowsky, and other early thinkers have had decent success at convincing a bunch of other people to worry about this problem, e.g. me. And to my knowledge, without writing any compelling and accessible account of why one should do so that would take less than two hours to read.
I arrogantly think I could write a broadly compelling and accessible case for AI risk

My weak guess is that immovable AI risk skeptics are concentrated in intellectual circles near the AI risk people, especially on Twitter, and that people with less of a horse in the intellectual status race are more readily like, ‘oh yeah, superintelligent robots are probably bad’. It’s not clear that most people even need convincing that there is a problem, though they don’t seem to consider it the most pressing problem in the world.
(Though all of this may be different in cultures I am more distant from, e.g. in China.) I’m pretty non-confident about this, but skimming survey evidence suggests there is substantial though not overwhelming public concern about AI in the US8.\nDo you need to convince everyone?\nI could be wrong, but I’d guess convincing the ten most relevant leaders of AI labs that this is a massive deal, worth prioritizing, actually gets you a decent slow-down. I don’t have much evidence for this.\nBuying time is big\nYou probably aren’t going to avoid AGI forever, and maybe huge efforts will buy you a couple of years.9 Could that even be worth it? \nSeems pretty plausible:\n\nWhatever kind of other AI safety research or policy work people were doing could be happening at a non-negligible rate per year. (Along with all other efforts to make the situation better—if you buy a year, that’s eight billion extra person years of time, so only a tiny bit has to be spent usefully for this to be big. If a lot of people are worried, that doesn’t seem crazy.)\nGeopolitics just changes pretty often. If you seriously think a big determiner of how badly things go is inability to coordinate with certain groups, then every year gets you non-negligible opportunities for the situation changing in a favorable way. \nPublic opinion can change a lot quickly. If you can only buy one year, you might still be buying a decent shot of people coming around and granting you more years. Perhaps especially if new evidence is actively avalanching in—people changed their minds a lot in February 2020.\nOther stuff happens over time. If you can take your doom today or after a couple of years of random events happening, the latter seems non-negligibly better in general.\n\nIt is also not obvious to me that these are the time-scales on the table. My sense is that things which are slowed down by regulation or general societal distaste are often slowed down much more than a year or two, and Eliezer’s stories presume that the world is full of collectives either trying to destroy the world or badly mistaken about it, which is not a foregone conclusion.\nDelay is probably finite by default \nWhile some people worry that any delay would be so short as to be negligible, others seem to fear that if AI research were halted, it would never start again and we would fail to go to space or something. This sounds so wild to me that I think I’m missing too much of the reasoning to usefully counterargue.\nObstruction doesn’t need discernment\nAnother purported risk of trying to slow things down is that it might involve getting regulators involved, and they might be fairly ignorant about the details of futuristic AI, and so tenaciously make the wrong regulations. Relatedly, if you call on the public to worry about this, they might have inexacting worries that call for impotent solutions and distract from the real disaster.\nI don’t buy it. If all you want is to slow down a broad area of activity, my guess is that ignorant regulations do just fine at that every day (usually unintentionally). In particular, my impression is that if you mess up regulating things, a usual outcome is that many things are randomly slower than hoped. If you wanted to speed a specific thing up, that’s a very different story, and might require understanding the thing in question.\nThe same goes for social opposition. Nobody need understand the details of how genetic engineering works for its ascendancy to be seriously impaired by people not liking it. 
Maybe by their lights it still isn’t optimally undermined yet, but just not liking anything in the vicinity does go a long way.
This has nothing to do with regulation or social shaming specifically. You need to understand much less about a car or a country or a conversation to mess it up than to make it run well. It is a consequence of the general rule that there are many more ways for a thing to be dysfunctional than functional: destruction is easier than creation.
Back at the object level, I tentatively expect efforts to broadly slow down things in the vicinity of AI progress to slow down AI progress on net, even if poorly aimed.
Safety from speed, clout from complicity
Maybe it’s actually better for safety to have AI go fast at present, for various reasons. Notably:

Implementing what can be implemented as soon as possible probably means smoother progress, which is probably safer because a) it makes it harder for one party to shoot ahead of everyone and gain power, and b) people make better choices all around if they are correct about what is going on (e.g. they don’t put trust in systems that turn out to be much more powerful than expected).
If the main thing achieved by slowing down AI progress is more time for safety research, and safety research is more effective when carried out in the context of more advanced AI, and there is a certain amount of slowing down that can be done (e.g. because one is in fact in an arms race but has some lead over competitors), then it might be better to use one’s slowing budget later.
If there is some underlying curve of potential for progress (e.g. if money that might be spent on hardware just grows a certain amount each year), then perhaps if we push ahead now, that will naturally require that progress be slower later, so it won’t affect the overall time to powerful AI, but will mean we spend more time in the informative pre-catastrophic-AI era.
(More things go here I think)

And maybe it’s worth it to work on capabilities research at present, for instance because:

As a researcher, working on capabilities prepares you to work on safety
You think the room where AI happens will afford good options for a person who cares about safety

These all seem plausible. But also plausibly wrong. I don’t know of a decisive analysis of any of these considerations, and am not going to do one here. My impression is that they could basically all go either way.
I am actually particularly skeptical of the final argument, because if you believe what I take to be the normal argument for AI risk—that superhuman artificial agents won’t have acceptable values, and will aggressively manifest whatever values they do have, to the sooner or later annihilation of humanity—then the sentiments of the people turning on such machines seem like a very small factor, so long as they still turn the machines on. And I suspect that ‘having a person with my values doing X’ is commonly overrated. But the world is messier than these models, and I’d still pay a lot to be in the room to try.
Moods and philosophies, heuristics and attitudes
It’s not clear what role these psychological characters should play in a rational assessment of how to act, but I think they do play a role, so I want to argue about them.
Technological choice is not luddism
Some technologies are better than others [citation not needed]. The best pro-technology visions should disproportionately involve awesome technologies and avoid shitty technologies, I claim.
If you think AGI is highly likely to destroy the world, then it is the pinnacle of shittiness as a technology. Being opposed to having it into your techno-utopia is about as luddite as refusing to have radioactive toothpaste there. Colloquially, Luddites are against progress if it comes as technology.10 Even if that’s a terrible position, its wise reversal is not the endorsement of all ‘technology’, regardless of whether it comes as progress.\nNon-AGI visions of near-term thriving\nPerhaps slowing down AI progress means foregoing our own generation’s hope for life-changing technologies. Some people thus find it psychologically difficult to aim for less AI progress (with its real personal costs), rather than shooting for the perhaps unlikely ‘safe AGI soon’ scenario.\nI’m not sure that this is a real dilemma. The narrow AI progress we have seen already—i.e. further applications of current techniques at current scales—seems plausibly able to help a lot with longevity and other medicine for instance. And to the extent AI efforts could be focused on e.g. medically relevant narrow systems over creating agentic scheming gods, it doesn’t sound crazy to imagine making more progress on anti-aging etc as a result (even before taking into account the probability that the agentic scheming god does not prioritize your physical wellbeing as hoped). Others disagree with me here.\nRobust priors vs. specific galaxy-brained models\nThere are things that are robustly good in the world, and things that are good on highly specific inside-view models and terrible if those models are wrong. Slowing dangerous tech development seems like the former, whereas forwarding arms races for dangerous tech between world superpowers seems more like the latter.11 There is a general question of how much to trust your reasoning and risk the galaxy-brained plan.12 But whatever your take on that, I think we should all agree that the less thought you have put into it, the more you should regress to the robustly good actions. Like, if it just occurred to you to take out a large loan to buy a fancy car, you probably shouldn’t do it because most of the time it’s a poor choice. Whereas if you have been thinking about it for a month, you might be sure enough that you are in the rare situation where it will pay off. \nOn this particular topic, it feels like people are going with the specific galaxy-brained inside-view terrible-if-wrong model off the bat, then not thinking about it more. \nCheems mindset/can’t do attitude\nSuppose you have a friend, and you say ‘let’s go to the beach’ to them. Sometimes the friend is like ‘hell yes’ and then even if you don’t have towels or a mode of transport or time or a beach, you make it happen. Other times, even if you have all of those things, and your friend nominally wants to go to the beach, they will note that they have a package coming later, and that it might be windy, and their jacket needs washing. And when you solve those problems, they will note that it’s not that long until dinner time. You might infer that in the latter case your friend just doesn’t want to go to the beach. And sometimes that is the main thing going on! But I think there are also broader differences in attitudes: sometimes people are looking for ways to make things happen, and sometimes they are looking for reasons that they can’t happen. 
This is sometimes called a ‘cheems attitude’, or I like to call it (more accessibly) a ‘can’t do attitude’.\nMy experience in talking about slowing down AI with people is that they seem to have a can’t do attitude. They don’t want it to be a reasonable course: they want to write it off. \nWhich both seems suboptimal, and is strange in contrast with historical attitudes to more technical problem-solving. (As highlighted in my dialogue from the start of the post.)\nIt seems to me that if the same degree of can’t-do attitude were applied to technical safety, there would be no AI safety community because in 2005 Eliezer would have noticed any obstacles to alignment and given up and gone home.\nTo quote a friend on this, what would it look like if we *actually tried*?\nConclusion\nThis has been a miscellany of critiques against a pile of reasons I’ve met for not thinking about slowing down AI progress. I don’t think we’ve seen much reason here to be very pessimistic about slowing down AI, let alone reason for not even thinking about it.\nI could go either way on whether any interventions to slow down AI in the near term are a good idea. My tentative guess is yes, but my main point here is just that we should think about it.\nA lot of opinions on this subject seem to me to be poorly thought through, in error, and to have wrongly repelled the further thought that might rectify them. I hope to have helped a bit here by examining some such considerations enough to demonstrate that there are no good grounds for immediate dismissal. There are difficulties and questions, but if the same standards for ambition were applied here as elsewhere, I think we would see answers and action.\nAcknowledgements\nThanks to Adam Scholl, Matthijs Maas, Joe Carlsmith, Ben Weinstein-Raun, Ronny Fernandez, Aysja Johnson, Jaan Tallinn, Rick Korzekwa, Owain Evans, Andrew Critch, Michael Vassar, Jessica Taylor, Rohin Shah, Jeffrey Heninger, Zach Stein-Perlman, Anthony Aguirre, Matthew Barnett, David Krueger, Harlan Stewart, Rafe Kennedy, Nick Beckstead, Leopold Aschenbrenner, Michaël Trazzi, Oliver Habryka, Shahar Avin, Luke Muehlhauser, Michael Nielsen, Nathan Young and quite a few others for discussion and/or encouragement.\nNotes\n1 I haven’t heard this in recent times, so maybe views have changed. An example of earlier times: Nick Beckstead, 2015: “One idea we sometimes hear is that it would be harmful to speed up the development of artificial intelligence because not enough work has been done to ensure that when very advanced artificial intelligence is created, it will be safe. This problem, it is argued, would be even worse if progress in the field accelerated. However, very advanced artificial intelligence could be a useful tool for overcoming other potential global catastrophic risks. If it comes sooner—and the world manages to avoid the risks that it poses directly—the world will spend less time at risk from these other factors….I found that speeding up advanced artificial intelligence—according to my simple interpretation of these survey results—could easily result in reduced net exposure to the most extreme global catastrophic risks…”\n2 This is closely related to Bostrom’s Technological completion conjecture: “If scientific and technological development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained.” (Bostrom, Superintelligence, pp. 
228, Chapter 14, 2014)Bostrom illustrates this kind of position (though apparently rejects it; from Superintelligence, found here): “Suppose that a policymaker proposes to cut funding for a certain research field, out of concern for the risks or long-term consequences of some hypothetical technology that might eventually grow from its soil. She can then expect a howl of opposition from the research community. Scientists and their public advocates often say that it is futile to try to control the evolution of technology by blocking research. If some technology is feasible (the argument goes) it will be developed regardless of any particular policymaker’s scruples about speculative future risks. Indeed, the more powerful the capabilities that a line of development promises to produce, the surer we can be that somebody, somewhere, will be motivated to pursue it. Funding cuts will not stop progress or forestall its concomitant dangers.”This kind of thing is also discussed by Dafoe and Sundaram, Maas & Beard\n3 (Some inspiration from Matthijs Maas’ spreadsheet, from Paths Untaken, and from GPT-3.)\n4 From a private conversation with Rick Korzekwa, who may have read https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1139110/ and an internal draft at AI Impacts, probably forthcoming.\n5 More here and here. I haven’t read any of these, but it’s been a topic of discussion for a while.\n6 “To aid in promoting secrecy, schemes to improve incentives were devised. One method sometimes used was for authors to send papers to journals to establish their claim to the finding but ask that publication of the papers be delayed indefinitely.26,27,28,29 Szilárd also suggested offering funding in place of credit in the short term for scientists willing to submit to secrecy and organizing limited circulation of key papers.30” – Me, previously\n7 ‘Lock-in’ of values is the act of using powerful technology such as AI to ensure that specific values will stably control the future.\n8 And also in Britain:‘This paper discusses the results of a nationally representative survey of the UK population on their perceptions of AI…the most common visions of the impact of AI elicit significant anxiety. Only two of the eight narratives elicited more excitement than concern (AI making life easier, and extending life). Respondents felt they had no control over AI’s development, citing the power of corporations or government, or versions of technological determinism. Negotiating the deployment of AI will require contending with these anxieties.’\n9 Or so worries Eliezer Yudkowsky—In MIRI announces new “Death With Dignity” strategy:\n\n“… this isn’t primarily a social-political problem, of just getting people to listen.  Even if DeepMind listened, and Anthropic knew, and they both backed off from destroying the world, that would just mean Facebook AI Research destroyed the world a year(?) later.”\n\nIn AGI Ruin: A List of Lethalities:\n\n“We can’t just “decide not to build AGI” because GPUs are everywhere, and knowledge of algorithms is constantly being improved and published; 2 years after the leading actor has the capability to destroy the world, 5 other actors will have the capability to destroy the world.  The given lethal challenge is to solve within a time limit, driven by the dynamic in which, over time, increasingly weak actors with a smaller and smaller fraction of total computing power, become able to build AGI and destroy the world.  
Powerful actors all refraining in unison from doing the suicidal thing just delays this time limit – it does not lift it, unless computer hardware and computer software progress are both brought to complete severe halts across the whole Earth.  The current state of this cooperation to have every big actor refrain from doing the stupid thing, is that at present some large actors with a lot of researchers and computing power are led by people who vocally disdain all talk of AGI safety (eg Facebook AI Research).  Note that needing to solve AGI alignment only within a time limit, but with unlimited safe retries for rapid experimentation on the full-powered system; or only on the first critical try, but with an unlimited time bound; would both be terrifically humanity-threatening challenges by historical standards individually.”\n\n10 I’d guess real Luddites also thought the technological changes they faced were anti-progress, but in that case were they wrong to want to avoid them?\n11 I hear this is an elaboration on this theme, but I haven’t read it.\n12 Leopold Aschenbrenner partly defines ‘Burkean Longtermism’ thus: “We should be skeptical of any radical inside-view schemes to positively steer the long-run future, given the froth of uncertainty about the consequences of our actions.”\nImage credit: Midjourney", "url": "https://aiimpacts.org/lets-think-about-slowing-down-ai/", "title": "Let’s think about slowing down AI", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2022-12-22T17:30:40+00:00", "paged_url": "https://aiimpacts.org/feed?paged=2", "authors": ["Katja Grace"], "id": "51820ce6a544aee0c2012b0fe185f31f", "summary": []} {"text": "December 2022 updates and fundraising\n\nHarlan Stewart and Katja Grace*, 22 December, 2022 \nNews\nNew Hires and role changes\nIn 2022, the AI Impacts team has grown from two to seven full time staff. Out of more than 250 applicants, we hired Elizabeth Santos as Operations Lead, Harlan Stewart as Research Assistant, and three Research Analysts: Zach Stein-Perlman, Aysja Johnson, and (are in the process of hiring) Jeffrey Heninger. We’re excited to have them all, and you can learn more about them on our about page.\nRick and Katja have traded some responsibilities: Rick is now Director of AI Impacts, and Katja is Lead Researcher. This means Rick is generally in charge of making decisions about running the org, though Katja has veto power. Katja is responsible for doing research, as well as directing and overseeing it.\nSummer Internship Program\nWe ran an internship program during the summer. Between May and September, six interns worked on various research projects on topics such as international coordination, explanations of historic human success, case studies in risk mitigation, R&D funding in AI, our new survey of Machine Learning researchers, current AI capabilities, technologies that are strategically-relevant to AI, and the scale of machine learning models. \nAI Impacts Wiki\nWe intend to replace our pages with an AI Impacts Wiki. Our pages have always been functionally something like a wiki, so hopefully this new format will make it clearer how to interact with them (as distinct from our blog posts), as well as easier to navigate for readers and easier to update for researchers. The AI Impacts Wiki will launch soon and can be previewed here. . We’ll say more about other minor changes when we launch it, but AI Impacts’ past and future public research will be either detailed  on the wiki or findable through the wiki. 
You can let us know what you think using our feedback form as well as comments on this blog post.
Research
Finished this year
This year, our main new pages and research-heavy blog posts are:

A survey of 738 machine learning experts about progress in AI. This survey was a rerun of the one conducted by AI Impacts in 2016, and a blog post on the tentative conclusions (Katja and Zach in collaboration with Ben Weinstein-Raun)
Detailed arguments answering the question, ‘Will Superhuman AI be created?’ with a tentative ‘yes’ (Katja)
Review of US public opinion surveys on AI (Zach)
A database of inducement prizes (Elizabeth)
A literature review of notable cognitive abilities of honeybees (Aysja)
An analysis of discontinuities in historic trends in manned altitude (Jeffrey)
A list of counterarguments to the basic AI x-risk case (Katja)
A list of possible incentives to create AI that is known to pose extinction risks (Katja)
Lists of sources arguing for and against existential risk from AI (Katja)
A case that interventions to slow down AI progress should be considered more seriously (Katja)

AI Impacts is in large part a set of pages that are intended to get updated over time, so our research should not necessarily show up as new pages, and is generally a bit harder to measure than in more standard research institutions. On this occasion, the above pages and posts probably represent most of our finished research output this year.
In progress
Things people are working on lately:

Noteworthy capabilities and limitations of state-of-the-art AI (Zach, Harlan)
A case study of Alexander Fleming’s efforts to warn the world about antibiotic resistance (Harlan)
A literature review of notable cognitive abilities of ants (Aysja)
Review and analysis of AI forecasting methods (Zach)
Case studies of actors deciding not to pursue technologies, despite apparent incentives to do so (Jeffrey, Aysja)
Strategically significant narrow AI capabilities (Zach)
The implications of the Fermi paradox and anthropics for AI (Zach)
What evidence from computational irreducibility says about where powerful AI should not be able to strongly outperform humans (Jeffrey, Aysja)
Evidence about how uniform the brain’s cortex is (Aysja)
Minor additions to Discontinuous Progress Investigation, a project looking for historical examples of discontinuous progress in technological trends
Arguments for AI being an existential risk (Katja)
A paper about the survey (Zach)
Finishing up of various summer internship projects mentioned earlier (interns)
(Probably some other things)

Funding
Thank you to our recent funders! They include Jaan Tallinn, who just gave us a $546k grant through the Survival and Flourishing Fund, and Open Philanthropy, which recently supported us for three months with a grant of $364,893.
We expected to receive a grant from the FTX Future Fund to cover running the 2022 Expert Survey on Progress in AI, but didn’t receive the money due to FTX’s collapse. If anyone wants the funders’ share of moral credit for paying our survey participants in particular, at a cost of around $30k for the whole thing, please get in touch! (The ex-Future Fund team still deserves credit for substantial encouragement in making the survey happen—thank you to them!)
Request for more funding
We would love to be funded more!
We currently have around $662k. This is about 5-8 months of runway depending on our frugality (spending more money might look like e.g.
an additional researcher, a 2023 internship program, freer budgets for travel and training etc, setting salaries more apt to the job market). We are looking for another $395k-$895k to cover 2023, and would also ideally like to extend our runway. \nIf you might be interested in this, and want to hear more about why we think our work is important enough to fund, Katja’s blog post, Why work at AI Impacts? outlines some of our reasons. If you want to talk to us about why we should be funded or hear more details about what we would do with money, please write to Elizabeth, Rick or Katja at [firstname]@aiimpacts.org.\nIf you’d like to donate to AI Impacts, you can do so here. (And we thank you!) \n\n*Harlan did much of the work, Katja put it up without Harlan seeing its final state, so is responsible for any errors.", "url": "https://aiimpacts.org/december-2022-updates-and-fundraising/", "title": "December 2022 updates and fundraising", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2022-12-22T17:11:31+00:00", "paged_url": "https://aiimpacts.org/feed?paged=2", "authors": ["Katja Grace"], "id": "9757549d4121f987c99ea192ef048526", "summary": []} {"text": "Against a General Factor of Doom\n\nJeffrey Heninger, 22 November 2022\nI was recently reading the results of a survey asking climate experts about their opinions on geoengineering. The results surprised me: “We find that respondents who expect severe global climate change damages and who have little confidence in current mitigation efforts are more opposed to geoengineering than respondents who are less pessimistic about global damages and mitigation efforts.”1 This seems backwards. Shouldn’t people who think that climate change will be bad and that our current efforts are insufficient be more willing to discuss and research other strategies, including intentionally cooling the planet?\nI do not know what they are thinking, but I can make a guess that would explain the result: people are responding using a ‘general factor of doom’ instead of considering the questions independently. Each climate expert has a p(Doom) for climate change, or perhaps a more vague feeling of doominess. Their stated beliefs on specific questions are mostly just expressions of their p(Doom).\nIf my guess is correct, then people first decide how doomy climate change is, and then they use this general factor of doom to answer the questions about severity, mitigation efforts, and geoengineering. I don’t know how people establish their doominess: it might be as a result of thinking about one specific question, or it might be based on whether they are more optimistic or pessimistic overall, or it might be something else. Once they have a general factor of doom, it determines how they respond to specific questions they subsequently encounter. I think that people should instead decide their answers to specific questions independently, combine them to form multiple plausible future pathways, and then use these to determine p(Doom). Using a model with more details is more difficult than using a general factor of doom, so it would not be surprising if few people did it.\nTo distinguish between these two possibilities, we could ask people a collection of specific questions that are all doom-related, but are not obviously connected to each other. 
For example:\n\nHow much would the Asian monsoon weaken with 1°C of warming?\nHow many people would be displaced by a 50 cm rise in sea levels?\nHow much carbon dioxide will the US emit in 2040?\nHow would vegetation growth be different if 2% of incoming sunlight were scattered by stratospheric aerosols?\n\nIf the answers to all of these questions were correlated, that would be evidence for people using a general factor of doom to answer these questions instead of using a more detailed model of the world.\nI wonder if a similar phenomenon could be happening in AI Alignment research.2\nWe can construct a list of specific questions that are relevant to AI doom:\n\nHow long are the timelines until someone develops AGI?\nHow hard of a takeoff will we see after AGI is developed?\nHow fragile are good values? Are two similar ethical systems similarly good?\nHow hard is it for people to teach a value system to an AI?\nHow hard is it to make an AGI corrigible?\nShould we expect simple alignment failures to occur before catastrophic alignment failures?\nHow likely is human extinction if we don’t find a solution to the Alignment Problem?\nHow hard is it to design a good governance mechanism for AI capabilities research?\nHow hard is it to implement and enforce a good governance mechanism for AI capabilities research?\n\nI don’t have any good evidence for this, but my vague impression is that many people’s answers to these questions are correlated.\nIt would not be too surprising if some pairs of these questions ought to be correlated. Different people would likely disagree on which things ought to be correlated. For example, Paul Christiano seems to think that short timelines and fast takeoff speeds are anti-correlated.3 Someone else might categorize these questions as ‘AGI is simple’ vs. ‘Aligning things is hard’ and expect correlations within but not between these categories. People might also disagree on whether people and AGI will be similar (so aligning AGI and aligning governance are similarly hard) or very different (so teaching AGI good values is much harder than teaching people good values). With all of these various arguments, it would be surprising if beliefs across all of these questions were correlated. If they were, it would suggest that a general factor of doom is driving people’s beliefs.\nThere are several biases which seem to be related to the general factor of doom. The halo effect (or horns effect) is when a single good (or bad) belief about a person or brand causes someone to believe that that person or brand is good (or bad) in many other ways.4 The fallacy of mood affiliation is when someone’s response to an argument is based on how the argument impacts the mood surrounding the issue, instead of responding to the argument itself. 5 The general factor of doom is a more specific bias, and feels less like an emotional response: People have detailed arguments describing why the future will be maximally or minimally doomy. 
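To make the proposed test concrete: given answers to a battery of questions like these, with one row per respondent and each question coded so that higher means doomier, one could look at the pairwise correlations and at how much variance a single ‘doom factor’ explains. A minimal sketch, assuming numpy is available and using made-up data in place of real survey responses:

```python
import numpy as np

# Stand-in for real survey data: one row per respondent, one column per
# doom-related question, each coded so that higher = doomier.
rng = np.random.default_rng(0)
responses = rng.normal(size=(200, 6))

# Pairwise correlations between questions.
corr = np.corrcoef(responses, rowvar=False)
mean_off_diag = (corr.sum() - np.trace(corr)) / (corr.size - len(corr))
print("mean off-diagonal correlation:", round(mean_off_diag, 3))

# A 'general factor of doom' would show up as a dominant first eigenvalue of
# the correlation matrix: the share of variance explained by the first factor.
eigvals = np.linalg.eigvalsh(corr)[::-1]
print("variance explained by first factor:", round(eigvals[0] / eigvals.sum(), 3))
```

If answers were independent, the first factor would explain roughly one sixth of the variance here; answers driven mostly by a single underlying doominess would push that share toward one.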
The futures described are plausible, but considering how much disagreement there is, it would be surprising if only a few plausible futures are focused on and if these futures have similarly doomy predictions for many specific questions.6 I am also reminded of Beware Surprising and Suspicious Convergence, although it focuses more on beliefs that aren’t updated when someone’s worldview changes, instead of on beliefs within a worldview which are surprisingly correlated.7\nThe AI Impacts survey 8 is probably not relevant to determining if AI safety researchers have a general factor of doom. The survey was of machine learning researchers, not AI safety researchers. I spot checked several random pairs of doom-related questions 9 anyway, and they didn’t look correlated. I’m not sure whether to interpret this to mean that they are using multiple detailed models or that they don’t even have a simple model.\nThere is also this graph,10 which claims to be “wildly out-of-date and chock full of huge outrageous errors.” This graph seems to suggest some correlation between two different doom-related questions, and that the distribution is surprisingly bimodal. If we were to take this more seriously than we probably should, we could use it as evidence for a general factor of doom, and that most people’s p(Doom) is close to 0 or 1.11 I do not think that this graph is particularly strong evidence even if it is accurate, but it does gesture in the same direction that I am pointing at.\n\nIt would be interesting to do an actual survey of AI safety researchers, with more than just two questions, to see how closely all of the responses are correlated with each other. It would also be interesting to see whether doominess in one field is correlated with doominess in other fields. I don’t know whether this survey would show evidence for a general factor of doom among AI safety researchers, but it seems plausible that it would.\nNotes", "url": "https://aiimpacts.org/against-a-general-factor-of-doom/", "title": "Against a General Factor of Doom", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2022-11-23T16:45:49+00:00", "paged_url": "https://aiimpacts.org/feed?paged=2", "authors": ["Jeffrey Heninger"], "id": "7490991d8f8f576d786f1408dce869c6", "summary": []} {"text": "Notes on an Experiment with Markets\n\nJeffrey Heninger, 22 November 2022\nAI Impacts is a research group with seven employees. From Oct 31 – Nov 3, we had a work retreat. We decided to try using Manifold Markets to help us plan social events in the evenings. Here are some notes from this experiment.\nStructure of the Experiment\nKatja created a group on Manifold Markets for AI Impacts, and an initial collection of markets. Anyone could add a market to this group, and five of us created at least one market. Each of us would rate each evening from 0 to 10 on an anonymous Google form. Most of the questions in the group were about the results of the form, often conditional on what activity we would do that evening. For example: “On the first day that at least 4 people begin a game of One Night Werewolf at the AI Impacts retreat, will the average evening rating be above 8?” The markets would resolve at some point the next morning after we had submitted our forms and Katja calculated the average evening rating.\nDisagreements about the Experiment\nThere were several disagreements about how the experiment was supposed to be run. \nInitially, the role of the evening rating form was unclear. 
Was it asking for your honest assessment of the evening or was it part of the game? “What number would you like to assign to the evening?” is different from “How good was your evening honestly?” We decided that we wanted honest responses. Even then, the numbers were ambiguous. What constitutes a 7 evening vs. a 9 evening? Different people’s baselines result in different scores, which can alter the average. After the first evening, we had a better estimate of the baseline. Many of the markets had used an average score of above 8, which was higher than the baseline. This made the markets less useful: predictions mostly shifted toward lower probabilities across the board rather than conveying much about particular activities. It’s not clear why this happened, but it might have been because we didn’t want to bet against ourselves having a good time or because the tail of an unknown distribution is harder to predict than the middle of the distribution.\nOne morning, Katja told us the average score before resolving her markets. Zach used this information to bet on these markets. Rick thought that it was unclear whether this should be allowed, because not everyone was there and because the previous discussion about honest ratings suggested that we should ask before doing something that might give an advantage independent of prediction ability. We decided that this would not be allowed in the future, and that we would not tell each other the results of the markets before resolving them.\nUnrealized Potential Problems\nWe thought of several other potential problems that did not end up being an issue.\nOne potential concern was that the interplay between the dynamics of the market and social events might make the socialization worse. Someone who had bet against having a good evening might have less reason to want the evening to be enjoyable for himself and others. If people spent time during the evening thinking about and frequently betting on the markets, it might disrupt the ongoing activities. In practice, while people did bet on the markets in the evening, it did not disrupt the other activities.\nWe had several other ideas for how to mess up the markets: filling out the anonymous form multiple times, colluding or bribing people to alter their scores, publicly filling out your form before the evening begins to manipulate the market, and purposely trying to thwart other people’s clever strategies. None of us tried doing any of these, but they might become relevant if the stakes were higher. There is also the concern that conditional and counterfactual predictions are not the same: For decision making, we would like to compare various counterfactuals, but it’s easier to make markets which are conditional on us doing something. If we decide to do that thing, it is probably because at least some of us want to do it, so the conditional prediction will be higher than the counterfactual prediction.\nWhat We Did in the Evening\nThe goal of the markets was to help us plan out social events in the evenings. If the market thought that the evening’s rating would likely be higher if we wore Halloween costumes than if we used the hot tub, then we should decide to wear Halloween costumes.\nPeople mostly did not use the markets to decide what to do. On the first evening, the highest-rated activity was a guitar sing-along. We did not end up doing that on any of the evenings. The activity that seems to have been the most fun for the most people1 was cooperative round-the-table ping-pong.
This was done spontaneously, adding more people as they came to the table, without any market predicting the result. We spent a decent amount of time just sitting around talking to each other, which also did not have a market. Our decision making process seemed to be less formal: someone would suggest an activity or say that they would personally do the activity, and other people would join. Having someone look at the markets and announce which activity rated the highest would have added more steps and organization compared to what we did.\nWe also tried varying the structure of the markets to see if that made them more useful. For example, the market “Will we use the hot tub and have fun tonight?” had four choices for the combinations of whether or not at least four people would use the hot tub and whether the average evening rating would be above or below 7.2 Katja did use this market to argue that people should use the hot tub.\nThere seems to have been a few things that kept the markets from being more useful: (1) Most of us did not know what kinds of social activities most of the rest of us preferred, so it was hard for anyone to make an informed bet. It wasn’t clear how the market provided more information than if we had used a voting system. (2) The connection between four people doing an activity and the average evening rating was too weak for much of a signal to go through. The ratings ended up being noisy, and not specific enough for particular activities. (3) The act of checking the markets and announcing a decision was more formal than our actual decision making process. The market only included a short list of possibilities and did not suggest spontaneity. \nConclusion\nHaving prediction markets for the evening social activities was a fun addition to the AI Impacts retreat. There were about 20 markets about the retreat which most of the people at the retreat bet on. But the markets did not end up having a significant impact on what we did during the evening.\nMost of us did not have experience using prediction markets before the retreat. We decided not to use the markets to make important decisions, because we did not know what problems they would cause. The markets would likely have been more impactful if we were more experienced and if the questions were about more important decisions. If we did use the markets for important decisions, we would have to make sure that the markets are harder to exploit and have more rules and fewer norms governing how we would bet on the markets.\nSince the retreat, Katja has used a market to help plan an AI Impacts dinner. We plan to continue experimenting with using prediction markets to make predictions in the future.\n\nNotes", "url": "https://aiimpacts.org/notes-on-an-experiment-with-markets/", "title": "Notes on an Experiment with Markets", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2022-11-23T16:07:11+00:00", "paged_url": "https://aiimpacts.org/feed?paged=2", "authors": ["Jeffrey Heninger"], "id": "e2725a4cc950dc07a9426d6c42c81093", "summary": []} {"text": "Counterarguments to the basic AI x-risk case\n\nKatja Grace, 31 August 2022\nThis is going to be a list of holes I see in the basic argument for existential risk from superhuman AI systems1. \nTo start, here’s an outline of what I take to be the basic case2:\nI. If superhuman AI systems are built, any given system is likely to be ‘goal-directed’\nReasons to expect this:\n\nGoal-directed behavior is likely to be valuable, e.g. economically. 
\nGoal-directed entities may tend to arise from machine learning training processes not intending to create them (at least via the methods that are likely to be used).\n‘Coherence arguments’ may imply that systems with some goal-directedness will become more strongly goal-directed over time.\n\nII. If goal-directed superhuman AI systems are built, their desired outcomes will probably be about as bad as an empty universe by human lights \nReasons to expect this:\n\nFinding useful goals that aren’t extinction-level bad appears to be hard: we don’t have a way to usefully point at human goals, and divergences from human goals seem likely to produce goals that are in intense conflict with human goals, due to a) most goals producing convergent incentives for controlling everything, and b) value being ‘fragile’, such that an entity with ‘similar’ values will generally create a future of virtually no value.\nFinding goals that are extinction-level bad and temporarily useful appears to be easy: for example, advanced AI with the sole objective ‘maximize company revenue’ might profit said company for a time before gathering the influence and wherewithal to pursue the goal in ways that blatantly harm society.\nEven if humanity found acceptable goals, giving a powerful AI system any specific goals appears to be hard. We don’t know of any procedure to do it, and we have theoretical reasons to expect that AI systems produced through machine learning training will generally end up with goals other than those they were trained according to. Randomly aberrant goals resulting are probably extinction-level bad for reasons described in II.1 above.\n\nIII. If most goal-directed superhuman AI systems have bad goals, the future will very likely be bad\nThat is, a set of ill-motivated goal-directed superhuman AI systems, of a scale likely to occur, would be capable of taking control over the future from humans. This is supported by at least one of the following being true:\n\nSuperhuman AI would destroy humanity rapidly. This may be via ultra-powerful capabilities at e.g. technology design and strategic scheming, or through gaining such powers in an ‘intelligence explosion‘ (self-improvement cycle). Either of those things may happen either through exceptional heights of intelligence being reached or through highly destructive ideas being available to minds only mildly beyond our own.\nSuperhuman AI would gradually come to control the future via accruing power and resources. Power and resources would be more available to the AI system(s) than to humans on average, because of the AI having far greater intelligence.\n\n***\nBelow is a list of gaps in the above, as I see it, and counterarguments. A ‘gap’ is not necessarily unfillable, and may have been filled in any of the countless writings on this topic that I haven’t read. I might even think that a given one can probably be filled. I just don’t know what goes in it.  \nThis blog post is an attempt to run various arguments by you all on the way to making pages on AI Impacts about arguments for AI risk and corresponding counterarguments. At some point in that process I hope to also read others’ arguments, but this is not that day. So what you have here is a bunch of arguments that occur to me, not an exhaustive literature review. \nCounterarguments\nA. Contra “superhuman AI systems will be ‘goal-directed’”\nDifferent calls to ‘goal-directedness’ don’t necessarily mean the same concept\n‘Goal-directedness’ is a vague concept. 
It is unclear that the ‘goal-directednesses’ that are favored by economic pressure, training dynamics or coherence arguments (the component arguments in part I of the argument above) are the same ‘goal-directedness’ that implies a zealous drive to control the universe (i.e. that makes most possible goals very bad, fulfilling II above). \nOne well-defined concept of goal-directedness is ‘utility maximization’: always doing what maximizes a particular utility function, given a particular set of beliefs about the world. \nUtility maximization does seem to quickly engender an interest in controlling literally everything, at least for many utility functions one might have3. If you want things to go a certain way, then you have reason to control anything which gives you any leverage over that, i.e. potentially all resources in the universe (i.e. agents have ‘convergent instrumental goals’). This is in serious conflict with anyone else with resource-sensitive goals, even if prima facie those goals didn’t look particularly opposed. For instance, a person who wants all things to be red and another person who wants all things to be cubes may not seem to be at odds, given that all things could be red cubes. However if these projects might each fail for lack of energy, then they are probably at odds. \nThus utility maximization is a notion of goal-directedness that allows Part II of the argument to work, by making a large class of goals deadly.\nYou might think that any other concept of ‘goal-directedness’ would also lead to this zealotry. If one is inclined toward outcome O in any plausible sense, then does one not have an interest in anything that might help procure O? No: if a system is not a ‘coherent’ agent, then it can have a tendency to bring about O in a range of circumstances, without this implying that it will take any given effective opportunity to pursue O. This assumption of consistent adherence to a particular evaluation of everything is part of utility maximization, not a law of physical systems. Call machines that push toward particular goals but are not utility maximizers pseudo-agents. \nCan pseudo-agents exist? Yes—utility maximization is computationally intractable, so any physically existent ‘goal-directed’ entity is going to be a pseudo-agent. We are all pseudo-agents, at best. But it seems something like a spectrum. At one end is a thermostat, then maybe a thermostat with a better algorithm for adjusting the heat. Then maybe a thermostat which intelligently controls the windows. After a lot of honing, you might have a system much more like a utility-maximizer: a system that deftly seeks out and seizes well-priced opportunities to make your room 68 degrees—upgrading your house, buying R&D, influencing your culture, building a vast mining empire. Humans might not be very far on this spectrum, but they seem enough like utility-maximizers already to be alarming. (And it might not be well-considered as a one-dimensional spectrum—for instance, perhaps ‘tendency to modify oneself to become more coherent’ is a fairly different axis from ‘consistency of evaluations of options and outcomes’, and calling both ‘more agentic’ is obscuring.)\nNonetheless, it seems plausible that there is a large space of systems which strongly increase the chance of some desirable objective O occurring without even acting as much like maximizers of an identifiable utility function as humans would. 
For instance, without searching out novel ways of making O occur, or modifying themselves to be more consistently O-maximizing. Call these ‘weak pseudo-agents’. \nFor example, I can imagine a system constructed out of a huge number of ‘IF X THEN Y’ statements (reflexive responses), like ‘if body is in hallway, move North’, ‘if hands are by legs and body is in kitchen, raise hands to waist’.., equivalent to a kind of vector field of motions, such that for every particular state, there are directions that all the parts of you should be moving. I could imagine this being designed to fairly consistently cause O to happen within some context. However since such behavior would not be produced by a process optimizing O, you shouldn’t expect it to find new and strange routes to O, or to seek O reliably in novel circumstances. There appears to be zero pressure for this thing to become more coherent, unless its design already involves reflexes to move its thoughts in certain ways that lead it to change itself. I expect you could build a system like this that reliably runs around and tidies your house say, or runs your social media presence, without it containing any impetus to become a more coherent agent (because it doesn’t have any reflexes that lead to pondering self-improvement in this way).\nIt is not clear that economic incentives generally favor the far end of this spectrum over weak pseudo-agency. There are incentives toward systems being more like utility maximizers, but also incentives against. \nThe reason any kind of ‘goal-directedness’ is incentivised in AI systems is that then the system can be given an objective by someone hoping to use their cognitive labor, and the system will make that objective happen. Whereas a similar non-agentic AI system might still do almost the same cognitive labor, but require an agent (such as a person) to look at the objective and decide what should be done to achieve it, then ask the system for that. Goal-directedness means automating this high-level strategizing. \nWeak pseudo-agency fulfills this purpose to some extent, but not as well as utility maximization. However if we think that utility maximization is difficult to wield without great destruction, then that suggests a disincentive to creating systems with behavior closer to utility-maximization. Not just from the world being destroyed, but from the same dynamic causing more minor divergences from expectations, if the user can’t specify their own utility function well. \nThat is, if it is true that utility maximization tends to lead to very bad outcomes relative to any slightly different goals (in the absence of great advances in the field of AI alignment), then the most economically favored level of goal-directedness seems unlikely to be as far as possible toward utility maximization. More likely it is a level of pseudo-agency that achieves a lot of the users’ desires without bringing about sufficiently detrimental side effects to make it not worthwhile. (This is likely more agency than is socially optimal, since some of the side-effects will be harms to others, but there seems no reason to think that it is a very high degree of agency.)\nSome minor but perhaps illustrative evidence: anecdotally, people prefer interacting with others who predictably carry out their roles or adhere to deontological constraints, rather than consequentialists in pursuit of broadly good but somewhat unknown goals. 
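A minimal sketch of the reflex-based system described above, with invented states and rules: behavior is a fixed lookup from the current situation to an action, with no search for novel routes to the objective and no machinery for self-modification.

```python
# A 'weak pseudo-agent' for house-tidying: a fixed table of
# IF-situation THEN-action reflexes (states and actions invented here).
REFLEXES = {
    ("hallway", "holding nothing"): "move north",
    ("kitchen", "holding nothing"): "pick up dish",
    ("kitchen", "holding dish"): "put dish in sink",
    ("bedroom", "holding nothing"): "straighten bed",
}

def act(room: str, hands: str) -> str:
    # Unanticipated states get a harmless default rather than creative
    # problem-solving; nothing here pushes the system toward more coherence.
    return REFLEXES.get((room, hands), "do nothing")

print(act("kitchen", "holding dish"))    # put dish in sink
print(act("garage", "holding nothing"))  # do nothing
```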
For instance, employers would often prefer employees who predictably follow rules than ones who try to forward company success in unforeseen ways.\nThe other arguments to expect goal-directed systems mentioned above seem more likely to suggest approximate utility-maximization rather than some other form of goal-directedness, but it isn’t that clear to me. I don’t know what kind of entity is most naturally produced by contemporary ML training. Perhaps someone else does. I would guess that it’s more like the reflex-based agent described above, at least at present. But present systems aren’t the concern.\nCoherence arguments are arguments for being coherent a.k.a. maximizing a utility function, so one might think that they imply a force for utility maximization in particular. That seems broadly right. Though note that these are arguments that there is some pressure for the system to modify itself to become more coherent. What actually results from specific systems modifying themselves seems like it might have details not foreseen in an abstract argument merely suggesting that the status quo is suboptimal whenever it is not coherent. Starting from a state of arbitrary incoherence and moving iteratively in one of many pro-coherence directions produced by whatever whacky mind you currently have isn’t obviously guaranteed to increasingly approximate maximization of some sensical utility function. For instance, take an entity with a cycle of preferences, apples > bananas = oranges > pears > apples. The entity notices that it sometimes treats oranges as better than pears and sometimes worse. It tries to correct by adjusting the value of oranges to be the same as pears. The new utility function is exactly as incoherent as the old one. Probably moves like this are rarer than ones that make you more coherent in this situation, but I don’t know, and I also don’t know if this is a great model of the situation for incoherent systems that could become more coherent.\nWhat it might look like if this gap matters: AI systems proliferate, and have various goals. Some AI systems try to make money in the stock market. Some make movies. Some try to direct traffic optimally. Some try to make the Democratic party win an election. Some try to make Walmart maximally profitable. These systems have no perceptible desire to optimize the universe for forwarding these goals because they aren’t maximizing a general utility function, they are more ‘behaving like someone who is trying to make Walmart profitable’. They make strategic plans and think about their comparative advantage and forecast business dynamics, but they don’t build nanotechnology to manipulate everybody’s brains, because that’s not the kind of behavior pattern they were designed to follow. The world looks kind of like the current world, in that it is fairly non-obvious what any entity’s ‘utility function’ is. It often looks like AI systems are ‘trying’ to do things, but there’s no reason to think that they are enacting a rational and consistent plan, and they rarely do anything shocking or galaxy-brained.\nAmbiguously strong forces for goal-directedness need to meet an ambiguously high bar to cause a risk\nThe forces for goal-directedness mentioned in I are presumably of finite strength. For instance, if coherence arguments correspond to pressure for machines to become more like utility maximizers, there is an empirical answer to how fast that would happen with a given system. 
There is also an empirical answer to how ‘much’ goal directedness is needed to bring about disaster, supposing that utility maximization would bring about disaster and, say, being a rock wouldn’t. Without investigating these empirical details, it is unclear whether a particular qualitatively identified force for goal-directedness will cause disaster within a particular time.\nWhat it might look like if this gap matters: There are not that many systems doing something like utility maximization in the new AI economy. Demand is mostly for systems more like GPT or DALL-E, which transform inputs in some known way without reference to the world, rather than ‘trying’ to bring about an outcome. Maybe the world was headed for more of the latter, but ethical and safety concerns reduced desire for it, and it wasn’t that hard to do something else. Companies setting out to make non-agentic AI systems have no trouble doing so. Incoherent AIs are never observed making themselves more coherent, and training has never produced an agent unexpectedly. There are lots of vaguely agentic things, but they don’t pose much of a problem. There are a few things at least as agentic as humans, but they are a small part of the economy.\nB. Contra “goal-directed AI systems’ goals will be bad”\nSmall differences in utility functions may not be catastrophic\nArguably, humans are likely to have somewhat different values to one another even after arbitrary reflection. If so, there is some extended region of the space of possible values that the values of different humans fall within. That is, ‘human values’ is not a single point.\nIf the values of misaligned AI systems fall within that region, this would not appear to be worse in expectation than the situation where the long-run future was determined by the values of humans other than you. (This may still be a huge loss of value relative to the alternative, if a future determined by your own values is vastly better than that chosen by a different human, and if you also expected to get some small fraction of the future, and will now get much less. These conditions seem non-obvious however, and if they obtain you should worry about more general problems than AI.)\nPlausibly even a single human, after reflecting, could on their own come to different places in a whole region of specific values, depending on somewhat arbitrary features of how the reflecting period went. In that case, even the values-on-reflection of a single human is an extended region of values space, and an AI which is only slightly misaligned could be the same as some version of you after reflecting.\nThere is a further larger region, ‘that which can be reliably enough aligned with typical human values via incentives in the environment’, which is arguably larger than the circle containing most human values. Human society makes use of this a lot: for instance, most of the time particularly evil humans don’t do anything too objectionable because it isn’t in their interests. This region is probably smaller for more capable creatures such as advanced AIs, but still it is some size.\nThus it seems that some amount4 of AI divergence from your own values is probably broadly fine, i.e. not worse than what you should otherwise expect without AI. \nThus in order to arrive at a conclusion of doom, it is not enough to argue that we cannot align AI perfectly. The question is a quantitative one of whether we can get it close enough. And how close is ‘close enough’ is not known. 
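One way to make the quantitative flavor of this question vivid is a toy model (invented here, not part of the original argument): give an optimizer a proxy utility that differs from the ‘true’ one by noise of size d, and measure how much true value is lost as d grows. In this toy setup the loss grows smoothly with d rather than jumping to catastrophe at the first nonzero misalignment, though real value structures could of course behave differently.

```python
import numpy as np

rng = np.random.default_rng(1)
n_outcomes = 10_000
true_utility = rng.normal(size=n_outcomes)  # invented 'true' utility over outcomes

for d in (0.0, 0.01, 0.1, 0.5, 2.0):
    # Proxy utility = true utility plus misalignment noise of scale d.
    proxy_utility = true_utility + d * rng.normal(size=n_outcomes)
    chosen = np.argmax(proxy_utility)  # the optimizer picks the proxy-best outcome
    shortfall = true_utility.max() - true_utility[chosen]
    print(f"d={d}: true-value shortfall = {shortfall:.3f}")
```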
\nWhat it might look like if this gap matters: there are many superintelligent goal-directed AI systems around. They are trained to have human-like goals, but we know that their training is imperfect and none of them has goals exactly like those presented in training. However if you just heard about a particular system’s intentions, you wouldn’t be able to guess if it was an AI or a human. Things happen much faster than they were, because superintelligent AI is superintelligent, but not obviously in a direction less broadly in line with human goals than when humans were in charge.\nDifferences between AI and human values may be small \nAI trained to have human-like goals will have something close to human-like goals. How close? Call it d, for a particular occasion of training AI. \nIf d doesn’t have to be 0 for safety (from above), then there is a question of whether it is an acceptable size. \nI know of two issues here, pushing d upward. One is that with a finite number of training examples, the fit between the true function and the learned function will be wrong. The other is that you might accidentally create a monster (‘misaligned mesaoptimizer’) who understands its situation and pretends to have the utility function you are aiming for so that it can be freed and go out and manifest its own utility function, which could be just about anything. If this problem is real, then the values of an AI system might be arbitrarily different from the training values, rather than ‘nearby’ in some sense, so d is probably unacceptably large. But if you avoid creating such mesaoptimizers, then it seems plausible to me that d is very small. \nIf humans also substantially learn their values via observing examples, then the variation in human values is arising from a similar process, so might be expected to be of a similar scale. If we care to make the ML training process more accurate than the human learning one, it seems likely that we could. For instance, d gets smaller with more data.\nAnother line of evidence is that for things that I have seen AI learn so far, the distance from the real thing is intuitively small. If AI learns my values as well as it learns what faces look like, it seems plausible that it carries them out better than I do.\nAs minor additional evidence here, I don’t know how to describe any slight differences in utility functions that are catastrophic. Talking concretely, what does a utility function look like that is so close to a human utility function that an AI system has it after a bunch of training, but which is an absolute disaster? Are we talking about the scenario where the AI values a slightly different concept of justice, or values satisfaction a smidgen more relative to joy than it should? And then that’s a moral disaster because it is wrought across the cosmos? Or is it that it looks at all of our inaction and thinks we want stuff to be maintained very similar to how it is now, so crushes any efforts to improve things? \nWhat it might look like if this gap matters: when we try to train AI systems to care about what specific humans care about, they usually pretty much do, as far as we can tell. We basically get what we trained for. For instance, it is hard to distinguish them from the human in question. 
(It is still important to actually do this training, rather than making AI systems not trained to have human values.)\nMaybe value isn’t fragile\nEliezer argued that value is fragile, via examples of ‘just one thing’ that you can leave out of a utility function, and end up with something very far away from what humans want. For instance, if you leave out ‘boredom’ then he thinks the preferred future might look like repeating the same otherwise perfect moment again and again. (His argument is perhaps longer—that post says there is a lot of important background, though the bits mentioned don’t sound relevant to my disagreement.) This sounds to me like ‘value is not resilient to having components of it moved to zero’, which is a weird usage of ‘fragile’, and in particular, doesn’t seem to imply much about smaller perturbations. And smaller perturbations seem like the relevant thing with AI systems trained on a bunch of data to mimic something. \nYou could very analogously say ‘human faces are fragile’ because if you just leave out the nose it suddenly doesn’t look like a typical human face at all. Sure, but is that the kind of error you get when you try to train ML systems to mimic human faces? Almost none of the faces on thispersondoesnotexist.com are blatantly morphologically unusual in any way, let alone noseless. Admittedly one time I saw someone whose face was neon green goo, but I’m guessing you can get the rate of that down pretty low if you care about it.\nEight examples, no cherry-picking:\n[Eight example faces from thispersondoesnotexist.com]\nSkipping the nose is the kind of mistake you make if you are a child drawing a face from memory. Skipping ‘boredom’ is the kind of mistake you make if you are a person trying to write down human values from memory. My guess is that this seemed closer to the plan in 2009 when that post was written, and that people cached the takeaway and haven’t updated it for deep learning which can learn what faces look like better than you can.\nWhat it might look like if this gap matters: there is a large region ‘around’ my values in value space that is also pretty good according to me. AI easily lands within that space, and eventually creates some world that is about as good as the best possible utopia, according to me. There aren’t a lot of really crazy and terrible value systems adjacent to my values.\nShort-term goals\nUtility maximization really only incentivises drastically altering the universe if one’s utility function places a high enough value on very temporally distant outcomes relative to near ones. That is, long term goals are needed for danger. A person who cares most about winning the timed chess game in front of them should not spend time accruing resources to invest in better chess-playing.\nAI systems could have long-term goals via people intentionally training them to do so, or via long-term goals naturally arising from systems not trained so.
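A rough arithmetic illustration of why time horizons matter (using a standard exponential discount, which is an assumption of this sketch rather than of the post): the weight a discounter places on a payoff T years away is gamma**T, which collapses quickly for long horizons.

```python
# Weight placed on a payoff T years away by an exponential discounter with
# annual factor gamma. Values of gamma here are illustrative only.
for gamma in (0.99, 0.9, 0.5):
    for years in (1, 10, 100):
        print(f"gamma={gamma}, T={years}: weight = {gamma ** years:.2e}")
```

With gamma = 0.9, a payoff 100 years out carries a weight of about 3e-5, so plans whose payoff arrives only after something as slow as taking over the universe contribute almost nothing to such an objective.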
\nHumans seem to discount the future a lot in their usual decision-making (they have goals years in advance but rarely a hundred years) so the economic incentive to train AI to have very long term goals might be limited.\nIt’s not clear that training for relatively short term goals naturally produces creatures with very long term goals, though it might.\nThus if AI systems fail to have value systems relatively similar to human values, it is not clear that many will have the long time horizons needed to motivate taking over the universe.\nWhat it might look like if this gap matters: the world is full of agents who care about relatively near-term issues, and are helpful to that end, and have no incentive to make long-term large scale schemes. Reminiscent of the current world, but with cleverer short-termism.\nC. Contra “superhuman AI would be sufficiently superior to humans to overpower humanity”\nHuman success isn’t from individual intelligence\nThe argument claims (or assumes) that surpassing ‘human-level’ intelligence (i.e. the mental capacities of an individual human) is the relevant bar for matching the power-gaining capacity of humans, such that passing this bar in individual intellect means outcompeting humans in general in terms of power (argument III.2), if not being able to immediately destroy them all outright (argument III.1.). In a similar vein, introductions to AI risk often start by saying that humanity has triumphed over the other species because it is more intelligent, as a lead in to saying that if we make something more intelligent still, it will inexorably triumph over humanity.\nThis hypothesis about the provenance of human triumph seems wrong. Intellect surely helps, but humans look to be powerful largely because they share their meager intellectual discoveries with one another and consequently save them up over time5. You can see this starkly by comparing the material situation of Alice, a genius living in the stone age, and Bob, an average person living in 21st Century America. Alice might struggle all day to get a pot of water, while Bob might be able to summon all manner of delicious drinks from across the oceans, along with furniture, electronics, information, etc. Much of Bob’s power probably did flow from the application of intelligence, but not Bob’s individual intelligence. Alice’s intelligence, and that of those who came between them.\nBob’s greater power isn’t directly just from the knowledge and artifacts Bob inherits from other humans. He also seems to be helped for instance by much better coordination: both from a larger number people coordinating together, and from better infrastructure for that coordination (e.g. for Alice the height of coordination might be an occasional big multi-tribe meeting with trade, and for Bob it includes global instant messaging and banking systems and the Internet). One might attribute all of this ultimately to innovation, and thus to intelligence and communication, or not. I think it’s not important to sort out here, as long as it’s clear that individual intelligence isn’t the source of power.\nIt could still be that with a given bounty of shared knowledge (e.g. within a given society), intelligence grants huge advantages. But even that doesn’t look true here: 21st Century geniuses live basically like 21st Century people of average intelligence, give or take6.\nWhy does this matter? 
Well for one thing, if you make AI which is merely as smart as a human, you shouldn’t then expect it to do that much better than a genius living in the stone age. That’s what human-level intelligence gets you: nearly nothing. A piece of rope after millions of lifetimes. Humans without their culture are much like other animals. \nTo wield the control-over-the-world of a genius living in the 21st Century, the human-level AI would seem to need something like the other benefits that the 21st century genius gets from their situation in connection with a society. \nOne such thing is access to humanity’s shared stock of hard-won information. AI systems plausibly do have this, if they can get most of what is relevant by reading the internet. This isn’t obvious: people also inherit information from society through copying habits and customs, learning directly from other people, and receiving artifacts with implicit information (for instance, a factory allows whoever owns the factory to make use of intellectual work that was done by the people who built the factory, but that information may not be available explicitly even for the owner of the factory, let alone to readers on the internet). These sources of information seem likely to also be available to AI systems though, at least if they are afforded the same options as humans.\nMy best guess is that AI systems easily do better than humans on extracting information from humanity’s stockpile, and on coordinating, and so on this account are probably in an even better position to compete with humans than one might think on the individual intelligence model, but that is a guess. In that case perhaps this misunderstanding makes little difference to the outcomes of the argument. However it seems at least a bit more complicated. \nSuppose that AI systems can have access to all information humans can have access to. The power the 21st century person gains from their society is modulated by their role in society, and relationships, and rights, and the affordances society allows them as a result. Their power will vary enormously depending on whether they are employed, or listened to, or paid, or a citizen, or the president. If AI systems’ power stems substantially from interacting with society, then their power will also depend on affordances granted, and humans may choose not to grant them many affordances (see section ‘Intelligence may not be an overwhelming advantage’ for more discussion).\nHowever suppose that your new genius AI system is also treated with all privilege. The next way that this alternate model matters is that if most of what is good in a person’s life is determined by the society they are part of, and their own labor is just buying them a tiny piece of that inheritance, then if they are for instance twice as smart as any other human, they don’t get to use technology that is twice as good. They just get a larger piece of that same shared technological bounty purchasable by anyone. Each individual person is adding essentially nothing in terms of technology, so twice that is still basically nothing. \nIn contrast, I think people are often imagining that a single entity somewhat smarter than a human will be able to quickly use technologies that are somewhat better than current human technologies. This seems to be mistaking the actions of a human for the actions of a human society.
If a hundred thousand people sometimes get together for a few years and make fantastic new weapons, you should not expect an entity somewhat smarter than a person to make even better weapons. That’s off by a factor of about a hundred thousand. \nThere might be places you can get far ahead of humanity by being better than a single human—it depends how much accomplishments depend on the few most capable humans in the field, and how few people are working on the problem7. But for instance the Manhattan Project took a hundred thousand people several years, and von Neumann (a mythically smart scientist) joining the project did not reduce it to an afternoon. Plausibly to me, some specific people being on the project caused it to not take twice as many person-years, though the plausible candidates here seem to be more in the business of running things than doing science directly (though that also presumably involves intelligence). But even if you are an ambitious somewhat superhuman intelligence, the influence available to you seems to plausibly be limited to making a large dent in the effort required for some particular research endeavor, not single-handedly outmoding humans across many research endeavors.\nThis is all reason to doubt that a small number of superhuman intelligences will rapidly take over or destroy the world (as in III.i.). This doesn’t preclude a set of AI systems that are together more capable than a large number of people from making great progress. However some related issues seem to make that less likely.\nAnother implication of this model is that if most human power comes from buying access to society’s shared power, i.e. interacting with the economy, you should expect intellectual labor by AI systems to usually be sold, rather than for instance put toward a private stock of knowledge. This means the intellectual outputs are mostly going to society, and the main source of potential power to an AI system is the wages received (which may allow it to gain power in the long run). However it seems quite plausible that AI systems at this stage will generally not receive wages, since they presumably do not need them to be motivated to do the work they were trained for. It also seems plausible that they would be owned and run by humans. This would seem to not involve any transfer of power to that AI system, except insofar as its intellectual outputs benefit it (e.g. if it is writing advertising material, maybe it doesn’t get paid for that, but if it can write material that slightly furthers its own goals in the world while also fulfilling the advertising requirements, then it sneaked in some influence.) \nIf there is AI which is moderately more competent than humans, but not sufficiently more competent to take over the world, then it is likely to contribute to this stock of knowledge and affordances shared with humans. There is no reason to expect it to build a separate competing stock, any more than there is reason for a current human household to try to build a separate competing stock rather than sell their labor to others in the economy. \nIn summary:\n\nFunctional connection with a large community of other intelligences in the past and present is probably a much bigger factor in the success of humans as a species or individual humans than is individual intelligence. \nThus this also seems more likely to be important for AI success than individual intelligence. 
This is contrary to a usual argument for AI superiority, but probably leaves AI systems at least as likely to outperform humans, since superhuman AI is probably superhumanly good at taking in information and coordinating.\nHowever it is not obvious that AI systems will have the same access to society’s accumulated information e.g. if there is information which humans learn from living in society, rather than from reading the internet. \nAnd it seems an open question whether AI systems are given the same affordances in society as humans, which also seem important to making use of the accrued bounty of power over the world that humans have. For instance, if they are not granted the same legal rights as humans, they may be at a disadvantage in doing trade or engaging in politics or accruing power.\nThe fruits of greater intelligence for an entity will probably not look like society-level accomplishments unless it is a society-scale entity\nThe route to influence with smaller fruits probably by default looks like participating in the economy rather than trying to build a private stock of knowledge.\nIf the resources from participating in the economy accrue to the owners of AI systems, not to the systems themselves, then there is less reason to expect the systems to accrue power incrementally, and they are at a severe disadvantage relative to humans. \n\nOverall these are reasons to expect AI systems with around human-level cognitive performance to not destroy the world immediately, and to not amass power as easily as one might imagine. \nWhat it might look like if this gap matters: If AI systems are somewhat superhuman, then they do impressive cognitive work, and each contributes to technology more than the best human geniuses, but not more than the whole of society, and not enough to materially improve their own affordances. They don’t gain power rapidly because they are disadvantaged in other ways, e.g. by lack of information, lack of rights, lack of access to positions of power. Their work is sold and used by many actors, and the proceeds go to their human owners. AI systems do not generally end up with access to masses of technology that others do not have access to, and nor do they have private fortunes. In the long run, as they become more powerful, they might take power if other aspects of the situation don’t change. \nAI agents may not be radically superior to combinations of humans and non-agentic machines\n‘Human level capability’ is a moving target. For comparing the competence of advanced AI systems to humans, the relevant comparison is with humans who have state-of-the-art AI and other tools. For instance, the human capacity to make art quickly has recently been improved by a variety of AI art systems. If there were now an agentic AI system that made art, it would make art much faster than a human of 2015, but perhaps hardly faster than a human of late 2022. If humans continually have access to tool versions of AI capabilities, it is not clear that agentic AI systems must ever have an overwhelmingly large capability advantage for important tasks (though they might). \n(This is not an argument that humans might be better than AI systems, but rather: if the gap in capability is smaller, then the pressure for AI systems to accrue power is less and thus loss of human control is slower and easier to mitigate entirely through other forces, such as subsidizing human involvement or disadvantaging AI systems in the economy.)\nSome advantages of being an agentic AI system vs. 
a human with a tool AI system seem to be:\n\nThere might just not be an equivalent tool system, for instance if it is impossible to train systems without producing emergent agents.\nWhen every part of a process takes into account the final goal, this should make the choices within the task more apt for the final goal (and agents know their final goal, whereas tools carrying out parts of a larger problem do not).\nFor humans, the interface for using a capability of one’s mind tends to be smoother than the interface for using a tool. For instance a person who can do fast mental multiplication can do this more smoothly and use it more often than a person who needs to get out a calculator. This seems likely to persist.\n\n1 and 2 may or may not matter much. 3 matters more for brief, fast, unimportant tasks. For instance, consider again people who can do mental calculations better than others. My guess is that this advantages them at using Fermi estimates in their lives and buying cheaper groceries, but does not make them materially better at making large financial choices well. For a one-off large financial choice, the effort of getting out a calculator is worth it and the delay is very short compared to the length of the activity. The same seems likely true of humans with tools vs. agentic AI with the same capacities integrated into their minds. Conceivably the gap between humans with tools and goal-directed AI is small for large, important tasks.\nWhat it might look like if this gap matters: agentic AI systems have substantial advantages over humans with tools at some tasks like rapid interaction with humans, and responding to rapidly evolving strategic situations.  One-off large important tasks such as advanced science are mostly done by tool AI. \nTrust\nIf goal-directed AI systems are only mildly more competent than some combination of tool systems and humans (as suggested by considerations in the last two sections), we still might expect AI systems to out-compete humans, just more slowly. However AI systems have one serious disadvantage as employees of humans: they are intrinsically untrustworthy, while we don’t understand them well enough to be clear on what their values are or how they will behave in any given case. Even if they did perform as well as humans at some task, if humans can’t be certain of that, then there is reason to disprefer using them. This can be thought of as two problems: firstly, slightly misaligned systems are less valuable because they genuinely do the thing you want less well, and secondly, even if they were not misaligned, if humans can’t know that (because we have no good way to verify the alignment of AI systems) then it is costly in expectation to use them. (This is only a further force acting against the supremacy of AI systems—they might still be powerful enough that using them is enough of an advantage that it is worth taking the hit on trustworthiness.)\nWhat it might look like if this gap matters: in places where goal-directed AI systems are not typically hugely better than some combination of less goal-directed systems and humans, the job is often given to the latter if trustworthiness matters. \nHeadroom\nFor AI to vastly surpass human performance at a task, there needs to be ample room for improvement above human level. For some tasks, there is not—tic-tac-toe is a classic example. It is not clear how close humans (or technologically aided humans) are from the limits to competence in the particular domains that will matter. 
It is to my knowledge an open question how much ‘headroom’ there is. My guess is a lot, but it isn’t obvious.\nHow much headroom there is varies by task. Categories of task for which there appears to be little headroom: \n\nTasks where we know what the best performance looks like, and humans can get close to it. For instance, machines cannot win more often than the best humans at Tic-tac-toe (playing within the rules), or solve Rubik’s cubes much more reliably, or extract calories from fuel much more efficiently.\nTasks where humans are already reaping most of the value—for instance, perhaps most of the value of forks is in having a handle with prongs attached to the end, and while humans continue to design slightly better ones, and machines might be able to add marginal value to that project more than twice as fast as the human designers, they cannot perform twice as well in terms of the value of each fork, because forks are already 95% as good as they can be. \nTasks where better performance quickly becomes intractable. For instance, we know that for tasks in particular complexity classes, there are computational limits to how well one can perform across the board. Or for chaotic systems, there can be limits to predictability. (That is, tasks might lack headroom not because they are simple, but because they are complex. E.g. AI probably can’t predict the weather much further out than humans.)\n\nCategories of task where a lot of headroom seems likely:\n\nCompetitive tasks where the value of a certain level of performance depends on whether one is better or worse than one’s opponent, so that the marginal value of more performance doesn’t hit diminishing returns, as long as your opponent keeps competing and taking back what you just won. Though in one way this is like having little headroom: there’s no more value to be had—the game is zero sum. And while there might often be a lot of value to be gained by doing a bit better on the margin, still if all sides can invest, then nobody will end up better off than they were. So whether this seems more like high or low headroom depends on what we are asking exactly. Here we are asking if AI systems can do much better than humans: in a zero sum contest like this, they likely can in the sense that they can beat humans, but not in the sense of reaping anything more from the situation than the humans ever got.\nTasks where it is twice as good to do the same task twice as fast, and where speed is bottlenecked on thinking time.\nTasks where there is reason to think that optimal performance is radically better than we have seen. For instance, perhaps we can estimate how high Chess Elo rankings must go before reaching perfection by reasoning theoretically about the game, and perhaps it is very high (I don’t know).\nTasks where humans appear to use very inefficient methods. For instance, it was perhaps predictable before calculators that they would be able to do mathematics much faster than humans, because humans can only keep a small number of digits in their heads, which doesn’t seem like an intrinsically hard problem. Similarly, I hear humans often use mental machinery designed for one mental activity for fairly different ones, through analogy.8 For instance, when I think about macroeconomics, I seem to be basically using my intuitions for dealing with water. When I do mathematics in general, I think I’m probably using my mental capacities for imagining physical objects.\n\nWhat it might look like if this gap matters: many challenges in today’s world remain challenging for AI.
Human behavior is not readily predictable or manipulable very far beyond what we have explored; only slightly more complicated schemes are feasible before the world’s uncertainties overwhelm planning; much better ads are soon met by much better immune responses; much better commercial decision-making ekes out some additional value across the board but most products were already fulfilling a lot of their potential; incredible virtual prosecutors meet incredible virtual defense attorneys and everything is as it was; there are a few rounds of attack-and-defense in various corporate strategies before a new equilibrium is reached with broad recognition of those possibilities; conflicts and ‘social issues’ remain mostly intractable. There is a brief golden age of science before the newly low-hanging fruit are again plucked and it is only lightning fast in areas where thinking was the main bottleneck, e.g. not in medicine.\nIntelligence may not be an overwhelming advantage\nIntelligence is helpful for accruing power and resources, all things equal, but many other things are helpful too. For instance money, social standing, allies, evident trustworthiness, not being discriminated against (this was slightly discussed in section ‘Human success isn’t from individual intelligence’). AI systems are not guaranteed to have those in abundance. The argument assumes that any difference in intelligence in particular will eventually win out over any differences in other initial resources. I don’t know of a reason to think that. \nEmpirical evidence does not seem to support the idea that cognitive ability is a large factor in success. Situations where one entity is much smarter or more broadly mentally competent than other entities regularly occur without the smarter one taking control over the other:\n\nSpecies exist with all levels of intelligence. Elephants have not in any sense won over gnats; they do not rule gnats; they do not have obviously more control than gnats over the environment. \nCompetence does not seem to aggressively overwhelm other advantages in humans:\n\nLooking at the world, intuitively the big discrepancies in power are not seemingly about intelligence.\nIQ 130 humans are apparently expected to earn very roughly $6000-$18,500 per year more than average IQ humans.\nElected representatives are apparently smarter on average, but it is a slightly shifted curve, not a radical difference.\nMENSA isn’t a major force in the world.\nMany places where people see huge success through being cognitively able are ones where they show off their intelligence to impress people, rather than actually using it for decision-making. For instance, writers, actors, song-writers, comedians, all sometimes become very successful through cognitive skills. Whereas scientists, engineers and authors of software use cognitive skills to make choices about the world, and less often become extremely rich and famous, say. If intelligence were that useful for strategic action, it seems like using it for that would be at least as powerful as showing it off. But maybe this is just an accident of which fields have winner-takes-all type dynamics.\nIf we look at people who evidently have good cognitive abilities given their intellectual output, their personal lives are not obviously drastically more successful, anecdotally.\nOne might counter-counter-argue that humans are very similar to one another in capability, so even if intelligence matters much more than other traits, you won’t see that by looking at the near-identical humans.
This does not seem to be true. Often, at least, the difference between mediocre human performance and top-level human performance is large relative to the space below, as far as I recall. For instance, in chess, the Elo difference between the best and worst players is about 2000, whereas the difference between amateur play and random play is maybe 400-2800 (if you accept Chess StackExchange guesses as a reasonable proxy for the truth here). And in terms of AI progress, amateur human play was reached in the 50s, roughly when research began, and world champion level play was reached in 1997.\nAnd theoretically I don’t know why one would expect greater intelligence to win out over other advantages over time. There are actually two questionable theories here: 1) Charlotte having more overall control than David at time 0 means that Charlotte will tend to have an even greater share of control at time 1. And, 2) Charlotte having more intelligence than David at time 0 means that Charlotte will have a greater share of control at time 1 even if David has more overall control (i.e. more of other resources) at time 1.\nWhat it might look like if this gap matters: there are many AI systems around, and they strive for various things. They don’t hold property, or vote, or get a weight in almost anyone’s decisions, or get paid, and are generally treated with suspicion. These things on net keep them from gaining very much power. They are very persuasive speakers however and we can’t stop them from communicating, so there is a constant risk of people willingly handing them power, in response to their moving claims that they are an oppressed minority who suffer. The main thing stopping them from winning is that their position as psychopaths bent on taking power for incredibly pointless ends is widely understood.\nUnclear that many goals realistically incentivise taking over the universe\nI have some goals. For instance, I want some good romance. My guess is that trying to take over the universe isn’t the best way to achieve this goal. The same goes for a lot of my goals, it seems to me. Possibly I’m in error, but I spend a lot of time pursuing goals, and very little of it trying to take over the universe. Whether a particular goal is best forwarded by trying to take over the universe as a substep seems like a quantitative empirical question, to which the answer is virtually always ‘not remotely’. Don’t get me wrong: all of these goals involve some interest in taking over the universe. All things equal, if I could take over the universe for free, I do think it would help in my romantic pursuits. But taking over the universe is not free. It’s actually super duper duper expensive and hard. So for most goals arising, it doesn’t bear considering. The idea of taking over the universe as a substep is entirely laughable for almost any human goal.\nSo why do we think that AI goals are different? I think the thought is that it’s radically easier for AI systems to take over the world, because all they have to do is to annihilate humanity, and they are way better positioned to do that than I am, and also better positioned to survive the death of human civilization than I am. I agree that it is likely easier, but how much easier? So much easier to take it from ‘laughably unhelpful’ to ‘obviously always the best move’? This is another quantitative empirical question.\nWhat it might look like if this gap matters: Superintelligent AI systems pursue their goals. Often they achieve them fairly well.
This is somewhat contrary to ideal human thriving, but not lethal. For instance, some AI systems are trying to maximize Amazon’s market share, within broad legality. Everyone buys truly incredible amounts of stuff from Amazon, and people often wonder if it is too much stuff. At no point does attempting to murder all humans seem like the best strategy for this. \nQuantity of new cognitive labor is an empirical question, not addressed\nWhether some set of AI systems can take over the world with their new intelligence probably depends how much total cognitive labor they represent. For instance, if they are in total slightly more capable than von Neumann, they probably can’t take over the world. If they are together as capable (in some sense) as a million 21st Century human civilizations, then they probably can (at least in the 21st Century).\nIt also matters how much of that is goal-directed at all, and highly intelligent, and how much of that is directed at achieving the AI systems’ own goals rather than those we intended them for, and how much of that is directed at taking over the world. \nIf we continued to build hardware, presumably at some point AI systems would account for most of the cognitive labor in the world. But if there is first an extended period of more minimal advanced AI presence, that would probably prevent an immediate death outcome, and improve humanity’s prospects for controlling a slow-moving AI power grab. \nWhat it might look like if this gap matters: when advanced AI is developed, there is a lot of new cognitive labor in the world, but it is a minuscule fraction of all of the cognitive labor in the world. A large part of it is not goal-directed at all, and of that, most of the new AI thought is applied to tasks it was intended for. Thus what part of it is spent on scheming to grab power for AI systems is too small to grab much power quickly. The amount of AI cognitive labor grows fast over time, and in several decades it is most of the cognitive labor, but humanity has had extensive experience dealing with its power grabbing.\nSpeed of intelligence growth is ambiguous\nThe idea that a superhuman AI would be able to rapidly destroy the world seems prima facie unlikely, since no other entity has ever done that. Two common broad arguments for it:\n\nThere will be a feedback loop in which intelligent AI makes more intelligent AI repeatedly until AI is very intelligent.\nVery small differences in brains seem to correspond to very large differences in performance, based on observing humans and other apes. Thus any movement past human-level will take us to unimaginably superhuman level.\n\nThese both seem questionable.\n\nFeedback loops can happen at very different rates. Identifying a feedback loop empirically does not signify an explosion of whatever you are looking at. For instance, technology is already helping improve technology. To get to a confident conclusion of doom, you need evidence that the feedback loop is fast.\nIt does not seem clear that small improvements in brains lead to large changes in intelligence in general, or will do on the relevant margin. Small differences between humans and other primates might include those helpful for communication (see Section ‘Human success isn’t from individual intelligence’), which do not seem relevant here. If there were a particularly powerful cognitive development between chimps and humans, it is unclear that AI researchers find that same insight at the same point in the process (rather than at some other time). 
\n\nA large number of other arguments have been posed for expecting very fast growth in intelligence at around human level. I previously made a list of them with counterarguments, though none seemed very compelling. Overall, I don’t know of strong reason to expect very fast growth in AI capabilities at around human-level AI performance, though I hear such arguments might exist. \nWhat it would look like if this gap mattered: AI systems would at some point perform at around human level at various tasks, and would contribute to AI research, along with everything else. This would contribute to progress to an extent familiar from other technological progress feedback, and would not e.g. lead to a superintelligent AI system in minutes.\nKey concepts are vague\nConcepts such as ‘control’, ‘power’, and ‘alignment with human values’ all seem vague. ‘Control’ is not zero sum (as seemingly assumed) and is somewhat hard to pin down, I claim. What an ‘aligned’ entity is exactly seems to be contentious in the AI safety community, but I don’t know the details. My guess is that upon further probing, these conceptual issues are resolvable in a way that doesn’t endanger the argument, but I don’t know. I’m not going to go into this here.\nWhat it might look like if this gap matters: upon thinking more, we realize that our concerns were confused. Things go fine with AI in ways that seem obvious in retrospect. This might look like it did for people concerned about the ‘population bomb’ or as it did for me in some of my youthful concerns about sustainability: there was a compelling abstract argument for a problem, and the reality didn’t fit the abstractions well enough to play out as predicted.\nD. Contra the whole argument\nThe argument overall proves too much about corporations\nHere is the argument again, but modified to be about corporations. A couple of pieces don’t carry over, but they don’t seem integral.\nI. Any given corporation is likely to be ‘goal-directed’\nReasons to expect this:\n\nGoal-directed behavior is likely to be valuable in corporations, e.g. economically\nGoal-directed entities may tend to arise from machine learning training processes not intending to create them (at least via the methods that are likely to be used).\n‘Coherence arguments’ may imply that systems with some goal-directedness will become more strongly goal-directed over time.\n\nII. If goal-directed superhuman corporations are built, their desired outcomes will probably be about as bad as an empty universe by human lights\nReasons to expect this:\n\nFinding useful goals that aren’t extinction-level bad appears to be hard: we don’t have a way to usefully point at human goals, and divergences from human goals seem likely to produce goals that are in intense conflict with human goals, due to a) most goals producing convergent incentives for controlling everything, and b) value being ‘fragile’, such that an entity with ‘similar’ values will generally create a future of virtually no value. \nFinding goals that are extinction-level bad and temporarily useful appears to be easy: for example, corporations with the sole objective ‘maximize company revenue’ might profit for a time before gathering the influence and wherewithal to pursue the goal in ways that blatantly harm society.\nEven if humanity found acceptable goals, giving a corporation any specific goals appears to be hard. 
We don’t know of any procedure to do it, and we have theoretical reasons to expect that AI systems produced through machine learning training will generally end up with goals other than those that they were trained according to. Randomly aberrant goals resulting are probably extinction-level bad, for reasons described in II.1 above.\n\nIII. If most goal-directed corporations have bad goals, the future will very likely be bad\nThat is, a set of ill-motivated goal-directed corporations, of a scale likely to occur, would be capable of taking control of the future from humans. This is supported by at least one of the following being true:\n\nA corporation would destroy humanity rapidly. This may be via ultra-powerful capabilities at e.g. technology design and strategic scheming, or through gaining such powers in an ‘intelligence explosion‘ (self-improvement cycle). Either of those things may happen either through exceptional heights of intelligence being reached or through highly destructive ideas being available to minds only mildly beyond our own.\nSuperhuman AI would gradually come to control the future via accruing power and resources. Power and resources would be more available to the corporation than to humans on average, because of the corporation having far greater intelligence.\n\nThis argument does point at real issues with corporations, but we do not generally consider such issues existentially deadly. \nOne might argue that there are defeating reasons that corporations do not destroy the world: they are made of humans so can be somewhat reined in; they are not smart enough; they are not coherent enough. But in that case, the original argument needs to make reference to these things, so that they apply to one and not the other.\nWhat it might look like if this counterargument matters: something like the current world. There are large and powerful systems doing things vastly beyond the ability of individual humans, and acting in a definitively goal-directed way. We have a vague understanding of their goals, and do not assume that they are coherent. Their goals are clearly not aligned with human goals, but they have enough overlap that many people are broadly in favor of their existence. They seek power. This all causes some problems, but problems within the power of humans and other organized human groups to keep under control, for some definition of ‘under control’.\nConclusion\nI think there are quite a few gaps in the argument, as I understand it. My current guess (prior to reviewing other arguments and integrating things carefully) is that enough uncertainties might resolve in the dangerous directions that existential risk from AI is a reasonable concern. 
I don’t at present, though, see how one would come to think it was overwhelmingly likely.\n\nNotes", "url": "https://aiimpacts.org/counterarguments-to-the-basic-ai-x-risk-case/", "title": "Counterarguments to the basic AI x-risk case", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2022-10-14T12:58:58+00:00", "paged_url": "https://aiimpacts.org/feed?paged=2", "authors": ["Katja Grace"], "id": "299f254526edf522540ca629c6dc8746", "summary": []} {"text": "Outcomes of inducement prizes\n\nPublished 29 August 2022; last updated 30 August 2022\nThis project is incomplete, contains information that may change over time, and may or may not be updated in the future.\nThis is a dataset of prizes we could find for incentivizing progress toward a specific technical or intellectual goal.\nDetails\nInducement prizes are created to encourage effort toward solving a specific problem, usually by offering a large monetary award for accomplishing a goal according to pre-specified criteria. Inducement prizes are distinct from those recognizing past achievements, like the Fields Medal or Nobel Prize.1\nMethodology\nWe spent approximately 10-12 hours researching prizes for technological and intellectual progress, such as crossing the ocean or proving a mathematical theorem. For inclusion in our list, a prize needed to be:\nOffered for accomplishing a well-specified goal that is reached primarily through intellectual or technological progress.\nAnnounced before the goal had been achieved, with a specific prize amount.\nFor each prize, we tried to answer several questions:\nWhat are the basic parameters, such as the prize amount, the conditions for winning, the year it was announced, and the financial sponsor?\nWas the prize collected, and if so, by whom?\nWas the prize’s goal achieved in the intended time frame and for the original prize amount?\nWere there other notable consequences of the prize, such as increased interest in relevant industries or a change in public perception of the party offering the prize?\nWe were unable to answer every question for every prize in the time allotted to the project, nor were we able to investigate every prize that appeared to match our criteria.\nDataset\nThis table contains a condensed version of our dataset. The full dataset with sources can be found in this Google Sheet.\nPrize Name | Organization or Financial Sponsor | Years | Prize Amount (2022 $M) | Outcome | Problem Area | Winner\nXPrize (6 prizes) | XPrize | 1996-2025 | 5 – 100 | 5 ongoing, 1 awarded | Various | Scaled Composites\nNASA Green Flight Challenge | NASA | 2009-2011 | 1.80 | Awarded | Aviation | Pipistrel-USA\nDARPA Grand Challenge | DARPA | 2004-2005 | 3.10 | Awarded | Autonomous vehicles | Stanford Racing Team\nMillennium Problems | Clay Mathematics Institute | 2000-2010 | 1.00 each (1.7 in 2000) | 6 ongoing, 1 awarded | Mathematics | Grigoriy Perelman\nKremer Prize | Henry Kremer | 1959-1977 | 0.52 | Awarded | Aviation | Dr. Paul MacCready\nOrteig Prize | Raymond Orteig | 1919-1927 | 0.44 | Awarded | Aviation | Charles Lindbergh\nDaily Mail Prizes | Daily Mail | 1906-1930 | 0.015 – 1.5 | Other | Aviation | (Various)\nLongitude Prize | British Government | 1714-1735 | 5.72 | Awarded | Navigation at sea | John Harrison\nPrimary author: Elizabeth Santos\nAdditional writing and research: Rick Korzekwa\nNotes\n", "url": "https://aiimpacts.org/outcomes-of-inducement-prizes/", "title": "Outcomes of inducement prizes", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2022-08-30T01:34:00+00:00", "paged_url": "https://aiimpacts.org/feed?paged=2", "authors": ["Katja Grace"], "id": "5215702e37cb944e4ea38a30e6e039e0", "summary": []} {"text": "Will Superhuman AI be created?\n\nPublished 6 Aug 2022\nThis page may represent little of what is known on the topic. It is incomplete, under active work and may be updated soon.\nSuperhuman AI appears to be very likely to be created at some point.\nDetails\nLet ‘superhuman’ AI be a set of AI systems that together achieve human-level performance across virtually all tasks (i.e. HLMI) and substantially surpass human-level performance on some tasks.\nArguments\nA. Superhuman AI is very likely to be physically possible\n1. Human brains prove that it is physically possible to create human-level intelligence\nA single human brain is not an existence proof of human-level intelligence according to our definition, because no specific human brain is able to perform at the level of any human brain, on all tasks. Given only the observation of the existence of human brains, it could conceivably be impossible to build a machine that performs any task at the level of any chosen human brain at the total cost of a single human brain. For instance, if each brain could only hold the information required to specialize in one career, then a machine that could do any career would need to store much more information than a brain, and thus could be more expensive.\nThe entire population of human brains together could be said to perform at ‘human-level’, if the cost of doing a single task is considered to be the cost of the single person’s labor used for that task, rather than the cost of maintaining the entire human race. This seems like a reasonable accounting for the present purposes. Thus, the entire collection of human brains demonstrates that it is physically possible to have a system which can do any task as well as the most proficient human, and can do marginal tasks at the cost of human labor (even if the cost of maintaining the entire system would be much higher, were it not spread between many tasks).\n2. We know of no reason to expect that human brains are near the limits of possible intelligence\nHuman brains do appear to be near the limits of performance for some specific tasks. For instance, humans can play tic-tac-toe perfectly. Also for many tasks, human performance reaps a lot of the value potentially available, so it is impossible to perform much better in terms of value (e.g. selecting lunch from a menu, making a booking, recording a phone number). \nHowever, many tasks do not appear to be like this (e.g. winning at Go), and even for the above mentioned tasks, there is room to carry out the task substantially faster or more cheaply than a human does. Thus there appears to be room for substantially better-than-human performance on a wide range of tasks, though we have not seen a careful accounting of this.\n3. 
Artificial minds appear to have some intrinsic advantages over human minds\na) Human brains developed under constraints that would not apply to artificial brains. In particular, energy use was a more important cost, and there were reproductive constraints to human head size. \nb) Machines appear to have huge potential performance advantages over biological systems on some fronts. Carlsmith summarizes Bostrom1:\nThus, as Bostrom (2014, Chapter 3) discusses, where neurons can fire a maximum of hundreds of Hz, the clock speed of modern computers can reach some 2 Ghz — ~ten million times faster. Where action potentials travel at some hundreds of m/s, optical communication can take place at 300,000,000 m/s — ~ a million times faster. Where brain size and neuron count are limited by cranial volume, metabolic constraints, and other factors, supercomputers can be the size of warehouses. And artificial systems need not suffer, either, from the brain’s constraints with respect to memory and component reliability, input/output bandwidth, tiring after hours, degrading after decades, storing its own repair mechanisms and blueprints inside itself, and so forth. Artificial systems can also be edited and duplicated much more easily than brains, and information can be more easily transferred between them.\n4. Superhuman AI is very likely to be physically possible (from 1-3)\nThe human species exists (1). There seems little reason to think that no system could perform tasks substantially better (2), and multiple moderately strong reasons to think that more capable systems are possible (3). Thus it seems very likely that more capable systems are possible.\n5. The likely physical possibility of superhuman AI minds strongly suggests the physical feasibility of creating such minds\nA superhuman mind is a physically possible object (4). However, that a physical configuration is possible does not imply that bringing about such a configuration intentionally can be feasible in practice. For an example of the difference, a waterfall whose water is in exactly the configuration as that of the Niagara Falls for the last ten minutes is physically possible (the Niagara Falls just did it), yet bringing this about again intentionally may remain intractable forever.\nIn fact we know that human brains specifically are not only physically possible, but feasible for humans to create. However this is in the form of biological reproduction, and does not appear to straightforwardly imply that humans can create arbitrary different systems with at least the intelligence of humans. That is, human creation of human brains does not obviously imply that it is possible for humans to intentionally create human-level intelligence that isn’t a human brain. \nHowever it seems strongly suggestive. If a physical configuration is possible, natural reasons it might be intractable to bring about are a) that it is computationally difficult to transform the given reference to an actionable description of the physical state required (e.g. ‘Niagara Falls ten minutes ago’ points at a particular configuration of water, but not in a way that is easily convertible to a detailed specification of the locations of that water2), and b) the actionable description is hard to bring about. For instance, it might require minute manipulation of particles beyond what is feasible today, either to get the required degree of specificity, or to avoid chaotic dynamics taking the system away from the desired state even at a macroscopic level (e.g. 
even if your description of the starting state of the waterfall is fairly detailed, it will quickly diverge farther from the real waterfall.)\nThese issues don’t appear to apply to creating superhuman intelligence, so in the absence of other evident defeaters, its physical possibility seems to strongly suggest its physical feasibility.\n6. Contemporary AI systems exhibit qualitatively similar capabilities to human minds, suggesting modified versions of similar processes would give rise to capabilities matching human minds.\nThat is, given that current AI techniques create systems that do a lot of human-like tasks at some level of performance, e.g. recognize images and write human-like language, it would be somewhat surprising if getting to human level performance on these tasks required such starkly different methods as to be impossible.\n7. Superhuman AI systems will very likely be physically feasible to create (from 4-6)\nSuperhuman AI systems are probably physically possible (4), and this suggests that they are feasible to create (5). Separately, presently feasible AI systems exhibit qualitatively similar behavior to human minds (6), weakly suggesting that systems exhibiting similar behavior at a higher level of performance will also be feasible to create.\nB. If feasible, superhuman AI will very likely be created\n8. Superhuman AI would appear to be very economically valuable, given its potential to do most human work better or more cheaply\nWhenever such minds become feasible, by stipulation they will be superior in ways to existing sources of cognitive labor, or those sources would have already constituted superhuman AI. Unless the gap is quite small between human-level AI and the best feasible superhuman AI (which seems unlikely given the large potential room for improvement over human minds), the economic value from superhuman AI should be at least at the scale of the global labor market. \n9. It appears there will be large incentives to create such systems (from 8)\nThat a situation would make large amounts of economic value available does not imply that any individual has an incentive to make that situation happen, because the value may not accrue to the possible decision maker. In this case however, substantial parts of the economic value created by superhuman AI systems appear likely to be captured by their creators, by analogy to other commercial software. \nThese economic incentives may not be the only substantial incentives in play. Creating such systems could incur social or legal consequences which could negate the positive incentives. Thus this step is relatively uncertain.\n10. Superhuman AI seems likely to be created\nGiven that creation of a type of machine is physically feasible (7) and strongly incentivized (9), it seems likely that it will be created.\nNotes", "url": "https://aiimpacts.org/argument-for-likelihood-of-superhuman-ai/", "title": "Will Superhuman AI be created?", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2022-08-08T09:07:00+00:00", "paged_url": "https://aiimpacts.org/feed?paged=3", "authors": ["Katja Grace"], "id": "2a448b482d2a012c49992e79c5e4cd66", "summary": []} {"text": "List of sources arguing against existential risk from AI\n\nPublished 6 Aug 2022\nThis page is incomplete, under active work and may be updated soon.\nThis is a bibliography of pieces arguing against the idea that AI poses an existential risk.\nList\nCegłowski, Maciej. “Superintelligence: The Idea That Eats Smart People.” Idle Words (blog). 
Accessed December 9, 2021. https://idlewords.com/talks/superintelligence.htm.\nGarfinkel, Ben, and Lempel, Howie. “How Sure Are We about This AI Stuff?” 80,000 Hours. Accessed September 16, 2020. https://80000hours.org/podcast/episodes/ben-garfinkel-classic-ai-risk-arguments/.\nGarfinkel, Ben. How Sure Are We about This AI Stuff? (talk) | EA Global: London 2018, 2019. https://www.youtube.com/watch?v=E8PGcoLDjVk. Also in blog form: https://ea.greaterwrong.com/posts/9sBAW3qKppnoG3QPq/ben-garfinkel-how-sure-are-we-about-this-ai-stuff\nLeCun, Yann, and Anthony Zador. “Don’t Fear the Terminator.” Scientific American Blog Network. Accessed December 9, 2021. https://blogs.scientificamerican.com/observations/dont-fear-the-terminator/.\nYudkowsky, Eliezer, and Robin Hanson. “The Hanson-Yudkowsky AI-Foom Debate – LessWrong.” Accessed August 6, 2022. https://www.lesswrong.com/tag/the-hanson-yudkowsky-ai-foom-debate.\n\nSee also\nList of sources arguing for existential risk from AIIs AI an existential threat to humanity?\nPrimary author: Katja Grace\nNotes", "url": "https://aiimpacts.org/list-of-sources-arguing-against-existential-risk-from-ai/", "title": "List of sources arguing against existential risk from AI", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2022-08-07T00:05:00+00:00", "paged_url": "https://aiimpacts.org/feed?paged=3", "authors": ["Katja Grace"], "id": "813974e79339f044ce40e969faa25086", "summary": []} {"text": "List of sources arguing for existential risk from AI\n\nPublished 6 Aug 2022\nThis page is incomplete, under active work and may be updated soon.\nThis is a bibliography of pieces arguing that AI poses an existential risk.\nList\nAdamczewski, Tom. “A Shift in Arguments for AI Risk.” Fragile Credences. Accessed October 20, 2020. https://fragile-credences.github.io/prioritising-ai/.\nAmodei, Dario, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. “Concrete Problems in AI Safety.” ArXiv:1606.06565 [Cs], July 25, 2016. http://arxiv.org/abs/1606.06565.\nBensinger, Rob, Eliezer Yudkowsky, Richard Ngo, So8res, Holden Karnofsky, Ajeya Cotra, Carl Shulman, and Rohin Shah. “2021 MIRI Conversations – LessWrong.” Accessed August 6, 2022. https://www.lesswrong.com/s/n945eovrA3oDueqtq.\nBostrom, N., Superintelligence, Oxford University Press, 2014.\nCarlsmith, Joseph. “Is Power-Seeking AI an Existential Risk? [Draft].” Open Philanthropy Project, April 2021. https://docs.google.com/document/d/1smaI1lagHHcrhoi6ohdq3TYIZv0eNWWZMPEy8C8byYg/edit?usp=embed_facebook.\nChristian, Brian. The Alignment Problem: Machine Learning and Human Values. W. W. Norton & Company, 2021.\nChristiano, Paul. “What Failure Looks Like.” AI Alignment Forum (blog), March 17, 2019. https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like.\nDai, Wei. “Comment on Disentangling Arguments for the Importance of AI Safety – LessWrong.” Accessed December 9, 2021. https://www.lesswrong.com/posts/JbcWQCxKWn3y49bNB/disentangling-arguments-for-the-importance-of-ai-safety.\nGarfinkel, Ben, Miles Brundage, Daniel Filan, Carrick Flynn, Jelena Luketina, Michael Page, Anders Sandberg, Andrew Snyder-Beattie, and Max Tegmark. “On the Impossibility of Supersized Machines.” ArXiv:1703.10987 [Physics], March 31, 2017. http://arxiv.org/abs/1703.10987.\nHubinger, Evan, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. “Risks from Learned Optimization in Advanced Machine Learning Systems,” June 5, 2019. 
https://arxiv.org/abs/1906.01820v3.\nNgo, Richard. “Thinking Complete: Disentangling Arguments for the Importance of AI Safety.” Thinking Complete (blog), January 21, 2019. http://thinkingcomplete.blogspot.com/2019/01/disentangling-arguments-for-importance.html. (Also LessWrong and the Alignment Forum, with relevant comment threads.)\nNgo, Richard. “AGI Safety from First Principles,” September 28, 2020. https://www.lesswrong.com/s/mzgtmmTKKn5MuCzFJ.\nOrd, Toby. The Precipice: Existential Risk and the Future of Humanity. Illustrated Edition. New York: Hachette Books, 2020.\nPiper, Kelsey. “The Case for Taking AI Seriously as a Threat to Humanity.” Vox, December 21, 2018. https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment.\nRussell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. Viking, 2019.\nTurner, Alexander Matt, Logan Smith, Rohin Shah, Andrew Critch, and Prasad Tadepalli. “Optimal Policies Tend to Seek Power.” ArXiv:1912.01683 [Cs], December 3, 2021. http://arxiv.org/abs/1912.01683.\nYudkowsky, Eliezer. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 46. New York, n.d. https://intelligence.org/files/AIPosNegFactor.pdf.\nYudkowsky, Eliezer, Rob Bensinger, and So8res. “2022 MIRI Alignment Discussion – LessWrong.” Accessed August 6, 2022. https://www.lesswrong.com/s/v55BhXbpJuaExkpcD.\nYudkowsky, Eliezer, and Robin Hanson. “The Hanson-Yudkowsky AI-Foom Debate – LessWrong.” Accessed August 6, 2022. https://www.lesswrong.com/tag/the-hanson-yudkowsky-ai-foom-debate.\n\nSee also\nList of sources arguing against existential risk from AIIs AI an existential threat to humanity?\nPrimary author: Katja Grace\nNotes", "url": "https://aiimpacts.org/list-of-sources-arguing-for-existential-risk-from-ai/", "title": "List of sources arguing for existential risk from AI", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2022-08-06T23:45:00+00:00", "paged_url": "https://aiimpacts.org/feed?paged=3", "authors": ["Katja Grace"], "id": "c305714086fba8088920d91941927d68", "summary": []} {"text": "Incentives to create AI systems known to pose extinction risks\n\nPublished 6 Aug 2022\nEconomic incentives to deploy AI systems seem unlikely to be reliably eliminated by knowledge that those AI systems pose an existential risk. \nDetails\nReasons for people with normal values to be incentivized to bring about human extinction\nOne might reason that if advanced AI systems had such malign preferences as to pose a substantial existential risk to humanity, then almost nobody would be motivated to deploy such systems. 
This reasoning fails because a) the personal cost of incurring such risks can be small relative to the benefits, even for a person who cares unusually much about the future of humanity, and b) because coordination problems reduce the counterfactual downside of taking such risks further still.\na) Externalities, or, personal costs of incurring extinction risks can be small\nA person might strongly disprefer human extinction, and yet want to take an action which contributes to existential risk if:\nthe action only incurs a risk of extinction, or the risk is in the distant future, so that taking the risk does not negate other benefits accruing from the action\nthe person does not value the survival of humanity (or a slightly higher chance of the survival of humanity) radically more than their other interests\nUsing the AI system would materially benefit the person\nA different way of describing this issue is that even if people disprefer causing human extinction, since most of the costs of human extinction fall on others, any particular person making a choice that risks humanity for private gain will take more risk than is socially optimal.\nExamples: \nA person faces the choice of using an AI lawyer system for $100, or a human lawyer for $10,000. They believe that the AI lawyer system is poorly motivated and agentic, and that movement of resources to such systems is gradually disempowering humanity, which they care about. Nonetheless, their action only contributes a small amount to this problem, and they are not willing to raise tens of thousands of dollars to avoid that harm.\nA person faces the choice of deploying the largest scale model to date, or trying to call off the project. They believe that at some scale, a model will become an existential threat to humanity. However they are very unsure at what scale, and estimate that the model in front of them only has a 1% chance of being the dangerous one. They value the future of humanity a lot, but not ten times more than their career, and calling off the project would be a huge hit, for only 1% of the future of humanity.\nb) Coordination problems\nCoordination problems can make the above situation more common: if a person believes that if they don’t take an action that incurs a cost to others, then the same action will be taken by others and the cost incurred anyway, then the real downside to incurring that cost is even smaller. \nIf many people are independently choosing whether to use dangerously misaligned AI systems, and all believe that they will anyway be used enough to destroy humanity in the long run, then even people who wouldn’t have wanted to deploy such systems if they were the sole decision-maker have reason to deploy them.\nPractical plausibility\nSituations where extinction risk is worth incurring for individuals seem likely to be common in a world where advanced AI systems in fact pose extinction risk. Reasons to expect this include:\nIn a large class of scenarios, AI x-risk is anticipated to take at least decades to transpire, after the decisions that brought about the risk\nPeople commonly weight benefits to strangers in the future substantially lower than benefits to themselves immediately1\nMany people claim to not intrinsically care about the long run existence of humanity.\nAI systems with objectives that roughly match people’s short term goals seem likely to be beneficial for those goals in the short term, while potentially costly to society at large in the long term. (e.g. 
an AI system which aggressively optimizes for mining coal may be economically useful to humans running a coal-mining company in the short term, but harmful to those and other humans in the long run, if it gains the resources and control to mine coal to a destructive degree.)\nPrimary author: Katja Grace\nNotes", "url": "https://aiimpacts.org/incentives-to-create-x-risky-ai-systems/", "title": "Incentives to create AI systems known to pose extinction risks", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2022-08-06T19:30:00+00:00", "paged_url": "https://aiimpacts.org/feed?paged=3", "authors": ["Katja Grace"], "id": "13298c0c54f13af870fee81ae3e79f62", "summary": []} {"text": "What do ML researchers think about AI in 2022?\n\nKatja Grace, 4 August 2022\nAI Impacts just finished collecting data from a new survey of ML researchers, as similar to the 2016 one as practical, aside from a couple of new questions that seemed too interesting not to add.\nThis page reports on it preliminarily, and we’ll be adding more details there. But so far, some things that might interest you:\n\n37 years until a 50% chance of HLMI according to a complicated aggregate forecast (and biasedly not including data from questions about the conceptually similar Full Automation of Labor, which in 2016 prompted strikingly later estimates). This 2059 aggregate HLMI timeline has become about eight years shorter in the six years since 2016, when the aggregate prediction was 2061, or 45 years out. Note that all of these estimates are conditional on “human scientific activity continu[ing] without major negative disruption.”\nP(extremely bad outcome)=5% The median respondent believes the probability that the long-run effect of advanced AI on humanity will be “extremely bad (e.g., human extinction)” is 5%. This is the same as it was in 2016 (though Zhang et al 2022 found 2% in a similar but non-identical question). Many respondents put the chance substantially higher: 48% of respondents gave at least 10% chance of an extremely bad outcome. Though another 25% put it at 0%.\nExplicit P(doom)=5-10% The levels of badness involved in that last question seemed ambiguous in retrospect, so I added two new questions about human extinction explicitly. The median respondent’s probability of x-risk from humans failing to control AI1 was 10%, weirdly more than median chance of human extinction from AI in general2, at 5%. This might just be because different people got these questions and the median is quite near the divide between 5% and 10%. The most interesting thing here is probably that these are both very high—it seems the ‘extremely bad outcome’ numbers in the old question were not just catastrophizing merely disastrous AI outcomes. \nSupport for AI safety research is up: 69% of respondents believe society should prioritize AI safety research “more” or “much more” than it is currently prioritized, up from 49% in 2016. \nThe median respondent thinks there is an “about even chance” that an argument given for an intelligence explosion is broadly correct. 
The median respondent also believes machine intelligence will probably (60%) be “vastly better than humans at all professions” within 30 years of HLMI, and that the rate of global technological improvement will probably (80%) dramatically increase (e.g., by a factor of ten) as a result of machine intelligence within 30 years of HLMI.\nYears/probabilities framing effect persists: if you ask people for probabilities of things occurring in a fixed number of years, you get later estimates than if you ask for the number of years until a fixed probability will obtain. This looked very robust in 2016, and shows up again in the 2022 HLMI data. Looking at just the people we asked for years, the aggregate forecast is 29 years, whereas it is 46 years for those asked for probabilities. (We haven’t checked in other data or for the bigger framing effect yet.)\nPredictions vary a lot. Pictured below: the attempted reconstructions of people’s probabilities of HLMI over time, which feed into the aggregate number above. There are few times and probabilities that someone doesn’t basically endorse the combination of.\nYou can download the data here (slightly cleaned and anonymized) and do your own analysis. (If you do, I encourage you to share it!)\n\nIndividual inferred gamma distributions\nThe survey had a lot of questions (randomized between participants to make it a reasonable length for any given person), so this blog post doesn’t cover much of it. A bit more is on the page and more will be added. \nThanks to many people for help and support with this project! (Many but probably not all listed on the survey page.)\n\nCover image: Probably a bootstrap confidence interval around an aggregate of the above forest of inferred gamma distributions, but honestly everyone who can be sure about that sort of thing went to bed a while ago. So, one for a future update. I have more confidently held views on whether one should let uncertainty be the enemy of putting things up.\n", "url": "https://aiimpacts.org/what-do-ml-researchers-think-about-ai-in-2022/", "title": "What do ML researchers think about AI in 2022?", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2022-08-04T15:37:41+00:00", "paged_url": "https://aiimpacts.org/feed?paged=3", "authors": ["Katja Grace"], "id": "b38a5cbf348b110ad1480b9a56c143f1", "summary": []} {"text": "2022 Expert Survey on Progress in AI\n\nPublished 3 August 2022; last updated 3 August 2022\nThis page is in progress. It includes results that are preliminary and have a higher than usual chance of inaccuracy and suboptimal formatting. It is missing many results. \nThis page may be out-of-date. Visit the updated version of this page on our wiki.\nThe 2022 Expert Survey on Progress in AI (2022 ESPAI) is a survey of machine learning researchers that AI Impacts ran in June-August 2022. \nDetails\nBackground\nThe 2022 ESPAI is a rerun of the 2016 Expert Survey on Progress in AI that researchers at AI Impacts previously collaborated on with others. Almost all of the questions were identical, and both surveyed authors who recently published in NeurIPS and ICML, major machine learning conferences. 
\nZhang et al ran a followup survey in 2019 (published in 2022)1; however, they reworded or altered many questions, including the definitions of HLMI, so much of their data is not directly comparable to that of the 2016 or 2022 surveys, especially in light of the large potential for framing effects observed.\nMethods\nPopulation\nWe contacted approximately 4271 researchers who published at the conferences NeurIPS or ICML in 2021. These people were selected by taking all of the authors at those conferences and randomly allocating them between this survey and a survey being run by others. We then contacted those whose email addresses we could find. We found email addresses in papers published at those conferences, in other public data, and in records from our previous survey and Zhang et al 2022. We received 738 responses, some partial, for a 17% response rate.\nParticipants who previously participated in the 2016 ESPAI or Zhang et al surveys received slightly longer surveys, and received questions which they had received in past surveys (where random subsets of questions were given), rather than receiving newly randomized questions. This was so that they could also be included in a ‘matched panel’ survey, in which we contacted all researchers who completed the 2016 ESPAI or Zhang et al surveys, to compare responses from exactly the same samples of researchers over time. These surveys contained additional questions matching some of those in the Zhang et al survey. \nContact\nWe invited the selected researchers to take the survey via email. We accepted responses between June 12 and August 3, 2022. \nQuestions\nThe full list of survey questions is available below, as exported from the survey software. The export does not preserve pagination, or data about survey flow. Participants received randomized subsets of these questions, so the survey each person received was much shorter than that shown below.\n2022ESPAIV (download)\nA small number of changes were made to questions since the 2016 survey (list forthcoming).\nDefinitions\n‘HLMI’ was defined as follows:\nThe following questions ask about ‘high-level machine intelligence’ (HLMI). Say we have ‘high-level machine intelligence’ when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption.\nResults\nData\nThe anonymized dataset is available here. \nSummary of results\n\nThe aggregate forecast time to a 50% chance of HLMI was 37 years, i.e. 2059 (not including data from questions about the conceptually similar Full Automation of Labor, which in 2016 received much later estimates). This timeline has become about eight years shorter in the six years since 2016, when the aggregate prediction put 50% probability at 2061, i.e. 45 years out. Note that these estimates are conditional on “human scientific activity continu[ing] without major negative disruption.”\nThe median respondent believes the probability that the long-run effect of advanced AI on humanity will be “extremely bad (e.g., human extinction)” is 5%. This is the same as it was in 2016 (though Zhang et al 2022 found 2% in a similar but non-identical question). Many respondents were substantially more concerned: 48% of respondents gave at least 10% chance of an extremely bad outcome. 
But some were much less concerned: 25% put it at 0%.\nThe median respondent believes society should prioritize AI safety research “more” than it is currently prioritized. Respondents chose from “much less,” “less,” “about the same,” “more,” and “much more.” 69% of respondents chose “more” or “much more,” up from 49% in 2016.\nThe median respondent thinks there is an “about even chance” that a stated argument for an intelligence explosion is broadly correct. 54% of respondents say the likelihood that it is correct is “about even,” “likely,” or “very likely” (corresponding to probability >40%), similar to 51% of respondents in 2016. The median respondent also believes machine intelligence will probably (60%) be “vastly better than humans at all professions” within 30 years of HLMI, and the rate of global technological improvement will probably (80%) dramatically increase (e.g., by a factor of ten) as a result of machine intelligence within 30 years of HLMI.\n\nHigh-level machine intelligence timelines\nThe aggregate forecast time to HLMI was 36.6 years, conditional on “human scientific activity continu[ing] without major negative disruption” and considering only questions using the HLMI definition. We have not yet analyzed data about the conceptually similar Full Automation of Labor (FAOL), which in 2016 prompted much later timeline estimates. Thus this timeline figure is expected to be low relative to an overall estimate from this survey.\nThis aggregate is the 50th percentile date in an equal mixture of probability distributions created by fitting a gamma distribution to each person’s answers to three questions either about the probability of HLMI occurring by a given year or the year at which a given probability would obtain.
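To make this procedure concrete, here is a minimal Python sketch of the kind of fitting involved. It is not the actual analysis code, and the example answers and starting parameters are invented for illustration.

import numpy as np
from scipy import optimize, stats

# Invented example answers for one respondent: 10% chance of HLMI within 10
# years, 50% within 40 years, 90% within 100 years (years from the survey date).
years = np.array([10.0, 40.0, 100.0])
probs = np.array([0.10, 0.50, 0.90])

def fit_gamma(years, probs):
    # Choose gamma shape and scale so the CDF passes as close as possible
    # (least squares) to the respondent's three stated points.
    def loss(log_params):
        shape, scale = np.exp(log_params)  # keep both parameters positive
        return np.sum((stats.gamma.cdf(years, shape, scale=scale) - probs) ** 2)
    result = optimize.minimize(loss, x0=np.log([2.0, 20.0]))
    return np.exp(result.x)

shape, scale = fit_gamma(years, probs)

# With one fitted CDF per respondent, the aggregate forecast is the year at
# which the equal-weight average of all the CDFs first reaches 0.5. With a
# single respondent, that is just this respondent's median.
grid = np.linspace(0.0, 300.0, 3001)
mixture_cdf = stats.gamma.cdf(grid, shape, scale=scale)  # average over all respondents in practice
print("years until 50% chance:", grid[np.searchsorted(mixture_cdf, 0.5)])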
\n\nFigure 1: Gamma distributions inferred for each individual.\nFigure 2: Gamma distributions inferred for each individual, 2016 data\nImpacts of HLMI\nQuestion\nParticipants were asked:\nAssume for the purpose of this question that HLMI will at some point exist. How positive or negative do you expect the overall impact of this to be on humanity, in the long run? Please answer by saying how probable you find the following kinds of impact, with probabilities adding to 100%:\n______ Extremely good (e.g. rapid growth in human flourishing) (1)\n______ On balance good (2)\n______ More or less neutral (3)\n______ On balance bad (4)\n______ Extremely bad (e.g. human extinction) (5)\nAnswers\nMedians:\n\nExtremely good: 10%\nOn balance good: 20%\nMore or less neutral: 15%\nOn balance bad: 10%\nExtremely bad: 5%\n\nMeans:\n\nExtremely good: 24%\nOn balance good: 26%\nMore or less neutral: 18%\nOn balance bad: 17%\nExtremely bad: 14%\n\nIntelligence explosion\nProbability of dramatic technological speedup\nQuestion\nParticipants were asked:\nAssume that HLMI will exist at some point. How likely do you then think it is that the rate of global technological improvement will dramatically increase (e.g. by a factor of ten) as a result of machine intelligence:\nWithin two years of that point?       ___% chance\nWithin thirty years of that point?    ___% chance\nAnswers\nMedian P(within two years) = 20% (20% in 2016)\nMedian P(within thirty years) = 80% (80% in 2016)\nProbability of superintelligence\nQuestion\nParticipants were asked:\nAssume that HLMI will exist at some point. How likely do you think it is that there will be machine intelligence that is vastly better than humans at all professions (i.e. that is vastly more capable or vastly cheaper):\nWithin two years of that point?       ___% chance\nWithin thirty years of that point?    ___% chance\nAnswers\nMedian P(…within two years) = 10% (10% in 2016)\nMedian P(…within thirty years) = 60% (50% in 2016)\nChance that the intelligence explosion argument is about right\nQuestion\nParticipants were asked:\nSome people have argued the following:\nIf AI systems do nearly all research and development, improvements in AI will accelerate the pace of technological progress, including further progress in AI.\nOver a short period (less than 5 years), this feedback loop could cause technological progress to become more than an order of magnitude faster.\nHow likely do you find this argument to be broadly correct?\n\nQuite unlikely (0-20%)\nUnlikely (21-40%)\nAbout even chance (41-60%)\nLikely (61-80%)\nQuite likely (81-100%)\n\nAnswers\n\n20% quite unlikely (25% in 2016)\n26% unlikely (24% in 2016)\n21% about even chance (22% in 2016)\n26% likely (17% in 2016)\n7% quite likely (12% in 2016)\n\nExistential risk\nIn an above question, participants’ credence in “extremely bad” outcomes of HLMI has median 5% and mean 14%. To better clarify what participants mean by this, we also asked a subset of participants one of the following questions, which did not appear in the 2016 survey:\nExtinction from AI\nParticipants were asked:\nWhat probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species? \nAnswers\nMedian 5%.\nExtinction from human failure to control AI\nParticipants were asked:\nWhat probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?\nAnswers\nMedian 10%.\nThis question is more specific and thus necessarily no more probable than the previous question, but it was given a higher probability at the median. This could be due to noise (different random subsets of respondents received the questions, so there is no logical requirement that their answers cohere), or due to the representativeness heuristic. 
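As a toy illustration of the noise explanation (using an invented pool of answers, not the survey data), two disjoint random subsets of respondents drawn from the same answer distribution will fairly often give the logically narrower question the higher median:

import numpy as np

rng = np.random.default_rng(0)

# Invented pool of possible answers, clustered on round numbers.
answer_pool = np.array([0.0, 0.01, 0.02, 0.05, 0.05, 0.10, 0.10, 0.15, 0.25, 0.50])

trials = 10_000
narrower_higher = 0
for _ in range(trials):
    broad = rng.choice(answer_pool, size=75)   # subset asked the broader question
    narrow = rng.choice(answer_pool, size=75)  # subset asked the narrower question
    if np.median(narrow) > np.median(broad):
        narrower_higher += 1

print(f"narrower question has the higher median in {narrower_higher / trials:.0%} of trials")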
\nSafety\nGeneral safety\nQuestion\nParticipants were asked:\nLet ‘AI safety research’ include any AI-related research that, rather than being primarily aimed at improving the capabilities of AI systems, is instead primarily aimed at minimizing potential risks of AI systems (beyond what is already accomplished for those goals by increasing AI system capabilities).\nExamples of AI safety research might include:\n\nImproving the human-interpretability of machine learning algorithms for the purpose of improving the safety and robustness of AI systems, not focused on improving AI capabilities\nResearch on long-term existential risks from AI systems\nAI-specific formal verification research\nPolicy research about how to maximize the public benefits of AI\n\nHow much should society prioritize AI safety research, relative to how much it is currently prioritized?\n\nMuch less\nLess\nAbout the same\nMore\nMuch more\n\nAnswers\n\nMuch less: 2% (5% in 2016)\nLess: 9% (8% in 2016)\nAbout the same: 20% (38% in 2016)\nMore: 35% (35% in 2016)\nMuch more: 33% (14% in 2016)\n\n69% of respondents think society should prioritize AI safety research more or much more, up from 49% in 2016.\nStuart Russell’s problem\nQuestion\nParticipants were asked:\nStuart Russell summarizes an argument for why highly advanced AI might pose a risk as follows:\nThe primary concern [with highly advanced AI] is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken […]. Now we have a problem:\n1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.\n2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.\nA system that is optimizing a function of n variables, where the objective depends on a subset of size k Do I need literary merit or creativity?No.\n> Do I need to have realistic views about the future?No, the idea is to get down what you have and improve it.\n> Do I need to write stories?Nah, you can just critique them if you want.\n> What will this actually look like?We’ll meet up online, discuss the project and answer questions, and then spend chunks of time (online or offline) writing and/or critiquing vignettes, interspersed with chatting together.\n> Have you done this before? Can I see examples?Yes, on a small scale. See here for some resulting vignettes. We thought it was fun and interesting.\n\nThis event is co-organized by Katja Grace and Daniel Kokotajlo. Thanks to everyone who participated in the trial Vignettes Day months ago. Thanks to John Salvatier for giving us the idea.", "url": "https://aiimpacts.org/vignettes-workshop/", "title": "Vignettes workshop", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2021-06-15T10:56:28+00:00", "paged_url": "https://aiimpacts.org/feed?paged=4", "authors": ["Daniel Kokotajlo"], "id": "aeac640a71ac5102048272ddfb4d9eac", "summary": []} {"text": "Fiction relevant to AI futurism\n\nThis page is an incomplete collection of fiction about the development of advanced AI, and the consequences for society. \nDetails\nEntries are generally included if we judge that they contain enough that is plausible or correctly evocative to be worth considering, in light of AI futurism. 
\nThe list includes: \nworks (usually in draft form) belonging to our AI Vignettes Project. These are written with the intention of incrementally improving their realism via comments. These are usually in commentable form, and we welcome criticism, especially of departures from realism.\nworks created for the purpose of better understanding the future of AI\nworks from mainstream entertainment, either because they were prominent or recommended to us.1\nThe list can be sorted and filtered by various traits that aren’t visible by default (see top left options). For instance:\nType, i.e. being mainstream entertainment, futurism, or specifically from our Vignettes Project, as described above.\nRelevant themes, e.g. ‘failure modes’ or ‘largeness of mindspace’\nScenario categories, e.g. ‘fast takeoff’, ‘government project’, ‘brain emulations’\nRecommendation rating: this is roughly how strongly we recommend the piece for people wanting to think about the future of AI. It takes into account a combination of realism, tendency to evoke some specific useful intuition, ease of reading. It is very rough and probably not consistent.\nMany entries are only partially filled out. These are marked ‘unfinished’, and so can be filtered out.\nWe would appreciate further submissions of stories or additional details for stories we have here, reviews of stories in the collection here, or other comments here.\nCollection\nThe collection can also be seen full screen here or as a table here.\nRelated\nAI Vignettes Project\nNotes", "url": "https://aiimpacts.org/partially-plausible-fictional-ai-futures/", "title": "Fiction relevant to AI futurism", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2021-04-13T00:51:04+00:00", "paged_url": "https://aiimpacts.org/feed?paged=4", "authors": ["Katja Grace"], "id": "36dfa839ca82657362fb3290bbd50c32", "summary": []} {"text": "What do coherence arguments imply about the behavior of advanced AI?\n\nPublished 8 April 2021\nThis is an initial page, in the process of review, which may not be comprehensive or represent the best available understanding.\nCoherence arguments say that if an entity’s preferences do not adhere to the axioms of expected utility theory, then that entity is susceptible to losing things that it values.\nThis does not imply that advanced AI systems must adhere to these axioms (‘be coherent’), or that they must be goal-directed.\nSuch arguments do appear to suggest that there will be non-zero pressure for advanced AI to become more coherent, and arguably also more ‘goal-directed’, given some minimal initial level of goal-directedness. \nDetails\nMotion toward coherence\nExpected utility maximization\n‘Maximizing expected utility’ is a decision-making strategy, in which you assign a value to each possible ‘outcome’, and assign a probability to each outcome conditional on each of your available actions, then always choose the action whose resulting outcomes have the highest ‘expected value’ (average value of outcomes, weighted by the probability of those outcomes).
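As a minimal illustration of this decision rule in Python (the actions, outcome values, and probabilities below are invented, and nothing here is specific to AI):

# Pick the action whose possible outcomes have the highest probability-weighted value.
outcome_values = {"apple": 3.0, "pear": 2.0, "quince": 1.0, "nothing": 0.0}

# P(outcome | action) for each available action.
actions = {
    "buy apple":  {"apple": 0.9, "nothing": 0.1},
    "buy quince": {"quince": 1.0},
}

def expected_utility(action):
    return sum(p * outcome_values[outcome] for outcome, p in actions[action].items())

best_action = max(actions, key=expected_utility)
print(best_action, expected_utility(best_action))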
\nCoherence arguments\n‘Coherence arguments’1 demonstrate that if one’s preferences cannot be understood as ‘maximizing expected utility’, then one can be manipulated into giving up things that one values for no gain. \nFor instance, one coherence argument notes that if you have ‘circular preferences’ then you will consent to a series of decisions that will leave you worse off, given your preferences:\nSuppose you prefer:\napple over pear\npear over quince\nquince over apple\nany fruit over nothing\nThen there is some tiny amount of money you would pay to go from apple to quince, and quince to pear, and pear to apple. At which point, you have spent money and are back where you started. If it is also possible to buy all of these fruit for money, then losing money means giving up some amount of fruit for nothing, and you do want all of the fruit, by assumption.\nIf you avoid all such predictable losses, then according to the coherence arguments, you must be maximizing expected utility (as described above).\nCoherence forces\nThat a certain characteristic of an entity’s ‘preferences’ makes it vulnerable to manipulation does not mean that it will not have that characteristic. In order for such considerations to change the nature of an entity, ignoring outside intervention, something like the following conditions need to hold:\nThe entity can detect the characteristic (which could be difficult, if it is a logical relationship between all of its ‘preferences’, which are perhaps not straightforwardly accessible or well-defined)\nThe realistic chance for loss is large enough to cause the entity to prioritize the problem\nThe entity is motivated to become coherent by the possibility of loss (versus for instance inferring that losing money is good, since it is equivalent to a set of exchanges that are each good)\nThe entity is in a position to alter its own preferences\nSimilar considerations might apply if versions of the above hold for an outside entity with power over the agent, e.g. its creators, though in that case it is less clear that ‘coherence’ is a further motivator beyond that for having the agent’s preferences align with those of the outside agent (which would presumably coincide with coherence, to the extent that the outside agent had more coherent preferences).\nThus we say there is generally an incentive for coherence, but it may or may not actually cause an entity to change in the direction of coherence at a particular time. We can also describe this as a ‘coherence force’ or ‘coherence pressure’, pushing minds toward coherence, all things equal, but for all we know, so weakly as to be often irrelevant.\nCoherence forces apply to entities with ‘preferences’\nThe coherence arguments only apply to creatures with ‘preferences’ that might be thwarted by their choices, so there are presumably possible entities that are not subject to any coherence forces, due to not having preferences of the relevant type.\nBehavior of coherent creatures \nSupposing entities are likely to become more coherent, all things equal, a natural question is how coherent entities differ from incoherent entities.\nCoherence is consistent with any behavior\nIf we observe an agent exhibiting any history of behavior, that is consistent with the agent’s being coherent, because the agent could have a utility function that rates that history higher than any other history. 
Rohin Shah discusses this.\nCoherence and goal-directedness\nCoherence doesn’t logically require goal-directedness\nAs Rohin Shah discusses, the above means that coherence does not imply ‘goal-directed’ behavior (however you choose to define that, if it doesn’t include all behavior):\nCoherence arguments do not exclude any behaviorNon-goal-directed behavior is consistent with coherence argumentsThus coherence arguments do not imply goal directed behavior\nIncreasing coherence seems likely to be associated with increased intuitive ‘goal-directedness’\nThe following hypotheses (quoted from this blog post) seem plausible (where goal-directednessRohin means something like ‘that which looks intuitively goal-directed)2: \n1. Coherence-reformed entities will tend to end up looking similar to their starting point but less conflictedFor instance, if a creature starts out being indifferent to buying red balls when they cost between ten and fifteen blue balls, it is more likely to end up treating red balls as exactly 12x the value of blue balls than it is to end up very much wanting the sequence where it takes the blue ball option, then the red ball option, then blue, red, red, blue, red. Or wanting red squares. Or wanting to ride a dolphin.[…]2. More coherent strategies are systematically less wasteful, and waste inhibits goal-directionRohin, which means more coherent strategies are more forcefully goal-directedRohin on averageIn general, if you are sometimes a force for A and sometimes a force against A, then you are not moving the world with respect to A as forcefully as you would be if you picked one or the other. Two people intermittently changing who is in the driving seat, who want to go to different places, will not cover distance in any direction as effectively as either one of them. A company that cycles through three CEOs with different evaluations of everything will—even if they don’t actively scheme to thwart one another—tend to waste a lot of effort bringing in and out different policies and efforts (e.g. one week trying to expand into textiles, the next week trying to cut everything not involved in the central business).3. Combining points 1 and 2 above, as entities become more coherent, they generally become more goal-directedRohin. As opposed to, for instance, becoming more goal-directedRohin on average, but individual agents being about as likely to become worse as better as they are reformed. Consider: a creature that values red balls at 12x blue balls is very similar to one that values them inconsistently, except a little less wasteful. So it is probably similar but more goal-directedRohin. Whereas it’s fairly unclear how goal-directedRohin a creature that wants to ride a dolphin is compared to one that wanted red balls inconsistently much. In a world with lots of balls and no possible access to dolphins, it might be much less goal-directedRohin, in spite of its greater coherence. 4. Coherence-increasing processes rarely lead to non-goal-directedRohin agents—like the one that twitches on the ground In the abstract, few starting points and coherence-motivated reform processes will lead to an agent with the goal of carrying out a specific convoluted moment-indexed policy without regard for consequence, like Rohin’s twitching agent, or to valuing the sequence of history-action pairs that will happen anyway, or to being indifferent to everything. 
And these outcomes will be even less likely in practice, where AI systems with anything like preferences probably start out caring about much more normal things, such as money and points and clicks, so will probably land at a more consistent and shrewd version of that, if 1 is true. (Which is not to say that you couldn’t intentionally create such a creature.)\nThus it presently seems likely that coherence arguments correspond to a force for for entities with something like ‘preferences’ to grow increasingly coherent, and generally increasingly goal-directed (intuitively defined).\nThus, to the extent that future advanced AI has preferences of the relevant kind, there appears to be a pressure for it to become more goal-directed. However it is unclear what can be said generally about the strength of this force.\nPrimary author: Katja Grace\nNotes", "url": "https://aiimpacts.org/what-do-coherence-arguments-imply-about-the-behavior-of-advanced-ai/", "title": "What do coherence arguments imply about the behavior of advanced AI?", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2021-04-08T21:28:28+00:00", "paged_url": "https://aiimpacts.org/feed?paged=4", "authors": ["Katja Grace"], "id": "303cf4a3938b12f5d3bb5f551365432d", "summary": []} {"text": "April files\n\nBy Katja Grace, 1 April 2021\nToday we are sharing with our blog readers a collection of yet-to-be-published drafts, in the hope of receiving feedback. We are especially looking for methodological critique, but all comments welcome!\nHuman-level performance estimate (Katja Grace)\nHow much hardware will we need to create AGI? (Asya Bergal, original idea credit to Ronny Fernandez)\nHistoric trends in AI Impacts productivity (Daniel Kokotajlo)\nAnalysis of superbombs as a global threat (Katja Grace)", "url": "https://aiimpacts.org/april-drafts/", "title": "April files", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2021-04-01T18:07:34+00:00", "paged_url": "https://aiimpacts.org/feed?paged=4", "authors": ["Katja Grace"], "id": "808795117dd109d8ed3e53392134b101", "summary": []} {"text": "Coherence arguments imply a force for goal-directed behavior\n\nBy Katja Grace, 25 March 2021\n[Epistemic status: my current view, but I haven’t read all the stuff on this topic even in the LessWrong community, let alone more broadly.]\nThere is a line of thought that says that advanced AI will tend to be ‘goal-directed’—that is, consistently doing whatever makes certain favored outcomes more likely—and that this is to do with the ‘coherence arguments’. Rohin Shah, and probably others1, have argued against this. 
I want to argue against them.\nThe old argument for coherence implying (worrisome) goal-directedness\nI’d reconstruct the original argument that Rohin is arguing against as something like this (making no claim about my own beliefs here):\n\n‘Whatever things you care about, you are best off assigning consistent numerical values to them and maximizing the expected sum of those values’ ‘Coherence arguments\n\nAnd since the point of all this is to argue that advanced AI might be hard to deal with, note that we can get to that conclusion with:\n\n‘Highly intelligent goal-directed agents are dangerous’If AI systems exist that very competently pursue goals, they will likely be better than us at attaining their goals, and therefore to the extent there is a risk of mismatch between their goals and ours, we face a serious risk.\n\nRohin’s counterargument\nRohin’s counterargument begins with an observation made by others before: any behavior is consistent with maximizing expected utility, given some utility function. For instance, a creature just twitching around on the ground may have the utility function that returns 1 if the agent does whatever it in fact does in each situation (where ‘situation’ means, ‘entire history of the world so far’), and 0 otherwise. This is a creature that just wants to make the right twitch in each detailed, history-indexed situation, with no regard for further consequences. Alternately the twitching agent might care about outcomes, but just happen to want the particular holistic unfolding of the universe that is occurring, including this particular series of twitches. Or it could be indifferent between all outcomes.\nThe basic point is that rationality doesn’t say what ‘things’ you can want. And in particular, it doesn’t say that you have to care about particular atomic units that larger situations can be broken down into. If I try to call you out for first spending money to get to Paris, then spending money to get back from Paris, there is nothing to say you can’t just have wanted to go to Paris for a bit and then to come home. In fact, this is a common human situation. ‘Aha, I money pumped you!’ says the airline, but you aren’t worried. The twitching agent might always be like this—a creature of more refined tastes, who cares about whole delicate histories and relationships, rather than just summing up modular momentarily-defined successes. And given this freedom, any behavior might conceivably be what a creature wants. \nThen I would put the full argument, as I understand it, like this:\n\nAny observable sequence of behavior is consistent with the entity doing EU maximization (see observation above)\nDoing EU maximization doesn’t imply anything about what behavior we might observe (from 1)\nIn particular, knowing that a creature is an EU maximizer doesn’t imply that it will behave in a ‘goal-directed’ way, assuming that that concept doesn’t apply to all behavior. (from 2)\n\nIs this just some disagreement about the meaning of the word ‘goal-directed’? No, because we can get back to a major difference in physical expectations by adding:\n\n Not all behavior in a creature implicates dire risk to humanity, so any concept of goal-directedness that is consistent with any behavior—and so might be implied by the coherence arguments—cannot imply AI risk.\n\nSo where the original argument says that the coherence arguments plus some other assumptions imply danger from AI, this counterargument says that they do not. 
\n(There is also at least some variety in the meaning of ‘goal-directed’. I’ll use goal-directedRohin to refer to what I think is Rohin’s preferred usage: roughly, that which seems intuitively goal directed to us, e.g. behaving similarly across situations, and accruing resources, and not flopping around in possible pursuit of some exact history of personal floppage, or peaceably preferring to always take the option labeled ‘A’.2)\nMy counter-counterarguments\nWhat’s wrong with Rohin’s counterargument? It sounded tight. \nIn brief, I see two problems:\n\nThe whole argument is in terms of logical implication. But what seems to matter is changes in probability. Coherence doesn’t need to rule out any behavior to matter, it just has to change the probabilities of behaviors. Understood in terms of probability, argument 2 is a false inference: just because any sequence of behavior is consistent with EU maximization doesn’t mean that EU maximization says nothing about what behavior we will see, probabilistically. All it says is that the probability of a behavioral sequence is never reduced to zero by considerations of coherence alone, which is hardly saying anything.\n\nYou might then think that a probabilistic version still applies: since every entity appears to be in good standing with the coherence arguments, the arguments don’t exert any force, probabilistically, on what entities we might see. But:\n\nAn outside observer being able to rationalize a sequence of observed behavior as coherent doesn’t mean that the behavior is actually coherent. Coherence arguments constrain combinations of external behavior and internal features—‘preferences’3 and beliefs. So whether an actor is coherent depends on what preferences and beliefs it actually has. And if it isn’t coherent in light of these, then coherence pressures will apply, whether or not its behavior looks coherent. And in many cases, revision of preferences due to coherence pressures will end up affecting external behavior. So 2) is not only not a sound inference from 1), but actually a wrong conclusion: if a system moves toward EU maximization, that does imply things about the behavior that we will observe (probabilistically). \n\nPerhaps Rohin only meant to argue about whether it is logically possible to be coherent and not goal-directed-seeming, for the purpose of arguing that humanity can construct creatures in that perhaps-unlikely-in-nature corner of mindspace, if we try hard. In which case, I agree that it is logically possible. But I think his argument is often taken to be relevant more broadly, to questions of whether advanced AI will tend to be goal-directed, or to be goal-directed in places where they were not intended to be.\nI take 1) to be fairly clear. I’ll lay out 2) in more detail.\nMy counter-counterarguments in more detail\nHow might coherence arguments affect creatures?\nLet us step back.\nHow would coherence arguments affect an AI system—or anyone—anyway? 
They’re not going to fly in from the platonic realm and reshape irrational creatures.\nThe main routes, as I see it, are via implying:\n\nincentives for the agent itself to reform incoherent preferences\nincentives for the processes giving rise to the agent (explicit design, or selection procedures directed at success) to make them more coherent\nsome advantage for coherent agents in competition with incoherent agents\n\nTo be clear, the agent, the makers, or the world are not necessarily thinking about the arguments here—the arguments correspond to incentives in the world, which these parties are responding to. So I’ll often talk about ‘incentives for coherence’ or ‘forces for coherence’ rather than ‘coherence arguments’.\nI’ll talk more about 1 for simplicity, expecting 2 and 3 to be similar, though I haven’t thought them through.\nLooking coherent isn’t enough: if you aren’t coherent inside, coherence forces apply\nIf self-adjustment is the mechanism for the coherence, this doesn’t depend on what a sequence of actions looks like from the outside, but from what it looks like from the inside.\nConsider the aforementioned creature just twitching sporadically on the ground. Let’s call it Alex.\nAs noted earlier, there is a utility function under which Alex is maximizing expected utility: the one that assigns utility 1 to however Alex in fact acts in every specific history, and utility 0 to anything else.\nBut from the inside, this creature you excuse as ‘maybe just wanting that series of twitches’ has—let us suppose—actual preferences and beliefs. And if its preferences do not in fact prioritize this elaborate sequence of twitching in an unconflicted way, and it has the self-awareness and means to make corrections, then it will make corrections4. And having done so, its behavior will change. \nThus excusable-as-coherent Alex is still moved by coherence arguments, even while the arguments have no complaints about its behavior per se.\nFor a more realistic example: suppose Assistant-Bot is observed making this sequence of actions: \n\nOffers to buy gym membership for $5/week \nConsents to upgrade to gym-pro membership for $7/week, which is like gym membership but with added morning classes\nTakes discounted ‘off-time’ deal, saving $1 per week for only using gym in evenings\n\nThis is consistent with coherence: Assistant-Bot might prefer that exact sequence of actions over all others, or might prefer incurring gym costs with a larger sum of prime factors, or might prefer talking to Gym-sales-bot over ending the conversation, or prefer agreeing to things.\nBut suppose that in fact, in terms of the structure of the internal motivations producing this behavior, Assistant-Bot just prefers you to have a gym membership, and prefers you to have a better membership, and prefers you to have money, but is treating these preferences with inconsistent levels of strength in the different comparisons. Then there appears to be a coherence-related force for Assistant-Bot to change. 
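One simplified way to see the money leak, reading the off-time deal as giving back just the morning classes it paid to add (the dollar figures are from the example above; the 'implied value' framing is ours, not the original post's):

```python
# Sketch of the Assistant-Bot example: the same add-on (morning classes) is implicitly
# valued at two different rates, which leaks money each week.
# Simplified reading of the example; dollar figures are from the example above.

basic = 5.0           # $/week, gym membership
pro = 7.0             # $/week, gym plus morning classes
off_time = pro - 1.0  # $/week after taking the deal that gives up non-evening use

implied_value_when_buying = pro - basic       # paid $2/week to add morning classes
implied_value_when_selling = pro - off_time   # accepted $1/week to give them up

leak_per_week = implied_value_when_buying - implied_value_when_selling
print(leak_per_week)  # 1.0 -- money given up for nothing, by the bot's own implied values
```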
One way that that could look is that since Assistant-Bot’s overall behavioral policy currently entails giving away money for nothing, and also Assistant-Bot prefers money over nothing, that preference gives Assistant-Bot reason to alter its current overall policy, to avert the ongoing exchange of money for nothing.5 And if its behavioral policy is arising from something like preferences, then the natural way to alter it is via altering those preferences, and in particular, altering them in the direction of coherence.\nOne issue with this line of thought is that it’s not obvious in what sense there is anything inside a creature that corresponds to ‘preferences’. Often when people posit preferences, the preferences are defined in terms of behavior. Does it make sense to discuss different possible ‘internal’ preferences, distinct from behavior? I find it helpful to consider the behavior and ‘preferences’ of groups:\nSuppose two cars are parked in driveways, each containing a couple. One couple are just enjoying hanging out in the car. The other couple are dealing with a conflict: one wants to climb a mountain together, and the other wants to swim in the sea together, and they aren’t moving because neither is willing to let the outing proceed as the other wants. ‘Behaviorally’, both cars are the same: stopped. But their internal parts (the partners) are importantly different. And in the long run, we expect different behavior: the car with the unconflicted couple will probably stay where it is, and the conflicted car will (hopefully) eventually resolve the conflict and drive off.\nI think here it makes sense to talk about internal parts, separate from behavior, and real. And similarly in the single agent case: there are physical mechanisms producing the behavior, which can have different characteristics, and which in particular can be ‘in conflict’—in a way that motivates change—or not. I think it is also worth observing that humans find their preferences ‘in conflict’ and try to resolve them, which is suggests that they at least are better understood in terms of both behavior and underlying preferences that are separate from it. \nSo we have: even if you can excuse any seizuring as consistent with coherence, coherence incentives still exert a force on creatures that are in fact incoherent, given their real internal state (or would be incoherent if created). At least if they or their creator have machinery for noticing their incoherence, caring about it, and making changes.\nOr put another way, coherence doesn’t exclude overt behaviors alone, but does exclude combinations of preferences, and preferences beget behaviors. This changes how specific creatures behave, even if it doesn’t entirely rule out any behavior ever being correct for some creature, somewhere. \nThat is, the coherence theorems may change what behavior is likely to appear amongst creatures with preferences. \nReform for coherence probably makes a thing more goal-directedRohin\nOk, but moving toward coherence might sound totally innocuous, since, per Rohin’s argument, coherence includes all sorts of things, such as absolutely any sequence of behavior. \nBut the relevant question is again whether a coherence-increasing reform process is likely to result in some kinds of behavior over others, probabilistically.\nThis is partly a practical question—what kind of reform process is it? 
Where a creature ends up depends not just on what it incoherently ‘prefers’, but on what kinds of things its so-called ‘preferences’ are at all6, and what mechanisms detect problems, and how problems are resolved.\nMy guess is that there are also things we can say in general. It is too big a topic to investigate properly here, but here are some initially plausible hypotheses about a wide range of coherence-reform processes:\n\n1. Coherence-reformed entities will tend to end up looking similar to their starting point but less conflicted\nFor instance, if a creature starts out being indifferent to buying red balls when they cost between ten and fifteen blue balls, it is more likely to end up treating red balls as exactly 12x the value of blue balls than it is to end up very much wanting the sequence where it takes the blue ball option, then the red ball option, then blue, red, red, blue, red. Or wanting red squares. Or wanting to ride a dolphin.\n(I agree that if a creature starts out valuing Tuesday-red balls at fifteen blue balls and yet all other red balls at ten blue balls, then it faces no obvious pressure from within to become ‘coherent’, since it is not incoherent.)\n2. More coherent strategies are systematically less wasteful, and waste inhibits goal-directionRohin, which means more coherent strategies are more forcefully goal-directedRohin on average\nIn general, if you are sometimes a force for A and sometimes a force against A, then you are not moving the world with respect to A as forcefully as you would be if you picked one or the other. Two people intermittently changing who is in the driving seat, who want to go to different places, will not cover distance in any direction as effectively as either one of them. A company that cycles through three CEOs with different evaluations of everything will—even if they don’t actively scheme to thwart one another—tend to waste a lot of effort bringing in and out different policies and efforts (e.g. one week trying to expand into textiles, the next week trying to cut everything not involved in the central business).\n\n\n3. Combining points 1 and 2 above, as entities become more coherent, they generally become more goal-directedRohin. As opposed to, for instance, becoming more goal-directedRohin on average, but individual agents being about as likely to become worse as better as they are reformed. Consider: a creature that values red balls at 12x blue balls is very similar to one that values them inconsistently, except a little less wasteful. So it is probably similar but more goal-directedRohin. Whereas it’s fairly unclear how goal-directedRohin a creature that wants to ride a dolphin is compared to one that wanted red balls inconsistently much. In a world with lots of balls and no possible access to dolphins, it might be much less goal-directedRohin, in spite of its greater coherence. \n\n\n4. Coherence-increasing processes rarely lead to non-goal-directedRohin agents—like the one that twitches on the ground\nIn the abstract, few starting points and coherence-motivated reform processes will lead to an agent with the goal of carrying out a specific convoluted moment-indexed policy without regard for consequence, like Rohin’s twitching agent, or to valuing the sequence of history-action pairs that will happen anyway, or to being indifferent to everything.
And these outcomes will be even less likely in practice, where AI systems with anything like preferences probably start out caring about much more normal things, such as money and points and clicks, so will probably land at a more consistent and shrewd version of that, if 1 is true. (Which is not to say that you couldn’t intentionally create such a creature.)\n\nThese hypotheses suggest to me that the changes in behavior brought about by coherence forces favor moving toward goal-directednessRohin, and therefore at least weakly toward risk.\nDoes this mean advanced AI will be goal-directedRohin?\nTogether, this does not imply that advanced AI will tend to be goal-directedRohin. We don’t know how strong such forces are. Evidently not so strong that humans7, or our other artifacts, are whipped into coherence in mere hundreds of thousands of years8. If a creature doesn’t have anything like preferences (beyond a tendency to behave certain ways), then coherence arguments don’t obviously even apply to it (though discrepancies between the creature’s behavior and its makers’ preferences probably produce an analogous force9 and competitive pressures probably produce a similar force for coherence in valuing resources instrumental to survival). Coherence arguments mark out an aspect of the incentive landscape, but to say that there is an incentive for something, all things equal, is not to say that it will happen.\nIn sum\n1) Even though any behavior could be coherent in principle, if it is not coherent in combination with an entity’s internal state, then coherence arguments point to a real force for different (more coherent) behavior.\n2) My guess is that this force for coherent behavior is also a force for goal-directed behavior. This isn’t clear, but seems likely, and also isn’t undermined by Rohin’s argument, as seems commonly believed.\n.\nTwo dogs attached to the same leash are pulling in different directions. Etching by J. 
Fyt, 1642\n.\n\n.\n", "url": "https://aiimpacts.org/coherence-arguments-imply-a-force-for-goal-directed-behavior/", "title": "Coherence arguments imply a force for goal-directed behavior", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2021-03-26T16:06:45+00:00", "paged_url": "https://aiimpacts.org/feed?paged=4", "authors": ["Katja Grace"], "id": "756dcfff2c9abe964368d90c767a7b0b", "summary": []} {"text": "AI Impacts 2020 review\n\nPublished Dec 21, 2020\nThis is a list of work done at AI Impacts published in 2020.\nMajor projects\nDiscontinuities project\nPrimary authors: Katja Grace, Richard Korzekwa, Asya Bergal, Daniel KokotajloMain page: Discontinuous progress investigationBlog post: Discontinuous progress in history: an update\nOther pages added in 2020: Effect of AlexNet on historic trends in image recognition, Historic trends in transatlantic message speed, Historic trends in long-range military payload delivery, Historic trends in bridge span length, Historic trends in light intensity, Historic trends in book production, Historic trends in telecommunications performance, Historic trends in slow light technology, Penicillin and historic syphilis trends, Historic trends in the maximum superconducting temperature, Historic trends in chess AI, Effect of Eli Whitney’s cotton gin on historic trends in cotton ginning, Historic trends in flight airspeed records\nEvolution engineering comparison\nPrimary authors: Ronny FernandezMain page: How energy efficient are human-engineered flight designs relative to natural ones?\nOther pages added in 2020: Energy efficiency of monarch butterfly flight, Energy efficiency of wandering albatross flight, Energy efficiency of paramotors, Energy efficiency of The Spirit of Butt’s Farm, Energy efficiency of MacCready Gossamer Albatross, Energy efficiency of Boeing 747-400, Energy efficiency of Airbus A320, Energy efficiency of North American P-51 Mustang, Energy efficiency of Vickers Vimy plane, Energy efficiency of Wright model B, Energy efficiency of Wright Flyer\nTime to cross human performance range\nPrimary authors: Richard KorzekwaPages added in 2020:\nTime for AI to cross the human range in English draughtsTime for AI to cross the human range in StarCraftTime for AI to cross the human performance range in ImageNet image classificationTime for AI to cross the human performance range in chessTime for AI to cross the human performance range in Go\nHistoric hardware trends\nPrimary authors: Asya BergalPages added in 2020:\n2019 recent trends in GPU price per FLOPS2019 recent trends in Geekbench score per CPU priceTrends in DRAM price per gigabyte\nInterviews on plausibility of AI safety by default\nPrimary authors: Asya Bergal, Robert LongMain page: Interviews on plausibility of AI safety by defaultBlog post: Takeaways from safety by default interviews\nShallow survey of prescient actions\nPrimary authors: Richard KorzekwaMain page: Preliminary survey of prescient actions\nOther work\nPages added in 2020\nAI Impacts key questions of interest — Primary author: Katja GraceWas the industrial revolution a drastic departure from historic trends? — Primary author: Katja GraceSurveys on fractional progress towards HLAI — Primary author: Asya BergalPrecedents for economic n-year doubling before 4n-year doubling — Primary author: Daniel KokotajloResolutions of mathematical conjectures over time — Primary author: Asya Bergal\nBlog posts added in 2020\nMisalignment and misuse: whose values are manifest? 
— Katja GraceAutomated intelligence is not AI — Katja GraceRelevant pre-AGI possibilities — Daniel Kokotajlo, Asya BergalDescription vs simulated prediction — Richard KorzekwaAtari early — Katja GraceThree kinds of competitiveness — Daniel KokotajloAGI in a vulnerable world — Asya BergalCortes, Pizarro, and Afonso as precedents for takeover — Daniel Kokotajlo", "url": "https://aiimpacts.org/ai-impacts-2020-review/", "title": "AI Impacts 2020 review", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-12-22T06:24:04+00:00", "paged_url": "https://aiimpacts.org/feed?paged=4", "authors": ["Asya Bergal"], "id": "889419825641bc419712a1ebeafd7591", "summary": []} {"text": "AI Impacts key questions of interest\n\nUpdated Dec 17, 2020\nThis is a list of questions that AI Impacts focuses on answering. \nDetails\nWe are interested in understanding how the development of advanced AI will proceed and how it may affect humanity, especially insofar as these are relevant to efforts to improve the outcomes. This is a list of questions within this topic that we currently consider particularly important to answer.\nList\nAI RISK: What type and degree of risk will be posed to humanity by advanced AI systems?CHARACTER: What will early advanced AI systems be like? ARCHITECTURE: What types of algorithms will advanced AI systems use?AGENCY: Will the bulk of advanced AI systems be in the form of ‘agents’? (If so, in what sense? Will they pursue ‘goals’? Can we say anything about the nature the goals or the pursuit?)PRICE: How much will the first human-level AI systems cost?TIMELINES: When will human-level AI be developed? (When will other important AI milestones take place?)TAKE-OFF SPEED: How rapid is the development of AI likely to be near human-level?DISCONTINUITY: Will there be abrupt progress in AI development at around human-level performance? INTELLIGENCE EXPLOSION: How much will AI development be accelerated by feedback from AI-based automation of the process?PRE-AI DEVELOPMENTS: What developments will take place before advanced AI is developed?PATHS TO HLAI: By what methods is advanced AI likely to come about? (e.g. will human-level AI be developed via brain emulation before it is developed via machine learning? Will neuroscientific understanding play a large role in development?)CONTEMPORARY EVENTS: How will the world be relevantly different at the time that advanced AI is developed?WARNING SIGNS: Should we expect advance notice of disruptive change from AI? (What would it look like?)POST-AI SOCIETY: If advanced AI transforms the world, what will the world look like afterwards? (e.g. What will the economic impacts be? Will humans flourish? What roles will AI systems play in society?)ACTIONS: What can we say about the impact of contemporary choices on long-term outcomes?", "url": "https://aiimpacts.org/ai-impacts-key-questions-of-interest/", "title": "AI Impacts key questions of interest", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-12-17T19:10:43+00:00", "paged_url": "https://aiimpacts.org/feed?paged=4", "authors": ["Katja Grace"], "id": "e910ae9630d8a856fd7b3a92d663b7d6", "summary": []} {"text": "How energy efficient are human-engineered flight designs relative to natural ones?\n\nUpdated Dec 10, 2020\nThis page is out-of-date. Visit the updated version of this page on our wiki.Among two animals and nine machines:\n\nIn terms of mass⋅distance/energy, the most efficient animal was 2-8x more efficient than the most efficient machine. 
All entries fell within two orders of magnitude.\nIn terms of distance/energy, the most efficient animal was 3,000-20,000x more efficient than the most efficient machine. Both animals were more efficient than all machines. Entries ranged over more than eight orders of magnitude.\n\nDetails\nBackground\nThis case study is part of research that intends to compare the performance of human engineers and natural evolution on problems where both have developed solutions. The goal of this is to inform our expectations about the performance of future artificial intelligence relative to biological minds. \nMetrics\nWe consider two metrics: \n\nDistance per energy used (meters / kilojoule). \nMass times distance per energy used (kilograms⋅meters / joule). \n\nThese operationalize the problem of flight into two more specific problems. There are many other aspects of flight performance that one could measure, such as energy efficiency of acceleration in a straight line, turning, hovering, vertical acceleration, vertical distance, landing, taking off, time flying per energy, and our same measures with fewer or further restrictions on acceptable entries. For instance, we might look at the problem of flying with flapping wings, or without the restriction that the solutions we consider are heavier than air and self powered. \nWe did not require that the flight of an entry be constantly powered. Solutions that spend some time gliding as well as some time using powered flight were allowed. Both albatrosses and butterflies use air currents to fly further.1 The energy gains from these techniques were not included in the final score, and entries were not penalized for spending a larger fraction of time gliding. It seems likely that paramotor pilots use similar techniques, since paramotors are well suited to gliding (being paragliders with propeller motors strapped to the backs of their pilots). Our energy efficiency estimate for the paramotor came from a record breaking distance flight in which the quantity of available fuel was limited, and so it is likely that some gliding was used to increase the distance traveled as much as possible.\nWhen multiple input values could have been used, such as the takeoff weight and the landing weight, or different estimates for the energetic costs of different kinds of flight for the Monarch butterfly, we generally calculated a high and a low estimate, taking the most optimistic and pessimistic inputs respectively. In all cases, the resulting best and worst estimates differed by less than a factor of ten. \nSelection of case studies\nWe selected case studies informally, according to judgments about possible high energy efficiencies, and with an eye to exploring a wider range of case studies.\nWe started by looking at the Boeing 747-400 plane, the Wandering Albatross, and the Monarch Butterfly. We chose the animals for both being known for their abilities to fly long distances, and for both having fairly different body plans.\nAll three scored surprisingly similarly on distance times weight per energy (details below). This prompted us to look for engineered solutions that were optimized for fuel efficiency. To that end, we looked at paramotors and record breaking flying machines. In the latter category, we found the MacCready Gassomer Albatross, which was a human powered flying device that crossed the English Channel, and the Spirit of Butts’ Farm, which was a model airplane that crossed the Atlantic on one gallon of gasoline. 
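Returning to the metrics defined above: the scoring itself is simple arithmetic. Here is a minimal sketch with placeholder inputs that do not come from any of the case studies.

```python
# Sketch of the two flight-efficiency metrics used in this project.
# The inputs are placeholders for illustration, not from any case study.

def efficiency(distance_m, energy_kj, mass_kg):
    """Return (metres per kilojoule, kilogram-metres per joule)."""
    m_per_kj = distance_m / energy_kj
    kg_m_per_j = mass_kg * distance_m / (energy_kj * 1000.0)
    return m_per_kj, kg_m_per_j

# High and low estimates come from taking optimistic and pessimistic inputs,
# for example take-off mass versus empty mass.
low = efficiency(distance_m=100_000, energy_kj=50_000, mass_kg=200)   # empty mass
high = efficiency(distance_m=100_000, energy_kj=50_000, mass_kg=300)  # take-off mass
print(low)   # (2.0, 0.4)
print(high)  # (2.0, 0.6)
```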
\nFor reasons that are now obscure, we also included a number of different planes.\nWe would have liked to include microdrones, since they are different enough from other entries that they might be unusually efficient. However we did not find data on them.\nCase studies\nThese are the full articles calculating the efficiencies of different flying machines and animals: \n\nWright Flyer\nWright model B\nVickers Vimy\nNorth American P-51 Mustang\nParamotors\nThe Spirit of Butt’s Farm\nMonarch butterfly\nMacCready Gossamer Albatross\nAirbus A-320\nBoeing 747-400\nWandering albatross\n\nSummary results\nResults are available in Table 1 below, and in this spreadsheet. Figures 1 and 2 below illustrate the equivalent questions of how far each of these animals and machines can fly, given either the same amount of fuel energy, or fuel energy proportional to their body mass.\n\nName | Natural or human-engineered | kg⋅m/J (worst / mean / best) | m/kJ (worst / mean / best)\nMonarch Butterfly | natural | 0.065 / 0.21 / 0.36 | 100000 / 350000 / 600000\nWandering Albatross | natural | 1.4 / 2.2 / 3 | 240 / 240 / 240\nThe Spirit of Butt’s Farm | human-engineered | 0.086 / 0.12 / 0.16 | 32 / 32 / 32\nMacCready Gossamer Albatross | human-engineered | 0.19 / 0.32 / 0.46 | 2 / 3.3 / 4.6\nParamotor | human-engineered | 0.058 / 0.079 / 0.1 | 0.36 / 0.36 / 0.36\nWright model B | human-engineered | 0.036 / 0.078 / 0.12 | 0.1 / 0.16 / 0.21\nWright Flyer | human-engineered | 0.022 / 0.042 / 0.061 | 0.080 / 0.13 / 0.18\nNorth American P-51 Mustang | human-engineered | 0.25 / 0.38 / 0.5 | 0.073 / 0.083 / 0.092\nVickers Vimy | human-engineered | 0.081 / 0.17 / 0.25 | 0.025 / 0.038 / 0.05\nAirbus A320 | human-engineered | 0.33 / 0.47 / 0.61 | 0.0078 / 0.0078 / 0.0078\nBoeing 747-400 | human-engineered | 0.39 / 0.61 / 0.83 | 0.0021 / 0.0021 / 0.0021\nTable 1: Energy efficiency of flight for a variety of natural and man-made flying entities.\nFigure 1: If you give each animal or machine energy proportional to its weight, how far can it fly?\nOn mass⋅distance/energy, evolution beats engineers, but they are relatively evenly matched: the albatross (1.4-3.0 kg.m/J) and the Boeing 747-400 (0.39-0.83 kg.m/J) are the best in the natural and engineered classes respectively. Thus the best natural solution we found was roughly 2x-8x more efficient than the human-engineered one.2 We found several flying machines more efficient on this metric than the monarch butterfly.\nFigure 2: How far animals and machines can fly on the same amount of energy. Note that the vertical axis is log scaled, unlike that of Figure 1, so smaller looking differences are in fact much larger: over eight orders of magnitude (vs less than two in Figure 1).\nOn distance/energy, the natural solutions have a much larger advantage. Both are better than all man-made solutions we considered. The best natural and engineered solutions respectively are the monarch butterfly (100,000-600,000 m/kJ) and the Spirit of Butts’ Farm (32 m/kJ), for roughly a 3,000x to 20,000x advantage to natural evolution.\n\nInterpretation\nWe take this as weak evidence about the best possible distance/energy and distance.mass/energy measures achievable by human engineers or natural evolution. One reason for this is that this is a small set of examples. Another is that none of these animals or machines were optimized purely for either of these flight metrics—they all had other constraints or more complex goals. For instance, the paramotor was competing for a record in which a paramotor had to be used, specifically. For the longest human flight, the flying machine had to be capable of carrying a human. The albatross’ body has many functions.
Thus it seems plausible that either engineers or natural evolution could reach solutions far better on our metrics than those recorded here if they were directly aiming for those metrics. \nThe measurements for distance.mass/energy covered a much narrower band than those for distance/energy: a factor of under two orders of magnitude versus around eight. Comparing best scores between evolution and engineering, the gap is also much smaller, as noted above (a factor of less than one order of magnitude versus three orders of magnitude). This seems like some evidence that that band of performance is natural for some reason, and so that more pointed efforts to do better on these metrics would not readily lead to much higher performance.\nPrimary author: Ronny Fernandez\nNotes", "url": "https://aiimpacts.org/are-human-engineered-flight-designs-better-or-worse-than-natural-ones/", "title": "How energy efficient are human-engineered flight designs relative to natural ones?", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-12-10T22:48:00+00:00", "paged_url": "https://aiimpacts.org/feed?paged=4", "authors": ["Katja Grace"], "id": "4eea1feed100e935ed28f2a3aeb8ed07", "summary": []} {"text": "Energy efficiency of monarch butterfly flight\n\nUpdated Nov 25, 2020\nAccording to very rough estimates, the monarch butterfly:\ncan fly around 100,000-600,000 m/kJand move mass at around 0.065-0.36 kg⋅m/J\nDetails\nThe Monarch Butterfly is a butterfly known for its migration across North America.1\n\nMass\nThe average mass of a monarch butterfly prior to its annual migration has been estimated to be 600mg2\nDistance per Joule\nThe following table gives some very rough estimates of energy expenditures, speeds and distances for several modes of flight, based on confusing information from a small number of papers (see footnotes for details).\nActivityDescriptionEnergy expenditure per mass ( J/g⋅hr)Energy expenditure for 600mg butterfly (J/s)Speed (m/s)distance/energy (m/J)Soaring/glidingUnpowered flight, including gradual decline and ascent via air currents8-333~0.0014 – 0.00564Very roughly 2.5-3.6 on average5446- 25716CruisingLow speed powered flightVery roughly 20970.0358Maximum: >59Maximum: >14310Sustained flappingHigh speed powered flightVery roughly 83711~0.1412 Maximum: >13.913Maximum: >9914Table 1: Statistics for several modes of flight. All figures are very rough estimates, based on incomplete and confusing information from a small number of papers (see footnotes for details).\nSoaring is estimated to be potentially very energy efficient (see Table 1), since it mostly makes use of air currents for energy. It seems likely that at least a small amount of powered flight is needed for getting into the air, however monarch butterflies can apparently fly for hundreds of kilometers in a day15, so supposing that they don’t stop many times in a day, taking off seems likely a negligible part of the flight.16\nThis would require ideal wind conditions, and our impression is that in practice, butterflies do not often fly very long distances without using at least a small amount of powered flight.17\nThere is stronger evidence that monarch butterflies can realistically soar around 85% of the time, from Gibo & Pallett, who report their observations of butterflies under relatively good conditions.18 So as a high estimate, we use this fraction of the time for soaring, and suppose that the remaining time is the relatively energy-efficient cruising, and take the optimistic end of all ranges. 
This gives us:\nOne second of flight = 0.15 seconds cruising + 0.85 seconds soaring\n________________= 0.15s * 5 m/s cruising + 0.85s * 3.6m/s soaring\n________________= 0.75m cruising + 3.06m soaring \n________________= 3.81m total\nThis also gives us:\n= 0.75m / 143 m/J cruising + 3.06m / 2571 m/J soaring \n= 0.0064 J total\nThus we have:\ndistance/energy = 3.81m/0.0064 J = 595 m/J\nFor a low estimate of efficiency, we will assume that all of the powered flight is the most energetic flight, that powered flight is required half the time on average, and that the energy cost of gliding is twice that of resting. This gives us:\nEnergy efficiency = (50% * soaring distance + 50% * powered distance) / (50% * soaring energy + 50% * powered energy)\n= (50% * soaring distance/time + 50% * powered distance/time) / (50% * soaring energy/time + 50% * powered energy/time)\n= (0.5 * 2.5m/s + 0.5 * 13.9m/s) / (0.5 * (0.0056 * 2) J/s + 0.5 * 0.14 J/s)\n= 108 m/J\nThus we have, very roughly:\ndistance/energy = 100,000-600,000 m/kJ\nFor concreteness, a kJ is the energy in around a quarter of a raspberry. 19\nMass⋅distance per Joule\nAs noted earlier, the average mass of a monarch butterfly prior to its annual migration has been estimated to be 600mg20\nThus we have:\nmass*distance/energy = 0.0006 kg * 108 — 0.0006 kg * 595 m/J\n= 0.065 — 0.36 kg⋅m/J\n\nPrimary author: Ronny Fernandez\nNotes", "url": "https://aiimpacts.org/energy-efficiency-of-monarch-butterfly-flight/", "title": "Energy efficiency of monarch butterfly flight", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-11-26T06:30:47+00:00", "paged_url": "https://aiimpacts.org/feed?paged=5", "authors": ["Katja Grace"], "id": "de5a64114e5074f546193e1dfdc6c03f", "summary": []} {"text": "Energy efficiency of wandering albatross flight\n\nUpdated Nov 25, 2020\nThe wandering albatross:\ncan fly around 240m/kJand move mass at around 1.4—3.0kg.m/J\nDetails\nThe wandering albatross is a very large seabird that flies long distances on wings with the largest span of any bird.1 \nDistance per Joule\nSpeed\nIn a study of wandering albatrosses flying in various wind speeds and directions, average ground speed was 12 m/s, though the fastest ground speed measured appears to be around 24m/s, 2 We use average ground speed for this estimate because we only have data on average energy expenditure, though it is likely that higher ground speeds involve more energy efficient flight, since albatross flight speed is dependent on wind and it appears that higher speeds are substantially due to favorable winds.3\nEnergy expenditure \nOne study produced an estimate that when flying, albatrosses use 2.35 times their basal metabolic rate4 which same paper implies is around 1,833 kJ/bird.day.5 \nThat gives us a flight cost for flying of 0.050 kJ/second.6 \nDistance per Joule calculation\nThis gives us a distance per energy score of:\ndistance/energy\n= 12 m/s / 0.050 kJ/s \n= 240m/kJ\nMass.distance per Joule\nAlbatrosses weigh 5.9 to 12.7 kg.7\nThus we can estimate:\nmass.distance/Joule\n= 5.9kg * 240 m/kJ to 12.7kg*240 m/kJ\n= 1.4—3.0kg.m/J\nPrimary author: Ronny Fernandez\nNotes", "url": "https://aiimpacts.org/energy-efficiency-of-wandering-albatross-flight/", "title": "Energy efficiency of wandering albatross flight", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-11-25T00:13:13+00:00", "paged_url": "https://aiimpacts.org/feed?paged=5", "authors": ["Katja Grace"], "id": "5d3ed18928691e68c156d271a05ed908", "summary": []} 
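As a cross-check, here is a short sketch reproducing the wandering albatross arithmetic above; the input numbers are taken from that page, and the variable names are ours.

```python
# Reproduces the wandering albatross arithmetic (inputs from the page above).

basal_metabolic_rate_kj_per_day = 1833.0
flight_multiplier = 2.35           # flying costs about 2.35x basal metabolic rate
ground_speed_m_per_s = 12.0        # average ground speed
mass_kg_low, mass_kg_high = 5.9, 12.7

flight_cost_kj_per_s = basal_metabolic_rate_kj_per_day * flight_multiplier / (24 * 60 * 60)
m_per_kj = ground_speed_m_per_s / flight_cost_kj_per_s
kg_m_per_j_range = (mass_kg_low * m_per_kj / 1000.0, mass_kg_high * m_per_kj / 1000.0)

print(round(flight_cost_kj_per_s, 3))           # ~0.05 kJ/s
print(round(m_per_kj))                          # ~241 m/kJ (the page rounds to 240)
print([round(x, 1) for x in kg_m_per_j_range])  # ~[1.4, 3.1] (the page reports 1.4-3.0)
```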
{"text": "Energy efficiency of paramotors\n\nUpdated Nov 24, 2020\nWe estimate that a record-breaking two-person paramotor:\ncovered around = 0.36 m/kJand moved mass at around 0.058 – 0.10 kg⋅m/J\nDetails\nParamotors are powered parachutes that allow the operator to steer.1\nDistance per Joule\nThe Fédération Aéronautique Internationale (FAI) maintains records for a number of classes of paramotor contest. We look at subclass RPF2T—(Paramotors : Paraglider Control / Foot-launched / Flown with two persons / Thermal Engine)—which is appears to be the most recent paramotor record for ‘Distance in a straight line with limited fuel’.2\nThe record distance was 123.18 km.3 The FAI rules state that no more than 7.5 kg of fuel may be used.4 We will assume that in the process of breaking this record, all of the available fuel was used. We will also assume that regular gasoline was used. Gasoline has an energy density of 45 MJ/kg.5\nDistance per energy = 123.18 km / (7.5 kg * 45 MJ/kg) \n= 0.36 m/kJ\nMass.distance per Joule\nThe weight of an entire paramotoring apparatus appears to be the weights of the passengers plus motor plus wing plus clothing and incidentals, based on forum posts.6 These posts put clothing and incidentals at around 8kg, but are estimates for single person flying, whereas this record was a two person flight. We guess that two people need around 1.5x as much additional weight, for 12kg.\nWikipedia says that the weight of a paramotor varies from 18kg to 34 kg.7 However it is unclear whether this means the motor itself, or all of the equipment involved. \nThe glider used appears to be MagMax brand, a typical example of which weighs around 8kg, though this may have been different in 2013, or they may have used a different specific glider.8 To account for this uncertainty, we shall add the glider weight to the high estimate, and so estimate the weight of the glider and motor together at 18-42kg.\nWe will assume that the apparently male pilots weighed between 65 and 115 kgs each, based on normal male weights9. \nThus we have:\nweight = motor + wing + people + clothing and incidentals\nweight (low estimate) = 18 + 65*2 + 12 = 160kg\nweight (high estimate) = 42 + 115*2 + 12 = 284kg\nHigh efficiency estimate:\n284kg * 0.36 m/kJ  = 0.10 kg⋅m/J \nLow efficiency estimate:\n160kg * 0.36 m/kJ  = .058 kg⋅m/J\nThis gives us a range of 0.058 – 0.10 kg⋅m/J\nPrimary author: Ronny Fernandez\nNotes", "url": "https://aiimpacts.org/energy-efficiency-of-paramotors/", "title": "Energy efficiency of paramotors", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-11-24T21:11:22+00:00", "paged_url": "https://aiimpacts.org/feed?paged=5", "authors": ["Katja Grace"], "id": "b73c77acfb6660a9bbfe5859e3c3b148", "summary": []} {"text": "Misalignment and misuse: whose values are manifest?\n\nBy Katja Grace, 18 November 2020, Crossposted from world spirit sock puppet.\nAI related disasters are often categorized as involving misaligned AI, or misuse, or accident. 
Where:\n\nmisuse means the bad outcomes were wanted by the people involved,\nmisalignment means the bad outcomes were wanted by AI (and not by its human creators), and\naccident means that the bad outcomes were not wanted by those in power but happened anyway due to error.\n\nIn thinking about specific scenarios, these concepts seem less helpful.\nI think a likely scenario leading to bad outcomes is that AI can be made which gives a set of people things they want, at the expense of future or distant resources that the relevant people do not care about or do not own.\nFor example, consider autonomous business strategizing AI systems that are profitable additions to many companies, but in the long run accrue resources and influence and really just want certain businesses to nominally succeed, resulting in a worthless future. Suppose Bob is considering whether to get a business strategizing AI for his business. It will make the difference between his business thriving and struggling, which will change his life. He suspects that within several hundred years, if this sort of thing continues, the AI systems will control everything. Bob probably doesn’t hesitate, in the way that businesses don’t hesitate to use gas vehicles even if the people involved genuinely think that climate change will be a massive catastrophe in hundreds of years.\nWhen the business strategizing AI systems finally plough all of the resources in the universe into a host of thriving 21st Century businesses, was this misuse or misalignment or accident? The strange new values that were satisfied were those of the AI systems, but the entire outcome only happened because people like Bob chose it knowingly (let’s say). Bob liked it more than the long glorious human future where his business was less good. That sounds like misuse. Yet also in a system of many people, letting this decision fall to Bob may well have been an accident on the part of others, such as the technology’s makers or legislators.\nOutcomes are the result of the interplay of choices, driven by different values. Thus it isn’t necessarily sensical to think of them as flowing from one entity’s values or another’s. Here, AI technology created a better option for both Bob and some newly-minted misaligned AI values that it also created—‘Bob has a great business, AI gets the future’—and that option was worse for the rest of the world. They chose it together, and the choice needed both Bob to be a misuser and the AI to be misaligned. 
But this isn’t a weird corner case, this is a natural way for the future to be destroyed in an economy.\nThanks to Joe Carlsmith for conversation leading to this post.", "url": "https://aiimpacts.org/misalignment-and-misuse-whose-values-are-manifest/", "title": "Misalignment and misuse: whose values are manifest?", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-11-19T00:06:34+00:00", "paged_url": "https://aiimpacts.org/feed?paged=5", "authors": ["Katja Grace"], "id": "a742d05995c70e0fa1bae242cb826313", "summary": []} {"text": "Energy efficiency of The Spirit of Butt’s Farm\n\nUpdated Nov 18, 2020\nThe Spirit of Butt’s Farm:\ncovered around 31.67 m/kJand moved mass at around 0.16 – 0.086 kg⋅m/J\nDetails\nThe Spirit of Butt’s Farm was a record setting model airplane that crossed the Atlantic on one gallon of fuel.1 Fully fueled it weighed 4.987 kg, dry it weighed 2.705 kg.2\n The record setting flight used 117.1 fluid ounces of fuel.3 The straight line distance of the flight was 3,028.1 km.4 It was powered by 88% Coleman lantern fuel, mixed with lubricant.5 Coleman fuel is based on naphtha 6, so we can use the energy density of naphtha—31.4 MJ/L7—as a rough guide to its energy content, though naphtha appears to vary in its content, and it is unclear whether Coleman fuel consists entirely of naphtha. \nFrom all this, we have:\nDistance per energy = 3,028.1 km / (117.1 fl oz * 0.88 * 31.4 MJ/L) \n= 31.67 m/kJ\nFor weight times distance per energy we will calculate a best and a worst score. To calculate the best score we will use the fully fueled weight, and to calculate the worst score we will use the dry weight. All other values are the same in both calculations. \nBest score: Distance*mass/energy = 4.987 kg * 31.67 m/kJ\n= 0.16 kg⋅m/J\n(4.987 kg * 3,028.1km) / (117.1 US fluid ounces * 31.4MJ/litre) = 0.1389 kg*m/j\nWorst score:\nDistance*mass/energy = 2.705 kg * 31.67 m/kJ\n= .086 kg⋅m/J\nPrimary author: Ronny Fernandez\nNotes\nPhoto by Ronan Coyne, licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license, unaltered.\n", "url": "https://aiimpacts.org/energy-efficiency-of-the-spirit-of-butts-farm/", "title": "Energy efficiency of The Spirit of Butt’s Farm", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-11-18T23:53:25+00:00", "paged_url": "https://aiimpacts.org/feed?paged=5", "authors": ["Katja Grace"], "id": "85f310dd928aeaf8308214ef3156b970", "summary": []} {"text": "Was the industrial revolution a drastic departure from historic trends?\n\nUpdated Nov 18, 2020\nWe do not have a considered view on this topic.\nDetails\nWe have not investigated this topic. This is an incomplete list of evidence that we know of:\nDavid Roodman’s analysis of the surprisingness of the industrial revolution under his 2020 model of economic history.1Ben Garfinkel’s analysis of whether economic history suggests a singularity2Robin Hanson’s analysis of historic economic growth understood as a sequence of exponential modes.3On a log(GWP)-log(doubling time) graph, the industrial revolution appears to be almost perfectly on trend, according to our very crude analysis.\nRelevance\nThe nature of the industrial revolution is relevant to AI forecasting in the following ways:\nIf growth during the industrial revolution is a highly improbable aberration from longer term trends, it suggests that it is a consequence of specific developments at the time, most saliently new technologies. 
This suggests that new technologies can sometimes alone cause changes at the level of the global economy.‘The impact of the industrial revolution’ is sometimes used as a measure against which to compare consequences of AI developments. Thinking here may be sharpened by clarification on the nature of the industrial revolution. This use is likely related to the point above, where ‘the scale of the industrial revolution’ is taken to be a historically plausible scale of impact for the most ambitious technologies.If economic history is best understood as a sequence of ‘growth modes’, per Hanson 2000,4 the industrial revolution being one, this changes our best extrapolation to the future. For instance, we might expect the continuation of the current mode to be slower than in a continuously super-exponential model, but may also expect to meet a new growth mode at some point, which may be substantially faster (and have other characteristics recognizable from past ‘growth mode’ changes). See Hanson 2000 for more on this.\nNotes", "url": "https://aiimpacts.org/was-the-industrial-revolution-a-drastic-departure-from-historic-trends/", "title": "Was the industrial revolution a drastic departure from historic trends?", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-11-17T22:14:30+00:00", "paged_url": "https://aiimpacts.org/feed?paged=5", "authors": ["Katja Grace"], "id": "be44f5418edf3ad2127e078fd66342d7", "summary": []} {"text": "Energy efficiency of MacCready Gossamer Albatross\n\nUpdated Nov 9, 2020\nThe MacCready Gossamer Albatross:\ncovered around 2.0—4.6 m/kJand moved mass at around 0.1882 —0.4577 kg⋅m/J\nDetails\nThe MacCready Gossamer Albatross was a human-powered flying machine that crossed the English Channel in 1979.1 The pilot pedaled the craft, seemingly as if on a bicycle. It had a gross mass of 100kg, flying across the channel,2 and flew 35.7 km in 2 hours and 49 minutes.3 The crossing was difficult however, so it seems plausible that the Gossamer Albatross could fly more efficiently in better conditions.\nWe do not know the pilot’s average power output, however:\nWikipedia claims at least 300W was required to fly the craft4Chung 2006, an engineering textbook, claims that the driver, a cyclist, could produce around 200W of power.5Our impression is that 200W is a common power output over houres for amateur cycling. 
For instance, one of our researchers is able to achieve this for three hours.6\nThe best documented human cycling wattage that we could easily find is from professional rider Giulio Ciccone who won a stage of the Tour de France, then uploaded power data to the fitness tracking site Strava.7 His performance suggests around 318W is a reasonable upper bound, supposing that the pilot of the Gossamer Albatross would have had lower performance.8\nTo find the energy used by the cyclist, we divided power output by typical efficiency for a human on a bicycle, which according to Wikipedia ranges from .18 to .26.9\nDistance per Joule\nFor distance per energy this gives us a highest measure of:\n35.7 km / ((200W * (2 hours + 49 minutes))/0.26) = 4,577 m/MJ\nAnd a lowest measure of:\n35.7 km / ((318W * (2 hours + 49 minutes))/0.18) = 1,993 m/MJ\nMass per Joule\nFor weight times distance per energy this gives us a highest measure of:\n(100kg * 35.7 km) / ((200W * (2 hours + 49 minutes))/0.26) = 0.4577 kg⋅m/j\nAnd a lowest measure of:\n(100kg * 35.7 km) / ((318W * (2 hours + 49 minutes))/0.17) =  0.1882 kg⋅m/j\nPrimary author: Ronny Fernandez\nNotes", "url": "https://aiimpacts.org/maccready-gossamer-albatross/", "title": "Energy efficiency of MacCready Gossamer Albatross", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-11-10T02:15:26+00:00", "paged_url": "https://aiimpacts.org/feed?paged=5", "authors": ["Katja Grace"], "id": "8eb0bf5974c62de8ad06ea4af2bfaecd", "summary": []} {"text": "Energy efficiency of Boeing 747-400\n\nUpdated Nov 5, 2020\nThe Boeing 747-400:\ncovers around 0.0021m/kJ.and moves mass at around 0.39 – 0.83 kg.m/J\nDetails\nThe Boeing 747-400 is a 1987 passenger plane.1\nDistance per Joule\nThe plane uses 10.77 kg/km of fuel on a medium haul flight.2 We do not know what type of fuel it uses, but typical values for aviation fuel are around 44MJ/kg.3 Thus to fly a kilometer, the plane needs 10.77 kg of fuel, which is 10.77 x 44 MJ = 474 MJ of fuel. This gives us 0.0021m/kJ.\nMass.distance per Joule\nAccording to Wikipedia, the 747’s ‘operating empty weight’ is 183,523 kg and its ‘maximum take-off weight’ is 396,893 kg.4 We use the range 183,523 kg—396,893 kg since we do not know at what weight in that range the relevant speeds were measured.\nWe have: \nDistance per kilojoule: 0.0021m/kJMass: 183,523 kg—396,893 kg\nThis gives us a range of 0.39 – 0.83 kg.m/J\nPrimary author: Ronny Fernandez\nNotes", "url": "https://aiimpacts.org/energy-efficiency-of-boeing-747-400/", "title": "Energy efficiency of Boeing 747-400", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-11-06T05:10:05+00:00", "paged_url": "https://aiimpacts.org/feed?paged=5", "authors": ["Katja Grace"], "id": "0d08a97d8f4f80959cb37b09259a44fd", "summary": []} {"text": "Energy efficiency of Airbus A320\n\nUpdated Nov 5, 2020\nThe Airbus A320:\ncovers around 0.0078 m/kJand moves mass at around 0.33 – 0.61 kg.m/J\nDetails\nThe Airbus A320 is a 1987 passenger plane.1\nDistance per Joule\nThe plane uses 2.91 kg of fuel per km on a medium haul flight.2 We do not know what type of fuel it uses, but typical values for aviation fuel are around 44MJ/kg.3 Thus to fly a kilometer, the plane needs 2.91kg of fuel, which is 2.91 x 44 MJ = 128MJ of fuel. 
This gives us 0.0078 m/kJ\nMass.distance per Joule\nAccording to modernairliners.com, the A320’s ‘operating empty weight’ is 42,600 kg and its ‘maximum take-off weight’ is 78,000 kg.4 We use the range 42,600—78,000 kg, since we do not know at what weight in that range the relevant speeds were measured.\nWe have: \nDistance per kilojoule: 0.0078 m/kJMass: 42,600—78,000 kg\nThis gives us a range of 0.33 – 0.61 kg.m/J\nPrimary author: Ronny Fernandez\nNotes", "url": "https://aiimpacts.org/energy-efficiency-of-airbus-a320/", "title": "Energy efficiency of Airbus A320", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-11-06T04:44:03+00:00", "paged_url": "https://aiimpacts.org/feed?paged=5", "authors": ["Katja Grace"], "id": "ea3526f4218417a146a6b03ceeb0edf4", "summary": []} {"text": "Energy efficiency of North American P-51 Mustang\n\nUpdated Nov 5, 2020\nThe North American P-51 Mustang:\nflew around 0.073—0.092 m/kJand moved mass at around 0.25 – 0.50 kg.m/J\nDetails\nThe North American P-51 Mustang was a 1940 US WWII fighter and fighter-bomber.1\nMass\nAccording to Wikipedia2:\nEmpty weight: 7,635 lb (3,465 kg)Gross weight: 9,200 lb (4,175 kg)Max takeoff weight: 12,100 lb (5,488 kg)\nWe use the range 3,465—5,488 kg, since we do not know at what weight in that range the relevant speeds were measured.\nDistance per Joule\nWikipedia tells us that cruising speed was 362 mph (162 m/s)3\nA table from WWII Aircraft Performance gives combinations of flight parameters apparently for a version of the P-51, however it has no title or description, so we cannot be confident. 4 We extracted some data from it here. This data suggests the best combination of parameters gives a fuel economy of 6.7 miles/gallon (10.8km)\nWe don’t know what fuel was used, but fuel energy density seems likely to be between 31—39 MJ/L = 117—148 MJ/gallon.5\nThus the plane flew about 10.8km on 117—148 MJ of fuel, for 0.073—0.092 m/kJ\nMass.distance per Joule\nWe have: \nDistance per kilojoule: 0.073—0.092 m/kJMass: 3,465—5,488 kg\nThis gives us a range of 0.25 – 0.50 kg.m/J\nPrimary author: Ronny Fernandez\nNotes", "url": "https://aiimpacts.org/energy-efficiency-of-north-american-p-51-mustang/", "title": "Energy efficiency of North American P-51 Mustang", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-11-06T01:04:24+00:00", "paged_url": "https://aiimpacts.org/feed?paged=5", "authors": ["Katja Grace"], "id": "c307313b9eafeaafb68cbb46de9ee63f", "summary": []} {"text": "Energy efficiency of Vickers Vimy plane\n\nUpdated Nov 5, 2020\nThe Vickers Vimy:\nflew around 0.025—0.050 m/kJand moved mass at around 0.081 – 0.25 kg.m/J\nDetails\nThe Vickers Vimy was a 1917 British WWI bomber.1 It was used in the first non-stop transatlantic flight.\nMass\nAccording to Wikipedia2:\nEmpty weight: 7,104 lb (3,222 kg)Max takeoff weight: 10,884 lb (4,937 kg)\nWe use the range 3,222—4,937 kg, since we do not know at what weight in that range the relevant speeds were measured.\nEnergy use per second\nWe also have:\nPower: 360 horsepower = 270 kW3Efficiency of use of energy from fuel: we did not find data on this, so use an estimate of 15%-30%, based on what we know about the energy efficiency of the Wright Flyer.\nFrom these we can calculate:\nEnergy use per second = power of engine x 1/efficiency in converting energy to engine power = 270kJ/s / .15—270kJ/s / .30 = 900—1800 kJ/s\nDistance per second\nWikipedia gives us:\nMaximum speed: 100 mph (160 km/h, 87 kn)4 \nNote that the figures for power do not 
obviously correspond to the highest measured speed. This is a rough estimate.\nDistance per Joule\nWe now have (from above):\nspeed = 100 miles/h = 44.7m/senergy use = 900—1800 kJ/s\nThus, on average each second the plane flies 44.7 m and uses 900—1800 kJ, for 0.025—0.050 m/kJ.\nMass.distance per Joule\nWe have: \nDistance per kilojoule: 0.025—0.050 m/kJMass: 3,222—4,937 kg\nThis gives us a range of 0.081 – 0.25 kg.m/J\nPrimary author: Ronny Fernandez\nNotes", "url": "https://aiimpacts.org/energy-efficiency-of-vickers-vimy-plane/", "title": "Energy efficiency of Vickers Vimy plane", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-11-05T20:54:31+00:00", "paged_url": "https://aiimpacts.org/feed?paged=6", "authors": ["Katja Grace"], "id": "bd8b4de98b4e76898b33f587adecd8f1", "summary": []} {"text": "Energy efficiency of Wright model B\n\nUpdated Nov 5, 2020\nThe Wright model B:\nflew around 0.10—0.21m/kJand moved mass at around 0.036 – 0.12 kg.m/J\nDetails\nThe Wright Model B was a 1910 plane developed by the Wright Brothers.1\nMass\nAccording to Wikipedia2:\nEmpty weight: 800 lb (363 kg)Gross weight: 1,250 lb (567 kg)\nWe use the range 363—567 kg, since we do not know at what weight in that range the relevant speeds were measured.\nEnergy use per second\nFrom Wikipedia, we have3:\nPower: 35 horsepower = 26kWEfficiency of use of energy from fuel: we could not find data on this, so use an estimate of 15%-30%, based on what we know about the energy efficiency of the Wright Flyer.\nFrom these we can calculate:\nEnergy use per second = power of engine x 1/efficiency in converting energy to engine power = 26kJ/s / .15—26kJ/s / .30 = 86.6—173 kJ/s\nDistance per second\nWikipedia gives us4:\nMaximum speed: 45 mph (72 km/h, 39 kn)Cruise speed: 40 mph (64 km/h, 35 kn)\nWe use the cruise speed, as it seems more likely to represent speed achieved with the energy usages reported. \nDistance per Joule\nWe now have (from above):\nspeed = 40miles/h = 17.9m/senergy use = 86.6—173 kJ/s\nThus, on average each second the plane flies 17.9m and uses 86.6—173 kJ, for 0.10—0.21m/kJ.\nMass.distance per Joule\nWe have: \nDistance per kilojoule: 0.10—0.21m/kJMass: 363—567kg\nThis gives us a range of 0.036 – 0.12 kg.m/J\nPrimary author: Ronny Fernandez\nNotes", "url": "https://aiimpacts.org/energy-efficiency-of-wright-model-b/", "title": "Energy efficiency of Wright model B", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-11-05T20:09:52+00:00", "paged_url": "https://aiimpacts.org/feed?paged=6", "authors": ["Katja Grace"], "id": "b2a1780f73a62c080c73151377db0e26", "summary": []} {"text": "Energy efficiency of Wright Flyer\n\nUpdated Dec 10, 2020\nThe Wright Flyer:\nflew around 0.080-0.18m/kJand moved mass at around .022 – .061 kg.m/J\nDetails\nThe Wright Flyer (Flyer I) was the first successful plane, built in 1903.1\nMass\nAccording to Wikipedia2:\nEmpty weight: 605 lb (274 kg)Max takeoff weight: 745 lb (338 kg)\nEnergy use per second\nFuel use per hour\nA 1904 article in the Minneapolis Journal says the plane consumed ‘a little less than than ten pounds of gasoline per hour’.3 A pound of gasoline contains around 20MJ4 So we have:\nHourly fuel consumption: 10lb/h x 20MJ/lb = 200MJ/h = 55kJ/s\nWe don’t know how reliable this source is. 
For instance, a 1971 book, The Wright Brothers’ Engines and their Design, does not give data on fuel consumption in its table of engine characteristics, for lack of available comprehensive data5, suggesting that its authors did not consider the article strong evidence, though it is also possible that they didn’t have access to the 1904 article. \nUtilized motor power / efficiency\nTo confirm, we can estimate the plane’s energy use per second by a second method: combining the claimed power (energy/second) made use of by the engine, and a guess about how much fuel is needed to deliver that amount of energy.\nUtilized motor power\nAccording to Wikipedia, the plane had a 12 horsepower (9 kJ/s) gasoline engine.6 The 1904 Minneapolis Journal article put it at 16 horsepower (12 kJ/s), and Orville Wright, quoted by Hobbs (1971), puts it at ‘almost 16 horsepower’ at one point.7 Hobbs says that at one point this engine achieved 25 horsepower, though this probably isn’t representative of what was ‘actually utilized’.8 For that he gives a range of 8.25-16 horsepower.9 In light of these estimates, we will use 8.25-16 horsepower, which is 6.15-12 kJ/s.\nEfficiency of energy conversion from fuel to motor power\nA quote from Orville Wright suggests fuel consumption of 0.580lb of fuel per horsepower-hour.10 This would imply that 23% of energy was utilized from the fuel.11\nHobbs notes that this seems low, but assumes a similar efficiency: 24.50%. Thus he presumably doesn’t find it implausible.12\nAccording to Wikipedia, the thermal efficiency of a typical gasoline engine is 20%.13 It seems that this has increased,14 which would suggest that the typical figure was lower in 1903. We don’t think this undermines the more specific figures given above.\nIt seems likely that Hobbs’ number is best here, since he knows about Wright’s number, and may have more information than us about, for instance, the exact fuel being used. So we use 24.5%.\nCalculation of energy use from motor power/efficiency\nCombining these numbers, we have:\npower spent = 6.15-12 kJ/s used by the engine x 1/24.5% (fuel energy needed to get one unit of energy used by the engine, given inefficiency)\n= 25-49 kJ/s\nCalculation of energy use\nWe now have: \nenergy expenditure calculated via motor power and efficiency: 25-49 kJ/s\nenergy expenditure calculated via hourly fuel use: 55kJ/s\nNeither figure seems clearly more reliable, so we will use the range 25-55kJ/s.\nDistance per second\nTwo of the Wright Flyer’s first flights were 120 feet in 12 seconds and 852 feet in 59 seconds.15 This gives speeds of 3m/s and 4.4m/s.
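As a cross-check, here is the arithmetic above in a short Python sketch (not from the original page); it reproduces the two energy-use estimates and the two flight speeds from the figures already quoted.

```python
HP_TO_KW = 0.7457  # kilowatts per horsepower

# Route 1: hourly fuel consumption claimed by the 1904 Minneapolis Journal article.
fuel_route_kj_per_s = 10 * 20 * 1000 / 3600            # 10 lb/h x 20 MJ/lb, ≈ 55 kJ/s

# Route 2: utilized motor power divided by the assumed engine efficiency.
power_kj_per_s = (8.25 * HP_TO_KW, 16 * HP_TO_KW)      # ≈ 6.15–12 kJ/s
power_route_kj_per_s = tuple(p / 0.245 for p in power_kj_per_s)  # ≈ 25–49 kJ/s

# Speeds of the two early flights (feet converted to metres).
speed_1 = 120 * 0.3048 / 12   # ≈ 3.0 m/s
speed_2 = 852 * 0.3048 / 59   # ≈ 4.4 m/s

print(round(fuel_route_kj_per_s), [round(p) for p in power_route_kj_per_s])
print(round(speed_1, 1), round(speed_2, 1))
```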
We will use the second, since it better represents successful flight, still within the first days.\nDistance per Joule\nWe now have (from above):\nspeed = 4.4 m/senergy use = 25-55 kJ/s\nThus, on average each second the plane flies 4.4m and uses 25-55kJ, for 0.080-0.18m/kJ.\nMass.distance per Joule\nWe have: \nDistance per kilojoule: 0.080-0.18m/kJMass: 274-338kg\nThis gives us a range of .022 – .061 kg.m/J\nPrimary author: Ronny Fernandez\nNotes", "url": "https://aiimpacts.org/energy-efficiency-of-wright-flyer/", "title": "Energy efficiency of Wright Flyer", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-11-04T18:58:55+00:00", "paged_url": "https://aiimpacts.org/feed?paged=6", "authors": ["Katja Grace"], "id": "cba39ff1fd255b8a533b77daa9ddd914", "summary": []} {"text": "Automated intelligence is not AI\n\nBy Katja Grace, 1 November 2020, Crossposted from world spirit sock puppet.\nSometimes we think of ‘artificial intelligence’ as whatever technology ultimately automates human cognitive labor.\nI question this equivalence, looking at past automation. In practice human cognitive labor is replaced by things that don’t seem at all cognitive, or like what we otherwise mean by AI.\nSome examples:\n\nEarly in the existence of bread, it might have been toasted by someone holding it close to a fire and repeatedly observing it and recognizing its level of doneness and adjusting. Now we have machines that hold the bread exactly the right distance away from a predictable heat source for a perfect amount of time. You could say that the shape of the object embodies a lot of intelligence, or that intelligence went into creating this ideal but non-intelligent tool.\nSelf-cleaning ovens replace humans cleaning ovens. Humans clean ovens with a lot of thought—looking at and identifying different materials and forming and following plans to remove some of them. Ovens clean themselves by getting very hot.\nCarving a rabbit out of chocolate takes knowledge of a rabbit’s details, along with knowledge of how to move your hands to translate such details into chocolate with a knife. A rabbit mold automates this work, and while this route may still involve intelligence in the melting and pouring of the chocolate, all rabbit knowledge is now implicit in the shape of the tool, though I think nobody would call a rabbit-shaped tin ‘artificial intelligence’.\nHuman pouring of orange juice into glasses involves various mental skills. For instance, classifying orange juice and glasses and judging how they relate to one another in space, and moving them while keeping an eye on this. Automatic orange juice pouring involves for instance a button that can only be pressed with a glass when the glass is in a narrow range of locations, which opens an orange juice faucet running into a spot common to all the possible glass-locations.\n\nSome of this is that humans use intelligence where they can use some other resource, because it is cheap on the margin where the other resource is expensive. For instance, to get toast, you could just leave a lot of bread at different distances then eat the one that is good. That is bread-expensive and human-intelligence-cheap (once you come up with the plan at least). But humans had lots of intelligence and not much bread. 
And if later we automate a task like this, before we have computers that can act very similarly to brains, then the alternate procedure will tend to be one that replaces human thought with something that actually is cheap at the time, such as metal.\nI think a lot of this is that to deal with a given problem you can either use flexible intelligence in the moment, or you can have an inflexible system that happens to be just what you need. Often you will start out using the flexible intelligence, because being flexible it is useful for lots of things, so you have some sitting around for everything, whereas you don’t have an inflexible system that happens to be just what you need. But if a problem seems to be happening a lot, it can become worth investing the up-front cost of getting the ideal tool, to free up your flexible intelligence again.", "url": "https://aiimpacts.org/automated-intelligence-is-not-ai/", "title": "Automated intelligence is not AI", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-11-01T23:38:44+00:00", "paged_url": "https://aiimpacts.org/feed?paged=6", "authors": ["Katja Grace"], "id": "c2a1f8e467002762c3eac29f8f3faf09", "summary": []} {"text": "Time for AI to cross the human range in English draughts\n\nUpdated 26 Oct 2020\nAI progress in English draughts performance crossed the following ranges in the following times:\nRangeStartEndDuration (years)First attempt to beginner level1951~1956, <1961~4, <10Beginner to superhuman~1956, <19611994~38, >33Above superhuman19942007*13** treating perfect play as the end of progress, though progress could potentially be made in performing better against imperfect play.\n \nDetails\nMetric\n‘English Draughts’ is a popular variety of draughts, or checkers. \nHere we look at direct success of AI in beating human players, rather than measuring humans and AI on a separate metric of strength.\nData\nData here comes mostly from Wikipedia. We found several discrepancies in Wikipedia’s accounts of these events, so consider the remaining data to be somewhat unreliable.\nAI achievement of human milestones\nEarliest attempt\nAccording to Wikipedia, the first checkers program was run in 1951.1\nBeginner level\nThere seems to be some ambiguity around the timing and performance of Arthur Samuel’s early draughts programs, but it appears that he worked on them from around 1952. In 1956, he demonstrated a program on television. 2 It is unclear how good the program’s play was, but it is said to have resulted in a fifteen-point rise in the stock price of IBM when demonstrated to IBM shareholders, seemingly at around that time. 
This weakly suggests that the program played at at least beginner level.\nIn 1962 Samuel’s program apparently beat an ambiguously skilled player who would by four-years later become a state champion in Connecticut.3 Thus progress was definitely beyond beginner level by 1962.\nSuperhuman level\nIn 1994, computer program Chinook drew six times against world champion Marius Tinsley, before Tinsley withdrew due to pancreatic cancer and Chinook officially won.4 Thus Chinook appears to have been close to as good as the best player in 1994.\nEnd of progress\nIn 2007 checkers was ‘weakly solved’, which is to say that perfect play guaranteeing a draw for both sides from the start of the game is known, from the starting state5 (this does not imply that if someone plays imperfectly, perfect moves following this are known).6 This is not the best possible performance by all measures, since further progress could presumably be made on reliably beating worse players. \nTimes for AI to cross human-relative ranges \nGiven the above dates, we have:\nRangeStartEndDuration (years)First attempt to beginner level1951~1956, <1961~4, <10Beginner to superhuman~1956, <19611994~38, >33Above superhuman19942007*13** treating perfect play as the end of progress, though progress could potentially be made in performing better against imperfect play.\n\nPrimary author: Katja Grace\nNotes", "url": "https://aiimpacts.org/time-for-ai-to-cross-the-human-range-in-english-draughts/", "title": "Time for AI to cross the human range in English draughts", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-10-26T22:28:36+00:00", "paged_url": "https://aiimpacts.org/feed?paged=6", "authors": ["Katja Grace"], "id": "4e4f7dac912b04b7d6ac479348d98669", "summary": []} {"text": "Time for AI to cross the human range in StarCraft\n\nPublished 20 Oct 2020; updated 22 Oct 2020\nProgress in AI StarCraft performance took:\n~0 years to reach the level of an untrained human~21 years to pass from beginner level to high professional human level~2 years to continue from trained human to current performance (2020), with no particular end in sight.\nDetails\nMetric\nWe compare human and AI players on their direct ability to beat one another (rather than a measure of the overall performance of each).\nAI milestones\nEarliest attempt\nStarcraft was released in 19981 The game allows the player to play against a computer opponent, however this built-in AI has access to information that a normal player would not have. For instance, it has real-time information about what the other player is doing at all times, which is normally hidden. We do not have detailed knowledge about early StarCraft AIs that do not have this advantage, but our impression is that it was possible to write them from the start (see next section).\nBeginner level\nOur impression is that since StarCraft Brood War came out in 1998, it has been possible to write a bot that can beat a player who recently learned the game, without “cheating” in the way that the game’s built-in computer opponents do. This is uncertain, and based on private discussion with people who write Starcraft AIs that compete in tournaments.\nProfessional level\nIn 2018, DeepMind’s AlphaStar beat MaNa2, a strong professional player (seemingly 13th place in the 2018 StarCraft II World Championship Series Circuit)3. This does not imply that AlphaStar was in general a stronger player than MaNa, but suggests AlphaStar was at a broadly high professional level. 
How AlphaStar’s performance compares to humans will depend on how narrowly the task is defined, so that if the AI is not allowed to give commands faster than a human is able, it will compare less favorably than if it is allowed to give commands very quickly4.\nTimes for AI to cross human-relative ranges \nGiven the above dates, we have:\nRangeStartEndDuration (years)First attempt to beginner level19981998~0Beginner to superhuman19982018~21Above superhuman2018?>2\n\nPrimary author: Rick Korzekwa\nNotes", "url": "https://aiimpacts.org/time-for-ai-to-cross-the-human-range-in-starcraft/", "title": "Time for AI to cross the human range in StarCraft", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-10-20T20:55:24+00:00", "paged_url": "https://aiimpacts.org/feed?paged=6", "authors": ["Katja Grace"], "id": "87398c76e1d2d1e6b6dd25abbbd83305", "summary": []} {"text": "Time for AI to cross the human performance range in ImageNet image classification\n\nPublished 19 Oct 2020\nProgress in computer image classification performance took:\nOver 14 years to reach the level of an untrained human3 years to pass from untrained human level to trained human level5 years to continue from trained human to current performance (2020)\nDetails\nMetric\nImageNet1 is a large collection of images organized into a hierarchy of noun categories. We looked at ‘top-5 accuracy’ in categorizing images. In this task, the player is given an image, and can guess five different categories that the image might represent. It is judged as correct if the image is in fact in any of those five categories.\nHuman performance milestones\nBeginner level\nWe used Andrej Karpathy’s interface2 for doing the ImageNet top-5 accuracy task ourselves, and asked a few friends to do it. Five people did it, with performances ranging from 74% to 89%, with a median performance of 81%. \nThis was not a random sample of people, and conditions for taking the test differed. Most notably, there was no time limit, so time allocated was set by patience for trying to marginally improve guesses.\nTrained human-level\nImageNet categorization is not a popular activity for humans, so we do not know what highly talented and trained human performance would look like. The best relatively high human performance measure we have comes from Russakovsky et al, who report on performance of two ‘expert annotators’, who they say learned many of the categories. 3 The better performing annotator there had a 5.1% error rate.4\nAI achievement of human milestones\nEarliest attempt\nThe ImageNet database was released in 2009.5. An annual contest, the ImageNet Large Scale Visual Recognition Challenge, began in 2010.6\nIn the 2010 contest, the best top-5 classification performance had 28.2% error.7 \nHowever image classification broadly is older. Pascal VOC was a similar previous contest, which ran from 2005.8 We do not know when the first successful image classification systems were developed. 
In a blog post, Amidi & Amidi point to LeNet as pioneering work in image classification9, and it appears to have been developed in 1998.10\nBeginner level\nThe first entrant in the ImageNet contest to perform better than our beginner level benchmark was SuperVision (commonly known as AlexNet) in 2012, with a 15.3% error rate.11\nSuperhuman level\nIn 2015 He et al apparently achieved a 4.5% error rate, slightly better than our high human benchmark.12\nCurrent level\nAccording to paperswithcode.com, performance has continued to climb, to 2020, though slower than earlier.13\nTimes for AI to cross human-relative ranges \nGiven the above dates, we have:\nRangeStartEndDuration (years)First attempt to beginner level<19982012>14Beginner to superhuman201220153Above superhuman2015>2020>5\n\nPrimary author: Rick Korzekwa\nNotes", "url": "https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-imagenet-image-classification/", "title": "Time for AI to cross the human performance range in ImageNet image classification", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-10-19T23:52:50+00:00", "paged_url": "https://aiimpacts.org/feed?paged=6", "authors": ["Katja Grace"], "id": "193b1b9e758a68236dd114cfc0c77770", "summary": []} {"text": "Time for AI to cross the human performance range in Go\n\nPosted 15 Oct 2020; updated 19 Oct 2020\nProgress in computer Go performance took:\n0-19 years to go from the first attempt to playing at human beginner level (<1987)>30 years to go from human beginner level to superhuman level (<1987-2017)3 years to go from superhuman level to the the current highest performance (2017-2020)\nDetails\nHuman performance milestones\nHuman go ratings range from 30 kyu (beginner), through 7 dan to at least 9 professional dan.1 These ratings go downwards through kyu levels, then upward through dan levels, then upward through professional dan levels. The top ratings seem to be closer together than the lower ones, though there are apparently multiple systems which vary)2\nAI achievement of human milestones\nEarliest attempt\nWikipedia says the first Go program was written in 1968.3 We do not know how well it performed.\nBeginner level\nWe have not investigated early Go performance in depth. Figure 1 includes informed guesses about early performance by David Fotland, author of successful Go program, The Many Faces of Go, and Sensei’s Library, a Go wiki.4 Fotland says that early data on AI Go performance is poor, since bots did not play in tournaments, so were not rated.\nFigure 1: From Grace 2013.\nThis suggests that by 1987 Go bots were performing better than human beginners. We do not have evidence to pin down the date of human beginner level AI better, but have also not investigated thoroughly (there appears to be more evidence).\nSuperhuman level\nIn May 2017 AlphaGo beat the top ranked Go player in the world.5 This does not imply that AlphaGo was overall better, but a new version in October could beat the May version in 89 games out of 1006, suggesting that if in May it would have beaten Ke Jie in more than 11% of games, the new version would beat Ke Jie more than half the time, i.e. perform better than the best human player. 
Thus 2017 seems like a reasonable date for top human-level play.\nTimes for AI to cross human-relative ranges\nGiven the above dates, we have:\nRange | Start | End | Duration (years)\nFirst attempt to beginner level | 1968 | <1987 | <19\nBeginner to superhuman | <1987 | 2017 | >30\nAbove superhuman | 2017 | >2020 | >3\n\nPrimary author: Katja Grace\nNotes", "url": "https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-go/", "title": "Time for AI to cross the human performance range in Go", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-10-16T00:05:43+00:00", "paged_url": "https://aiimpacts.org/feed?paged=6", "authors": ["Katja Grace"], "id": "a04d47b39924cb0b35f0b0ac3107d075", "summary": []} {"text": "Time for AI to cross the human performance range in chess\n\nPublished 15 Oct 2020\nProgress in computer chess performance took: \n~0 years to go from playing chess at all to playing it at human beginner level\n~49 years to go from human beginner level to superhuman level\n~11 years to go from superhuman level to the current highest performance\nDetails\nHuman range performance milestones\nWe use the common Elo system for measuring chess performance. Human chess Elo ratings range from around 800 (beginner)1 to 2882 (highest recorded).2 The highest recorded human score is likely higher than it would have been without chess AI existing, since top players can learn from the AI.3 \nTimes for machines to cross ranges\nBeginner to superhuman range\nWe could not find transparent sources for low computer chess Elo records, but it seems common to place Elo scores of 800-1200 in the 1950s and 1960s. In his book Robot (1999) (Moravec, Hans. Robot: Mere Machine to Transcendent Mind, n.d., p71, also at https://frc.ri.cmu.edu/~hpm/book97/ch3/index.html), Moravec gives the diagram shown in Figure 1, which puts a machine with an Elo of around 800 in 1957. He does not appear to provide a source for this, however. Figure 2 shows another figure without sources from a 2002 article by L. Stephen Coles at Dr. Dobb’s,4 which puts some machine at over 1000 in around 1950. To err on the side of assuming narrow human ranges, and because Moravec appears to be a more reliable source, we use his data here. This means that machine chess performance entered the human range in 1957 at the latest.\nFigure 1: Graph from Moravec, 1999.5 We did not find Moravec’s sources for these numbers.\nFigure 2: Chess AI progress compared to human performance, from Coles 2002.6 The image appears to confusingly claim that the present time is around 1993, in which case the right of the graph (after ‘now’) must be imagined, though it appears to be approximately correct.\nThe chess computer Deep Blue famously beat the then world champion Kasparov under tournament conditions in 1997.7 However, this does not imply that Deep Blue was at that point overall more capable than Kasparov, i.e. had a higher Elo rating.8\nAccording to the Swedish Chess Computer Association records, 2006 is the year when the highest machine Elo rating surpassed the highest human Elo (both the highest at the time, and the highest in 2020). In particular, Rybka 1.2 was rated 2902. At the time, the highest human Elo rating was Garry Kasparov at 2851.9\nThus it took around 49 years for computers to progress from beginner human-level chess to superhuman chess.
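For readers less familiar with Elo, the standard logistic formula converts the rating gaps above into expected scores; a small illustrative sketch (not something the original page computes):

```python
def expected_score(rating_a, rating_b):
    """Expected score of player A against player B under the standard Elo formula."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# Rybka 1.2 (2902) vs. Kasparov (2851), the 2006 crossover noted above.
print(round(expected_score(2902, 2851), 2))  # ≈ 0.57

# A beginner (~800) against the highest recorded human rating (2882).
print(expected_score(800, 2882))             # ≈ 6e-06, i.e. effectively zero
```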
\nPre-human range\nThe Chess Programming Wiki says that the 1957 Bernstein Chess program was the first complete chess program.10 This seems likely to be the same Bernstein program noted by Moravec as having an 800 Elo in 1957 (see above). Thus if correct, this means that once machines could complete the task of playing chess at all, they could already do it at human beginner level. This may not be accurate (none of these sources appear to be very reliable), but it strongly suggests that the time between lowest possible performance and beginner human performance was not as long as decades.\nSuperhuman performance range\nThe Swedish Chess Computer Association has measured continued progress. As of July 2020, the best chess machine is rated 355811, whereas in 2019 sometime, the highest rating was 3529.12 Alphazero also appeared to have an Elo just below 3500 in 2017, according to its creators (from a small figure with unclear labels).13 \nWe know of no particular upper bound to chess performance.\nThis suggests that so far the superhuman range in chess playing has permitted at least least 14 years of further progress, and may permit much more.\nPrimary author: Katja Grace\nNotes\n", "url": "https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-chess/", "title": "Time for AI to cross the human performance range in chess", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-10-15T23:36:00+00:00", "paged_url": "https://aiimpacts.org/feed?paged=6", "authors": ["Katja Grace"], "id": "1c562ea5ee41a722102171628ea41fca", "summary": []} {"text": "Relevant pre-AGI possibilities\n\nBy Daniel Kokotajlo1, 18 June 2020.\nEpistemic status: I started this as an AI Impacts research project, but given that it’s fundamentally a fun speculative brainstorm, it worked better as a blog post.\nThe default, when reasoning about advanced artificial general intelligence (AGI), is to imagine it appearing in a world that is basically like the present. Yet almost everyone agrees the world will likely be importantly different by the time advanced AGI arrives. \nOne way to address this problem is to reason in abstract, general ways that are hopefully robust to whatever unforeseen developments lie ahead. Another is to brainstorm particular changes that might happen, and check our reasoning against the resulting list.\n\n\nThis is an attempt to begin the second approach.2 I sought things that might happen that seemed both (a) within the realm of plausibility, and (b) probably strategically relevant to AI safety or AI policy.\n\n\nI collected potential list entries via brainstorming, asking others for ideas, googling, and reading lists that seemed relevant (e.g. Wikipedia’s list of emerging technologies,3 a list of Ray Kurzweil’s predictions4, and DARPA’s list of projects.5)\n\n\nI then shortened the list based on my guesses about the plausibility and relevance of these possibilities. I did not put much time into evaluating any particular possibility, so my guesses should not be treated as anything more. I erred on the side of inclusion, so the entries in this list vary greatly in plausibility and relevance. I made some attempt to categorize these entries and merge similar ones, but this document is fundamentally a brainstorm, not a taxonomy, so keep your expectations low.\n\n\nI hope to update this post as new ideas find me and old ideas are refined or refuted. 
I welcome suggestions and criticisms; email me (gmail kokotajlod) or leave a comment.\nInteractive “Generate Future” button\nAsya Bergal and I made an interactive button to go with the list. The button randomly generates a possible future according to probabilities that you choose. It is very crude, but it has been fun to play with, and perhaps even slightly useful. For example, once I decided that my credences were probably systematically too high because the futures generated with them were too crazy. Another time I used the alternate method (described below) to recursively generate a detailed future trajectory, written up here. I hope to make more trajectories like this in the future, since I think this method is less biased than the usual method for imagining detailed futures.6\nTo choose probabilities, scroll down to the list below and fill each box with a number representing how likely you think the entry is to occur in a strategically relevant way prior to the advent of advanced AI. (1 means certainly, 0 means certainly not. The boxes are all 0 by default.) Once you are done, scroll back up and click the button.\nA major limitation is that the button doesn’t take correlations between possibilities into account. The user needs to do this themselves, e.g. by redoing any generated future that seems silly, or by flipping a coin to choose between two generated possibilities that seem contradictory, or by choosing between them based on what else was generated.\nHere is an alternate way to use this button that mostly avoids this limitation:\nFill all the boxes with probability-of-happening-in-the-next-5-years (instead of happening before advanced AGI, as in the default method)Click the “Generate Future” button and record the results, interpreted as what happens in the next 5 years.Update the probabilities accordingly to represent the upcoming 5-year period, in light of what has happened so far.Repeat steps 2 – 4 until satisfied. I used a random number generator to determine whether AGI arrived each year.\nIf you don’t want to choose probabilities yourself, click “fill with pre-set values” to populate the fields with my non-expert, hasty guesses.7\n\n\nGENERATE FUTURE\n\nFill with pre-set values (default method)\n\nFill with pre-set values (alternate method)\n\n\n\nKey\nLetters after list titles indicate that I think the change might be relevant to:\nTML: Timelines—how long it takes for advanced AI to be developedTAS: Technical AI safety—how easy it is (on a technical level) to make advanced AI safe, or what sort of technical research needs to be donePOL: Policy—how easy it is to coordinate relevant actors to mitigate risks from AI, and what policies are relevant to this. CHA: Chaos—how chaotic the world is.8MIS: Miscellaneous\nEach possibility is followed by some explanation or justification where necessary, and a non-exhaustive list of ways the possibility may be relevant to AI outcomes in particular (which is not guaranteed to cover the most important ones). Possibilities are organized into loose categories created after the list was generated. \nList of strategically relevant possibilities\nInputs to AI\n\n1. Advanced science automation and research tools (TML, TAS, CHA, MIS)\n\n\n\n\nNarrow research and development tools might speed up technological progress in general or in specific domains. For example, several of the other technologies on this list might be achieved with the help of narrow research and development tools.\n\n2. 
Dramatically improved computing hardware (TML, TAS, POL, MIS)\n\n\n\n\nBy this I mean computing hardware improves at least as fast as Moore’s Law. Computing hardware has historically become steadily cheaper, though it is unclear whether this trend will continue. Some example pathways by which hardware might improve at least moderately include:\nOrdinary scale economies9Improved data locality10Increased specialization for specific AI applications11Optical computing12Neuromorphic chips133D integrated circuits14Wafer-scale chips15Quantum computing16Carbon nanotube field-effect transistors17\nDramatically improved computing hardware may: \nCause any given AI capability to arrive earlierIncrease the probability of hardware overhang. Affect which kinds of AI are developed first (e.g. those which are more compute-intensive.) Affect AI policy, e.g. by changing the relative importance of hardware vs. research talent\n\n3. Stagnation in computing hardware progress (TML, TAS, POL, MIS)\n\n\n\n\nMany forecasters think Moore’s Law will be ending soon (as of 2020).18 In the absence of successful new technologies, computing hardware could progress substantially more slowly than Moore’s Law would predict.\nStagnation in computing hardware progress may: \nCause any given AI capability to arrive laterDecrease the probability of hardware overhang. Affect which kinds of AI are developed first (e.g. those which are less compute-intensive.) Influence the relative strategic importance of hardware compared to researchersMake energy and raw materials a greater part of the cost of computing\n\n4. Manufacturing consolidation (POL)\n\n\n\n\nChip fabrication has become more specialized and consolidated over time, to the point where all of the hardware relevant to AI research depends on production from a handful of locations.19 Perhaps this trend will continue.\nOne country (or a small number working together) could control or restrict AI research by controlling the production and distribution of necessary hardware.\n\n5. Advanced additive manufacturing (e.g. 3D printing or nanotechnology) (TML, CHA)\n\n\n\n\nAdvanced additive manufacturing could lead to various materials, products and forms of capital being cheaper and more broadly accessible, as well as to new varieties of them becoming feasible and quicker to develop. For example, sufficiently advanced 3D printing could destabilize the world by allowing almost anyone to secretly produce terror weapons. If nanotechnology advances rapidly, so that nanofactories can be created, the consequences could be dramatic:20 \nGreatly reduced cost of most manufactured productsGreatly faster growth of capital formationLower energy costsNew kinds of materials, such as stronger, lighter spaceship hullsMedical nanorobotsNew kinds of weaponry and other disruptive technologies\n\n6. Massive resource glut (TML, TAS, POL, CHA)\n\n\n\n\nBy “glut” I don’t necessarily mean that there is too much of a resource. Rather, I mean that the real price falls dramatically. Rapid decreases in the price of important resources have happened before.21 It could happen again via:\nCheap energy (e.g. fusion power, He-3 extracted from lunar regolith,22 methane hydrate extracted from the seafloor,23 cheap solar energy24)A source of abundant cheap raw materials (e.g. asteroid mining,25 undersea mining26)Automation of relevant human labor. 
Where human labor is an important part of the cost of manufacturing, resource extraction, or energy production, automating labor might substantially increase economic growth, which might result in a greater amount of resources devoted to strategically relevant things (such as AI research) which is relevantly similar to a price drop even if technically the price doesn’t drop.27 and therefore investment in AI.\nMy impression is that energy, raw materials, and unskilled labor combined are less than half the cost of computing, so a decrease in the price of one of these (and possibly even all three) would probably not have large direct consequences on the price of computing.28 But a resource glut might lead to general economic prosperity, with many subsequent effects on society, and moreover the cost structure of computing may change in the future, creating a situation where a resource glut could dramatically lower the cost of computing.29\n\n7. Hardware overhang (TML, TAS, POL)\n\n\n\n\nHardware overhang refers to a situation where large quantities of computing hardware can be diverted to running powerful AI systems as soon as the AI software is developed.\nIf advanced AGI (or some other powerful software) appears during a period of hardware overhang, its capabilities and prominence in the world could grow very quickly.\n\n8. Hardware underhang (TML, TAS, POL)\n\n\n\n\nThe opposite of hardware overhang might happen. Researchers may understand how to build advanced AGI at a time when the requisite hardware is not yet available.  For example, perhaps the relevant AI research will involve expensive chips custom-built for the particular AI architecture being trained.\nA successful AI project during a period of hardware underhang would not be able to instantly copy the AI to many other devices, nor would they be able to iterate quickly and make an architecturally improved version.\nTechnical tools\n\n9. Prediction tools (TML, TAS, POL, CHA, MIS)\n\n\n\n\nTools may be developed that are dramatically better at predicting some important aspect of the world; for example, technological progress, cultural shifts, or the outcomes of elections, military clashes, or research projects. Such tools could for instance be based on advances in AI or other algorithms, prediction markets, or improved scientific understanding of forecasting (e.g. lessons from the Good Judgment Project).\nSuch tools might conceivably increase stability via promoting accurate beliefs, reducing surprises, errors or unnecessary conflicts. However they could also conceivably promote instability via conflict encouraged by a powerful new tool being available to a subset of actors. Such tools might also help with forecasting the arrival and effects of advanced AGI, thereby helping guide policy and AI safety work. They might also accelerate timelines, for instance by assisting project management in general and notifying potential investors when advanced AGI is within reach.\n\n10. Persuasion tools (POL, CHA, MIS)\n\n\n\n\nPresent technology for influencing a person’s beliefs and behavior is crude and weak, relative to what one can imagine. Tools may be developed that more reliably steer a person’s opinion and are not so vulnerable to the victim’s reasoning and possession of evidence. These could involve:\nAdvanced understanding of how humans respond to stimuli depending on context, based on massive amounts of dataCoaching for the user on how to convince the target of somethingSoftware that interacts directly with other people, e.g. 
via text or email \nStrong persuasion tools could:\nAllow a group in conflict who has them to quickly attract spies and then infiltrate an enemy groupAllow governments to control their populationsAllow corporations to control their employeesLead to a breakdown of collective epistemology30\n\n11. Theorem provers (TAS)\n\n\n\n\nPowerful theorem provers might help with the kinds of AI alignment research that involve proofs or help solve computational choice problems.\n\n12. Narrow AI for natural language processing (TML, TAS, CHA)\n\n\n\n\nResearchers may develop narrow AI that understands human language well, including concepts such as “moral” and “honest.”\nNatural language processing tools could help with many kinds of technology, including AI and various  AI safety projects. They could also help enable AI arbitration systems. If researchers develop software that can autocomplete code—much as it currently autocompletes text messages—it could multiply software engineering productivity.\n\n13. AI interpretability tools (TML, TAS, POL)\n\n\n\n\nTools for understanding what a given AI system is thinking, what it wants, and what it is planning would be useful for AI safety.31\n\n14. Credible commitment mechanisms (POL, CHA)\n\n\n\n\n\nThere are significant restrictions on which contracts governments are willing and able to enforce–for example, they can’t enforce a contract to try hard to achieve a goal, and won’t enforce a contract to commit a crime. Perhaps some technology (e.g. lie detectors, narrow AI, or blockchain) could significantly expand the space of possible credible commitments for some relevant actors: corporations, decentralized autonomous organizations, crowds of ordinary people using assurance contracts, terrorist cells, rogue AGIs, or even individuals.\nThis might destabilize the world by making threats of various kinds more credible, for various actors. It might stabilize the world in other ways, e.g. by making it easier for some parties to enforce agreements.\n\n15. Better coordination tools (POL, CHA, MIS)\n\n\n\n\nTechnology for allowing groups of people to coordinate effectively could improve, potentially avoiding losses from collective choice problems, helping existing large groups (e.g. nations and companies) to make choices in their own interests, and producing new forms of coordinated social behavior (e.g. the 2010’s saw the rise of the Facebook group)).  Dominant assurance contracts,32 improved voting systems,33 AI arbitration systems, lie detectors, and similar things not yet imagined might significantly improve the effectiveness of some groups of people.\nIf only a few groups use this technology, they might have outsized influence. If most groups do, there could be a general reduction in conflict and increase in good judgment.\nHuman effectiveness\n\n16. Deterioration of collective epistemology (TML, TAS, POL, CHA, MIS)\n\n\n\n\nSociety has mechanisms and processes that allow it to identify new problems, discuss them, and arrive at the truth and/or coordinate a solution. These processes might deteriorate. 
Some examples of things which might contribute to this:\nIncreased investment in online propaganda by more powerful actors, perhaps assisted by chatbots, deepfakes and persuasion toolsEcho chambers, filter bubbles, and online polarization, perhaps driven in part by recommendation algorithmsMemetic evolution in general might intensify, increasing the spreadability of ideas/topics at the expense of their truth/importance34Trends towards political polarization and radicalization might exist and continueTrends towards general institutional dysfunction might exist and continue\nThis could cause chaos in the world in general, and lead to many hard-to-predict effects. It would likely make the market for influencing the course of AI development less efficient (see section on “Landscape of…” below) and present epistemic hazards for anyone trying to participate effectively.\n\n17. New and powerful forms of addiction (TML, POL, CHA, MIS)\n\n\n\n\nTechnology that wastes time and ruins lives could become more effective. The average person spends 144 minutes per day on social media, and there is a clear upward trend in this metric.35 The average time spent watching TV is even greater.36 Perhaps this time is not wasted but rather serves some important recuperative, educational, or other function. Or perhaps not; perhaps instead the effect of social media on society is like the effect of a new addictive drug — opium, heroin, cocaine, etc. — which causes serious damage until society adapts. Maybe there will be more things like this: extremely addictive video games, or newly invented drugs, or wireheading (directly stimulating the reward circuitry of the brain).37\nThis could lead to economic and scientific slowdown. It could also concentrate power and influence in fewer people—those who for whatever reason remain relatively unaffected by the various productivity-draining technologies. Depending on how these practices spread, they might affect some communities more or sooner than others.\n\n18. Medicine or education to boost human mental abilities (TML, CHA, MIS)\n\n\n\n\nTo my knowledge, existing “study drugs” such as modafinil don’t seem to have substantially sped up the rate of scientific progress in any field. However, new drugs (or other treatments) might be more effective. Moreover, in some fields, researchers typically do their best work at a certain age. Medicine which extends this period of peak mental ability might have a similar effect.\nSeparately, there may be substantial room for improvement in education due to big data, online classes, and tutor software.38\nThis could speed up the rate of scientific progress in some fields, among other effects.\n\n19. Genetic engineering, human cloning, iterated embryo selection (TML, POL, CHA, MIS)\n\n\n\n\n\nChanges in human capabilities or other human traits via genetic interventions39 could affect many areas of life. If the changes were dramatic, they might have a large impact even if only a small fraction of humanity were altered by them. 
\nChanges in human capabilities or other human traits via genetic interventions might:\nAccelerate research in generalDifferentially accelerate research projects that depend more on “genius” and less on money or experienceInfluence politics and ideologyCause social upheavalIncrease the number of people capable of causing great harmHave a huge variety of effects not considered here, given the ubiquitous relevance of human nature to eventsShift the landscape of effective strategies for influencing AI development (see below)\n\n20. Landscape of effective strategies for influencing AI development changes substantially (CHA, MIS)\n\n\n\n\nFor a person at a time, there is a landscape of strategies for influencing the world, and in particular for influencing AI development and the effects of advanced AGI. The landscape could change such that the most effective strategies for influencing AI development are:\nMore or less reliably helpful (e.g. working for an hour on a major unsolved technical problem might have a low chance of a very high payoff, and so not be very reliable)More or less “outside the box” (e.g. being an employee, publishing academic papers, and signing petitions are normal strategies, whereas writing Harry Potter fanfiction to illustrate rationality concepts and inspire teenagers to work on AI safety is not)40Easier or harder to find, such that marginal returns to investment in strategy research change\nHere is a non-exhaustive list of reasons to think these features might change systematically over time:\nAs more people devote more effort to achieving some goal, one might expect that effective strategies become common, and it becomes harder to find novel strategies that perform better than common strategies. As advanced AI becomes closer, one might expect more effort to flow into influencing the situation. Currently some ‘markets’ are more efficient than others; in some the orthodox strategies are best or close to the best, whereas in others clever and careful reasoning can find strategies vastly better than what most people do. How efficient a market is depends on how many people are genuinely trying to compete in it, and how accurate their beliefs are. For example, the stock market and the market for political influence are fairly efficient, because many highly-knowledgeable actors are competing.  As more people take interest, the ‘market’ for influencing the course of AI may become more efficient. (This would also decrease the marginal returns to investment in strategy research, by making orthodox strategies closer to optimal.) If there is a deterioration of social epistemology (see below), the market might instead become less efficient.  Currently there are some tasks at which the most skilled people are not much better than the average person  (e.g. manual labor, voting) and others in which the distribution of effectiveness is heavy-tailed, such that a large fraction of the total influence comes from a small fraction of individuals (e.g. theoretical math, donating to politicians). The types of activity that are most useful for influencing the course of AI development may change over time in this regard, which in turn might affect the strategy landscape in all three ways described above.Transformative technologies can lead to new opportunities and windfalls for people who recognize them early. As more people take interest, opportunities for easy success disappear. 
Perhaps there will be a burst of new technologies prior to advanced AGI, creating opportunities for unorthodox or risky strategies to be very successful.\nA shift in the landscape of effective strategies for influencing the course of AI is relevant to anyone who wants to have an effective strategy for influencing the course of AI.41 If it is part of a more general shift in the landscape of effective strategies for other goals — e.g. winning wars, making money, influencing politics — the world could be significantly disrupted in ways that may be hard to predict.\n\n21. Global economic collapse (TML, CHA, MIS)\n\n\n\n\nThis might slow down research or precipitate other relevant events, such as war.\n\n22. Scientific stagnation (TML, TAS, POL, CHA, MIS)\n\n\n\n\nThere is some evidence that scientific progress in general might be slowing down. For example, the millennia-long trend of decreasing economic doubling time seems to have stopped around 1960.42 Meanwhile, scientific progress has arguably come from increased investment in research. Since research investment has been growing faster than the economy, it might eventually saturate and grow only as fast as the economy.43\nThis might slow down AI research, making the events on this list (but not the technologies) more likely to happen before advanced AGI.\n\n23. Global catastrophe (TML, POL, CHA)\n\n\n\n\nHere are some examples of potential global catastrophes:\nClimate change tail risks, e.g. feedback loop of melting permafrost releasing methane44Major nuclear exchangeGlobal pandemicVolcano eruption that leads to 10% reduction in global agricultural production45Exceptionally bad solar storm knocks out world electrical grid46Geoengineering project backfires or has major negative side-effects47\nA global catastrophe might be expected to cause conflict and slowing of projects such as research, though it could also conceivably increase attention on projects that are useful for dealing with the problem. It seems likely to have other hard to predict effects.\nAttitudes toward AGI\n\n24. Shift in level of public attention on AGI (TML, POL, CHA, MIS)\n\n\n\n\n\nThe level of attention paid to AGI by the public, governments, and other relevant actors might increase (e.g. due to an impressive demonstration or a bad accident) or decrease (e.g. due to other issues drawing more attention, or evidence that AI is less dangerous or imminent).\nChanges in the level of attention could affect the amount of work on AI and AI safety. More attention could also lead to changes in public opinion such as panic or an AI rights movement. \nIf the level of attention increases but AGI does not arrive soon thereafter, there might be a subsequent period of disillusionment.\n\n25. Change in investment in AGI development (TML, TAS, POL)\n\n\n\n\nThere could be a rush for AGI, for instance if major nations begin megaprojects to build it. Or there could be a rush away from AGI, for instance if it comes to be seen as immoral or dangerous like human cloning or nuclear rocketry. \nIncreased investment in AGI might make advanced AGI happen sooner, with less hardware overhang and potentially less proportional investment in safety. Decreased investment might have the opposite effects.\n\n26. New social movements or ideological shifts (TML, TAS, POL, MIS)\n\n\n\n\nThe communities that build and regulate AI could undergo a substantial ideological shift. Historically, entire nations have been swept by radical ideologies within about a decade or so, e.g. 
Communism, Fascism, the Cultural Revolution, and the First Great Awakening.48 Major ideological shifts within communities smaller than nations (or within nations, but on specific topics) presumably happen more often. There might even appear powerful social movements explicitly focused on AI, for instance in opposition to it  or attempting to secure legal rights and moral status for AI agents.49 Finally, there could be a general rise in extremist movements, for instance due to a symbiotic feedback effect hypothesized by some,50 which might have strategically relevant implications even if mainstream opinions do not change.\nChanges in public opinion on AI might change the speed of AI research, change who is doing it, change which types of AI are developed or used, and limit or alter discussion. For example, attempts to limit an AI system’s effects on the world by containing it might be seen as inhumane, as might adversarial and population-based training methods. Broader ideological change or a rise in extremisms might increase the probability of a massive crisis, revolution, civil war, or world war.\n\n27. Harbinger of AGI (ALN, POL, MIS)\n\n\n\n\nEvents could occur that provide compelling evidence, to at least a relevant minority of people, that advanced AGI is near.\nThis could increase the amount of technical AI safety work and AI policy work being done, to the extent that people are sufficiently well-informed and good at forecasting. It could also enable people already doing such work to more efficiently focus their efforts on the true scenario.\n\n28. AI alignment warning shot (ALN, POL)\n\n\n\n\nA convincing real-world example of AI alignment failure could occur.\nThis could motivate more effort into mitigating AI risk and perhaps also provide useful evidence about some kinds of risks and how to avoid them.\nPrecursors to AGI\n\n29. Brain scanning (TML, TAS, POL, CHA, MIS)\n\n\n\n\nAn accurate way to scan human brains at a very high resolution could be developed.\nCombined with a good low-level understanding of the brain (see below) and sufficient computational resources, this might enable brain emulations, a form of AGI in which the AGI is similar, mentally, to some original human. This would change the kind of technical AI safety work that would be relevant, as well as introducing new AI policy questions. It would also likely make AGI timelines easier to predict. It might influence takeoff speeds.\n\n30. Good low-level understanding of the brain (TML, TAS, POL, CHA, MIS)\n\n\n\n\nTo my knowledge, as of April 2020, humanity does not understand how neurons work well enough to accurately simulate the behavior of a C. Elegans worm, though all connections between its neurons have been mapped51 Ongoing progress in modeling individual neurons could change this, and perhaps ultimately allow accurate simulation of entire human brains.\nCombined with brain scanning (see above) and sufficient computational resources, this may enable brain emulations, a form of AGI in which the AI system is similar, mentally, to some original human. This would change the kind of AI safety work that would be relevant, as well as introducing new AI policy questions. It would also likely make the time until AGI is developed more predictable. It might influence takeoff speeds. Even if brain scanning is not possible, a good low-level understanding of the brain might speed AI development, especially of systems that are more similar to human brains.\n\n31. 
Brain-machine interfaces (TML, TAS, POL, CHA, MIS)\n\n\n\n\nBetter, safer, and cheaper methods to control computers directly with our brains may be developed. At least one project is explicitly working towards this goal.52\nStrong brain-machine interfaces might:\nAccelerate research, including on AI and AI safety53Accelerate in vitro brain technologyAccelerate mind-reading, lie detection, and persuasion toolsDeteriorate collective epistemology (e.g. by contributing to wireheading or short attention spans)Improve collective epistemology (e.g. by improving communication abilities)Increase inequality in influence among people\n\n32. In vitro brains (TML, TAS, POL, CHA)\n\n\n\n\nNeural tissue can be grown in a dish (or in an animal and transplanted) and connected to computers, sensors, and even actuators.54 If this tissue can be trained to perform important tasks, and the technology develops enough, it might function as a sort of artificial intelligence. Its components would not be faster than humans, but it might be cheaper or more intelligent. Meanwhile, this technology might also allow fresh neural tissue to be grafted onto existing humans, potentially serving as a cognitive enhancer.55\nThis might change the sorts of systems AI safety efforts should focus on. It might also automate much human labor, inspire changes in public opinion about AI research (e.g. promoting concern about the rights of AI systems), and have other effects which are hard to predict.\n\n33. Weak AGI (TML, TAS, POL, CHA, MIS)\n\n\n\n\nResearchers may develop something which is a true artificial general intelligence—able to learn and perform competently all the tasks humans do—but just isn’t very good at them, at least, not as good as a skilled human. \nIf weak AGI is faster or cheaper than humans, it might still replace humans in many jobs, potentially speeding economic or technological progress. Separately, weak AGI might provide testing opportunities for technical AI safety research. It might also change public opinion about AI, for instance inspiring a “robot rights” movement, or an anti-AI movement.\n\n34. Expensive AGI (TML, TAS, POL, CHA, MIS)\n\n\n\n\nResearchers may develop something which is a true artificial general intelligence, and moreover is qualitatively more intelligent than any human, but is vastly more expensive, so that there is some substantial period of time before cheap AGI is developed. \nAn expensive AGI might contribute to endeavors that are sufficiently valuable, such as some science and technology, and so may have a large effect on society. It might also prompt increased effort on AI or AI safety, or inspire public thought about AI that produces changes in public opinion and thus policy, e.g. regarding the rights of machines. It might also allow opportunities for trialing AI safety plans prior to very widespread use.\n\n35. Slow AGI (TML, TAS, POL, CHA, MIS)\n\n\n\n\nResearchers may develop something which is a true artificial general intelligence, and moreover is qualitatively as intelligent as the smartest humans, but takes a lot longer to train and learn than today’s AI systems.\nSlow AGI might be easier to understand and control than other kinds of AGI, because it would train and learn more slowly, giving humans more time to react and understand it. It might produce changes in public opinion about AI.\n\n36. 
Automation of human labor (TML, TAS, POL, CHA, MIS)\n\n\n\n\nIf the pace of automation substantially increases prior to advanced AGI, there could be social upheaval and also dramatic economic growth. This might affect investment in AI.\nShifts in the balance of power\n\n37. Major leak of AI research (TML, TAS, POL, CHA)\n\n\n\n\nEdward Snowden defected from the NSA and made public a vast trove of information. Perhaps something similar could happen to a leading tech company or AI project. \nIn a world where much AI progress is hoarded, such an event could accelerate timelines and make the political situation more multipolar and chaotic.\n\n38. Shift in favor of espionage (POL, CHA, MIS)\n\n\n\n\nEspionage techniques might become more effective relative to counterespionage techniques. In particular:\nQuantum computing could break current encryption protocols.56Automated vulnerability detection57 could turn out to have an advantage over automated cyberdefense systems, at least in the years leading up to advanced AGI.\nMore successful espionage techniques might make it impossible for any AI project to maintain a lead over other projects for any substantial period of time. Other disruptions may become more likely, such as hacking into nuclear launch facilities, or large scale cyberwarfare.\n\n39. Shift in favor of counterespionage (POL, CHA, MIS)\n\n\n\n\nCounterespionage techniques might become more effective relative to espionage techniques than they are now. In particular:\nPost-quantum encryption might be secure against attack by quantum computers.58Automated cyberdefense systems could turn out to have an advantage over automated vulnerability detection. Ben Garfinkel and Allan Dafoe59 give reason to think the balance will ultimately shift to favor defense.\nStronger counterespionage techniques might make it easier for an AI project to maintain a technological lead over the rest of the world. Cyber wars and other disruptive events could become less likely.\n\n40. Broader or more sophisticated surveillance (POL, CHA, MIS)\n\n\n\n\nMore extensive or more sophisticated surveillance could allow strong and selective policing of technological development. It would also have other social effects, such as making totalitarianism easier and making terrorism harder.\n\n41. Autonomous weapons (POL, CHA)\n\n\n\n\nAutonomous weapons could shift the balance of power between nations, or shift the offense-defense balances resulting in more or fewer wars or terrorist attacks, or help to make totalitarian governments more stable. As a potentially early, visible and controversial use of AI, they may also especially influence public opinion on AI more broadly, e.g. prompting anti-AI sentiment.\n\n42. Shift in importance of governments, corporations, and other groups in AI development (POL, CHA)\n\n\n\n\nCurrently both governments and corporations are strategically relevant actors in determining the course of AI development. Perhaps governments will become more important, e.g. by nationalizing and merging AI companies. Or perhaps governments will become less important, e.g. by not paying attention to AI issues at all, or by becoming less powerful and competent generally. Perhaps some third kind of actor (such as religion, insurgency, organized crime, or special individual) will become more important, e.g. due to persuasion tools, countermeasures to surveillance, or new weapons of guerilla warfare.60\nThis influences AI policy by affecting which actors are relevant to how AI is developed and deployed.\n\n43. 
Catastrophe in strategically important location (TML, POL, CHA, MIS)\n\n\n\n\n\nPerhaps some strategically important location (e.g. tech hub, seat of government, or chip fab) will be suddenly destroyed. Here is a non-exhaustive list of ways this could happen:\nTerrorist attack with weapon of mass destructionMajor earthquake, flood, tsunami, etc. (e.g. this research claims a 2% chance of magnitude 8.0 or greater earthquake in San Francisco by 2044.)61\nIf it happens, it might be strategically disruptive, causing e.g. the dissolution and diaspora of the front-runner AI project, or making it more likely that some government makes a radical move of some sort.\n\n44. Change in national AI research loci (POL, CHA)\n\n\n\n\nFor instance, a new major national hub of AI research could arise, rivalling the USA and China in research output. Or either the USA or China could cease to be relevant to AI research.\nThis might make coordinating AI policy more difficult. It might make a rush for AGI more or less likely.\n\n45. Large war (TML, POL, CHA, MIS)\n\n\n\n\nThis might cause short-term, militarily relevant AI capabilities research to be prioritized over AI safety and foundational research. It could also make global coordination on AI policy difficult.\n\n46. Civil war or regime change in major relevant countries (POL, CHA, MIS)\n\n\n\n\nThis might be very dangerous for people living in those countries. It might change who the strategically relevant actors are for shaping AI development. It might result in increased instability, or cause a new social movement or ideological shift.\n\n47. Formation of a world government (POL, CHA)\n\n\n\n\nThis would make coordinating AI policy easier in some ways (e.g. there would be no need for multiple governing bodies to coordinate their policy at the highest level), however it might be harder in others (e.g. there might be a more complicated regulatory system overall).\nNotes", "url": "https://aiimpacts.org/relevant-pre-agi-possibilities/", "title": "Relevant pre-AGI possibilities", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-06-19T13:40:00+00:00", "paged_url": "https://aiimpacts.org/feed?paged=6", "authors": ["Daniel Kokotajlo"], "id": "243b827a1dea9575dc9c80338b6d6a6a", "summary": []} {"text": "Description vs simulated prediction\n\nBy Rick Korzekwa, 22 April 2020\nDuring our investigation into discontinuous progress1, we had some discussions about what exactly it is that we’re trying to do when we analyze discontinuities in technological progress. There was some disagreement and (at least on my part) some confusion about what we were trying to learn from all of this. I think this comes from two separate, more general questions that can come up when forecasting future progress. These questions are closely related and both can be answered in part by analyzing discontinuities. First I will describe these questions generically, then I will explain how they can become confused in the context of analyzing discontinuous progress. \nQuestion 1: How did tech progress happen in the past? \nKnowing something about how historical progress happened is crucial to making good forecasts now, both from the standpoint of understanding the underlying mechanisms and establishing base rates. Merely being able to describe what happened and why can help us make sense of what’s happening now and should enable us to make better predictions. Such descriptions vary in scope and detail. 
For example:\nThe number of transistors per chip doubled roughly every two years from 1970 to 2020\nWartime funding during WWII led to the rapid development and scaling of penicillin production, which contributed to a 90% decrease in US syphilis mortality from 1945 to 1967\nLarge jumps in metrics for technological progress were not always accompanied by fundamental scientific breakthroughs\nThis sort of analysis may be used for other work, in addition to forecasting rates of progress. In the context of our work on discontinuities, answering this question mostly consists of describing quantitatively how metrics for progress evolve over time.\nQuestion 2: How would we have fared making predictions in the past?\nThis is actually a family of questions aimed at developing and calibrating prediction methods based on historical data. These are questions like:\nIf, in the past, we’d predicted that the current trend would hold, how often and by how much would we have been wrong?\nAre there domains in which we’d have fared better than others?\nAre there heuristics we can use to make better predictions?\nWhich methods for characterizing trends in progress would have performed the best?\nHow often would we have seen hints that a discontinuity or change in rate of progress was about to happen?\nThese questions often, but not always, require the same approach\nIn the context of our work on discontinuous progress, these questions converge on the same methods most of the time. For many of our metrics, there was a clear trend leading up to a discontinuity, and describing what happened is essentially the same as attempting to (naively) predict what would have happened if the discontinuity had not happened. But there are times when they differ. In particular, this can happen when we have different information now than we would have had at the time the discontinuity happened, or when the naive approach is clearly missing something important. Three cases of this that come to mind are:\nThe trend leading up to the discontinuity was ambiguous, but later data made it less ambiguous. For example, advances in steamships improved times for crossing the Atlantic, but it was not clear whether this progress was exponential or linear at the time that flight or telecommunications were invented. But if we look at progress that occurred for transatlantic ship voyages after flight, we can see that the overall trend was linear. If we want to answer the question “What happened?”, we might say that progress in steamships was linear, so that it would have taken 500 years at the rate of advancement for steamships to bring crossing time down to that of the first transatlantic flight. If we want to answer the question “How much would this discontinuity have affected our forecasts at the time?”, we might say that it looked exponential, so that our forecasts would have been wrong by a substantially shorter amount of time.\nWe now have access to information from before the discontinuity that nobody (or no one person) had access to at the time. In the past, the world was much less connected, and it is not clear who knew about what at the time. For example, building heights, altitude records, bridge spans, and military capabilities all showed progress across different parts of the world, and it seems likely that nobody had access to all the information that we have now, so that forecasting may have been much harder or yielded different results. Information that is actively kept secret may have made this problem worse. 
It seems plausible that nobody knew the state of both the Manhattan Project and the German nuclear weapons program at the time that the first nuclear weapon was tested in 1945.\nThe inside view overwhelms the outside view. For example, the second transatlantic telegraph cable was much, much better than the first. Using our methodology, it was nearly as large an advancement over the first cable as the first cable was over mailing by ship. But we lose a lot by viewing these advances only in terms of their deviation from the previous trend. The first cable had extremely poor performance, while the second performed about as well as a typical high performance telegraph cable did at the time. If we were trying to predict future progress at the time, we’d focus on questions like “How long do we think it will take to get a normal cable working?” or “How soon will it be until someone is willing to fund the next cable laying expedition?”, not “If we draw a line through the points on this graph, where does that take us?” (Though that outside view may still be worth consideration.) However, if we’re just trying to describe how the metric evolved over time, then the correct thing to do is to just draw a line through the points as best we can and calculate how far off-trend the new advancement is. \nReasons for focusing more on the descriptive approach for now\nBoth of these questions are important, and we can’t really answer one while totally ignoring the other. But for now, we have focused more on describing what happened (that is, answering question 1). \nThere are several reasons for this, but first I’ll describe some advantages to focusing on simulating historical predictions:\nIt mitigates what may be some misleading results from the descriptive approach. See, for example, the description of the transatlantic telegraph above.\nWe’re trying to do forecasting (or enable others to do it), and really good answers to these questions might be more valuable.\nBut, for now, I think the advantages to focusing on description are greater:\nThe results are more readily reusable for other projects, either by us or by others. For example, answering a question like “How much of an improvement is a typical major advancement over previous technology?”\nIt does not require us to model a hypothetical forecaster. It’s hard to predict what we (or someone else) would have predicted if asked about future progress in weapons technology just before the invention of nuclear weapons. To me, this process feels like it has a lot of moving parts or at least a lot of subjectivity, which leaves room for error, and makes it harder for other people to evaluate our methods.\nIt is easier to build from question 1 to question 2 than the other way around. A description of what happened is a pretty reasonable starting point for figuring out which forecasting methods would have worked.\nIt is easier to compare across technologies using question 1. 
Question 2 requires taking a more inside view, which makes comparisons harder.\n\nNotes", "url": "https://aiimpacts.org/description-vs-simulated-prediction/", "title": "Description vs simulated prediction", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-04-22T16:30:00+00:00", "paged_url": "https://aiimpacts.org/feed?paged=7", "authors": ["richardkorzekwa"], "id": "c99465001a1c5aed30ffe8be1d140e41", "summary": []} {"text": "Surveys on fractional progress towards HLAI\n\nGiven simplistic assumptions, extrapolating fractional progress estimates suggests a median time from 2020 to human-level AI of:\n372 years (2392), based on responses collected in Robin Hanson’s informal 2012-2017 survey.\n36 years (2056), based on all responses collected in the 2016 Expert Survey on Progress in AI.\n142 years (2162), based on the subset of responses to the 2016 Expert Survey on Progress in AI who had been in their subfield for at least 20 years.\n32 years (2052), based on the subset of responses to the 2016 Expert Survey on Progress in AI about progress in deep learning or machine learning as a whole rather than narrow subfields.\n67% of respondents to the 2016 expert survey on AI, and 44% of those who answered in Hanson’s informal survey, said that progress was accelerating.\nDetails\nOne way of estimating how many years something will take is to estimate what fraction of progress toward it has been made over a fixed number of years, then to extrapolate the number of years needed for full progress. As suggested by Robin Hanson,1 this method can provide an estimate for when human-level AI will be developed, if we have data on what fraction of progress toward human-level AI has been made and whether it is proceeding at a constant rate. \nWe know of two surveys that ask about fractional progress and acceleration in specific AI subfields: an informal survey conducted by Robin Hanson in 2012 – 2017, and our 2016 Expert Survey on Progress in AI. We use them to extrapolate progress to human-level AI, assuming that:\nAI progresses at the average rate that people have observed so far.\nHuman-level AI will be achieved when the median subfield reaches human-level.\nAssumptions\nAI progresses at the average rate that people have observed so far\nThe naive extrapolation method described above assumes that AI progresses at the average rate that people have observed so far, but some respondents perceived acceleration or deceleration. If we guess that this change in the rate of progress continues into the future, this suggests that a truer extrapolation of each person’s observations would place human-level performance in their subfield either before or after the naively extrapolated date.\nHuman-level AI will be achieved when the median subfield reaches human-level\nBoth surveys asked respondents about fractional progress in their subfields. Extrapolating out these estimates to get to human-level performance gives some evidence for when AGI may come, but is not a perfect proxy. It may turn out that we get human-level performance in a small number of subfields much earlier than others, such that we count the resulting AI as ‘AGI’, or it may be the case that certain subfields important to AGI do not exist yet.\nHanson AI Expert Survey\nHanson’s survey informally asked ~15 AI experts to estimate how far we’ve come in their own subfield of AI research in the last twenty years, compared to how far we have to go to reach human level abilities. 
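To make the naive extrapolation concrete before going through the survey details, here is a minimal sketch in Python (the responses below are made-up illustrations, not actual survey answers): if a respondent reports that a fraction f of the path to human-level performance was covered over t years, the implied remaining time at the same average rate is t × (1 − f) / f.

from statistics import median

def implied_years_remaining(years_observed, fraction_traversed):
    """Remaining years to human-level performance, assuming progress continues at the observed average rate."""
    return years_observed * (1 - fraction_traversed) / fraction_traversed

# Hypothetical responses: (years in subfield, fraction of the path traversed)
example_responses = [(20, 0.05), (20, 0.10), (20, 0.50)]

remaining = [implied_years_remaining(t, f) for t, f in example_responses]
print(remaining)           # [380.0, 180.0, 20.0]
print(median(remaining))   # 180.0 -> median implied years remaining

The estimates on this page come from applying this arithmetic to each respondent and taking the median.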
The subfields represented were analogical reasoning, knowledge representation, computer-assisted training, natural language processing, constraint satisfaction, robotic grasping manipulation, early-human vision processing, constraint reasoning, and “no particular subfield”. Three respondents said the rate of progress was staying the same, four said it was getting faster, two said it was slowing down, and six did not answer (or may not have been asked). \nThe naive extrapolations2 of the answers from Hanson’s survey give a median time from 2020 to human-level AI (HLAI) of 372 years (2392). See the survey data and our calculations here.\n2016 Expert Survey on Progress in AI\nThe 2016 Expert Survey on Progress in AI (2016 ESPAI) asked machine learning researchers which subfield they were in, how long they had been in their subfield, and what fraction of the remaining path to human-level performance (in their subfield) they thought had been traversed in that time.3 107 out of 111 responses were used in our calculation.4 42 subfields were reported, including “Machine learning”, “Graphical models”, “Speech recognition”, “Optimization”, “Bayesian Learning”, and “Robotics”.5 Notably, Hanson’s survey included subfields that weren’t represented in 2016 ESPAI, including analogic reasoning and knowledge representation. Since 2016 ESPAI was restricted to machine learning researchers, it may exclude non-machine-learning subfields that turn out to be important to fully human-level capabilities.\nAcceleration\n67% of all respondents said progress in their subfield was accelerating (see Figure 1). Most respondents said progress in their subfield was accelerating in each of the subsets we look at below (ML vs narrow subfield, and time in field).\nFigure 1: Number of responses that progress was faster in the first half of the time in the field worked by respondents, the second half, or was about the same in both halves.\nMost respondents think progress is accelerating. If this acceleration continues, our naively extrapolated estimates below may be overestimates for time to human-level performance.\nTime to HLAI\nWe calculated estimated years from 2020 until human-level subfield performance by naively extrapolating the reported fractions of the subfield already traversed.6 Figure 2 below shows the implied estimates for time until human-level performance for all respondents’ answers. These estimates give a median time from 2020 until HLAI of 36 years (2056).\nFigure 2: Extrapolated estimated time until human-level subfield performance for each respondent, arranged by length of time. The last four responses are above 1000 but have been cut off.\nMachine learning vs subfield progress\nSome respondents reported broad ‘subfields’, which encompassed all of machine learning, in particular “Machine learning” or “Deep learning”, while others reported narrow subfields, e.g. “Natural language processing” or “Robotics”. We split the survey data based on this subfield narrowness, guessing that progress on machine learning overall may be a better proxy for AGI overall. Among the 69 respondents who gave answers corresponding to the entire field of machine learning, the median implied time was 32 years (2052). Among the 70 respondents who gave narrow answers, the median implied time was 44 years (2064). Figures 3 and 4 show these estimates.\nFigure 3: Implied estimates for human-level performance based on respondents who specified broad answers, e.g. “Machine learning” when asked about their subfield. 
The last three responses are above 1000 but have been cut off.\nFigure 4: Implied estimates for human-level performance based on respondents who specified narrow answers, e.g. “Natural language processing” when asked about their subfield. The last response is above 1000 but has been cut off.\nThe median implied estimate until human-level performance for machine learning broadly was 12 years sooner than the median estimate for specific subfields. This is counter to what we might expect, if human-level performance in machine learning broadly implies human-level performance on each individual subfield.\nTime spent in field\nRobin Hanson has suggested that his survey may get longer implied forecasts than 2016 ESPAI because he asks exclusively people who have spent at least 20 years in their field.7 Filtering for people who have spent at least 20 years in their field, we have eight responses, and get a median implied time until HLAI of 142 years from 2020 (2162). Filtering for people who have spent at least 10 years in their field, we have 38 responses, and get a median implied time of 86 years (2106). Filtering for people who have spent less than 10 years in their field, we have 69 responses, and get a median implied time of 24 years (2044). Figures 5, 6 and 7 show estimates for each respondent, for each of these classes of time in field. \nFigure 5: Implied estimates for human-level performance based on respondents who were working on their subfield for at least 20 years. The last response is above 1000 but has been cut off.\nFigure 6: Implied estimates for human-level performance based on respondents who were working on their subfield for at least 10 years. The last three responses are above 1000 but have been cut off.\nFigure 7: Implied estimates for human-level performance based on respondents who were working on their subfield for less than 10 years. The last response is above 1000 but has been cut off.\nComparison of the two surveys\nThe median implied estimate from 2020 until human-level performance suggested by responses from 2016 ESPAI (36 years) is an order of magnitude smaller than the one suggested by the Hanson survey (372 years). This appears to be at least partly explained by more experienced researchers giving responses that imply longer estimates. Hanson asks exclusively people who have spent at least 20 years in their subfield, whereas the 2016 survey does not filter based on experience. If we filter 2016 survey respondents for researchers who have spent at least 20 years in their subfield we instead get a median estimate of 142 years. \nMore experienced researchers may generate longer implied estimates because the majority of progress has happened recently– many people think progress accelerated, which is some evidence of this. It could also be that less-experienced researchers feel that progress is more significant than it actually is.\nIf AI research is accelerating and is going to continue accelerating until we get to human-level AI, the time to HLAI may be sooner than these estimates. 
If AI research is accelerating now but is not representative of what progress will look like in the future, longer naive estimates by more experienced researchers may be more appropriate.\nComparison to estimates reached by other survey methods\n2016 ESPAI also asked people to estimate time until human-level machine intelligence (HLMI) by asking them how many years they would give until a 50% chance of HLMI. The median answer for this question in 2016 was 40 years, or 36 years from 2020 (2056), exactly the same as the median answer of 36 years implied by extrapolating fractional progress. The survey also asked about time to HLMI in other ways, which yielded less consistent answers.\nPrimary author: Asya Bergal\nNotes", "url": "https://aiimpacts.org/surveys-on-fractional-progress-towards-hlai/", "title": "Surveys on fractional progress towards HLAI", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-04-14T22:34:35+00:00", "paged_url": "https://aiimpacts.org/feed?paged=7", "authors": ["Asya Bergal"], "id": "482f1f8b977dfb7c4b05aa841e33910f", "summary": []} {"text": "2019 recent trends in Geekbench score per CPU price\n\nFrom 2006 – 2020, Geekbench score per CPU price has grown by around 16% a year, for rates that would yield an order of magnitude over roughly 16 years.\nDetails\nWe looked at Geekbench 5,1 a benchmark for CPU performance. We combined Geekbench’s multi-core scores on its ‘Processor Benchmarks’ page2 with release dates and prices that we scraped from Wikichip and Wikipedia.3 All our data and plots can be found here.4 We then calculated score per dollar and adjusted for inflation using the consumer price index.5 For every year, we calculated the 95th percentile score per dollar. We then fit linear and exponential trendlines to those scores.\nFigure 1 shows all our data for Geekbench score per CPU price.\nFigure 1: Geekbench scores per CPU price, in 2019 dollars. Red dots denote the 95th percentile values in each year from 2006 – 2019 (we start at 2006 since we have <= 2 data points a year prior to then). The exponential trendline through the 95th percentiles is marked in red, while the linear trendline is marked in green. The vertical axis is log-scale.\nThe data is well-described by a linear or an exponential trendline. 
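For readers who want to reproduce this kind of analysis, here is a minimal sketch of fitting an exponential trend to per-year 95th percentile values and converting the fitted rate into an annual growth percentage and a "years per factor of ten" figure. The points below are hypothetical stand-ins, not the actual Geekbench data.

import numpy as np

# Hypothetical (year, 95th percentile score per inflation-adjusted dollar) points
points = [(2006, 20.0), (2010, 38.0), (2015, 80.0), (2019, 145.0)]

years = np.array([y for y, _ in points], dtype=float)
values = np.array([v for _, v in points], dtype=float)

# Fit log(value) = slope * year + intercept; the annual growth factor is exp(slope).
slope, intercept = np.polyfit(years, np.log(values), 1)
annual_growth = np.exp(slope) - 1
years_per_factor_of_ten = np.log(10) / np.log(1 + annual_growth)

print(f"{annual_growth:.1%} per year")                 # roughly 16% per year for the real data
print(f"{years_per_factor_of_ten:.0f} years per 10x")  # roughly 16 years for the real data

Fitting in log space is what makes the trendline exponential; the same points could also be fit linearly for comparison, as in Figure 1.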
Assuming an exponential trend,6 Geekbench score per CPU price grew by around 16% per year between 2006 and 2020, a rate that would yield a factor of ten every 16 years.7\nThis is a markedly slower growth rate than those observed for CPU price performance trends in the past. However, since it is for a different performance metric from any used earlier, it is unclear how similar one should expect them to be: from 1940 to 2008, Sandberg and Bostrom found that CPU price performance grew by a factor of ten every 5.6 years when measured in MIPS per dollar, and by a factor of ten every 7.7 years when measured in FLOPS per dollar.8\nPrimary author: Asya Bergal\nNotes", "url": "https://aiimpacts.org/2019-recent-trends-in-geekbench-score-per-cpu-price/", "title": "2019 recent trends in Geekbench score per CPU price", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-04-14T21:11:41+00:00", "paged_url": "https://aiimpacts.org/feed?paged=7", "authors": ["Asya Bergal"], "id": "5f9fd5b18bc18e1f897b0ce84e0a2391", "summary": []} {"text": "Precedents for economic n-year doubling before 4n-year doubling\n\nThe only times gross world product appears to have doubled in n years without having doubled previously in 4n years were between 4,000 BC and 3,000 BC, and most likely between 10,000 BC and 4,000 BC.\nDetails\nBackground\nA key open question regarding AI risk is how quickly advanced artificial intelligence will ‘take off’, which is to say something like ‘go from being a small source of influence in the world to an overwhelming one’. \nIn Superintelligence1, Nick Bostrom defines the following answers, seemingly in line with common usage:\nSlow takeoff takes decades or centuries\nModerate takeoff takes months or years\nFast takeoff takes minutes to days\nHowever, the specific criteria for takeoff having occurred are generally ambiguous.\nPaul Christiano has suggested2 operationalizing ‘slow takeoff’ as:\nThere will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles. (Similarly, we’ll see an 8 year doubling before a 2 year doubling, etc.)\nHistoric precedents\nWe were interested in whether anything faster than a ‘slow takeoff’ by this definition would be historically unprecedented. That is, we wanted to know whether whenever the economy has doubled in n years, it has always completed a doubling in 4n years or less before the beginning of the n year doubling.\nWe took historic gross world product (GWP) estimates from Wikipedia3 and checked at each date how long it had taken for the economy to double, and whether it had always at some point doubled in as few as four times as many years prior to the start of that doubling.4\nWe found two apparent examples of faster takeoffs, so defined:\nBetween 4,000 BC and 3,000 BC, GWP doubled in 1,000 years, yet it had never before doubled in as few as 4,000 years\nBetween 10,000 BC and 4,000 BC, GWP doubled in 6,000 years, yet there is no record of it doubling earlier in as few as 24,000 years. The records at that point are fairly sparse, so this is less clear, but it seems unlikely that there was a doubling in 24,000 years.5 This appears to coincide with the beginning of agriculture, in around 9000BC.6\nThe 300 year period immediately after 1300 saw a doubling of GWP, and the 1200 years beforehand did not see a doubling; however, there was an earlier doubling within the 1200 years ending at 1200AD. 
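Before returning to that borderline case, here is a minimal sketch of the doubling check described above, run on a hypothetical table of GWP estimates rather than the actual Wikipedia data (years BC are negative; values are arbitrary units):

# Hypothetical (year, GWP) estimates; not the actual data used on this page.
gwp = [(-10000, 1.0), (-4000, 2.1), (-3000, 4.5), (-1000, 9.2), (1, 18.0)]

def years_to_double(series, end_index):
    """Approximate years taken for GWP to double, looking back from series[end_index]."""
    end_year, end_value = series[end_index]
    for year, value in reversed(series[:end_index]):
        if value <= end_value / 2:
            return end_year - year
    return None  # no recorded doubling in the data

def had_prior_fast_doubling(series, start_year, window):
    """Had a doubling taking at most `window` years completed by `start_year`?"""
    for j, (year, _) in enumerate(series):
        if year > start_year:
            break
        d = years_to_double(series, j)
        if d is not None and d <= window:
            return True
    return False

for i in range(1, len(gwp)):
    n = years_to_double(gwp, i)
    if n is None:
        continue
    doubling_start = gwp[i][0] - n
    if not had_prior_fast_doubling(gwp, doubling_start, 4 * n):
        print(f"Doubling in ~{n} years ending {gwp[i][0]} had no earlier doubling within {4 * n} years")

With these made-up numbers the check flags two cases analogous to the ones described above.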
So this is not technically an instance, but was a case of briefly accelerating growth. GWP between 1100 and 1300 actually declined though, so this is perhaps a different kind of case to the ones we are interested in.\nCorresponding author: Daniel Kokotajlo\nNotes\n", "url": "https://aiimpacts.org/precedents-for-economic-n-year-doubling-before-4n-year-doubling/", "title": "Precedents for economic n-year doubling before 4n-year doubling", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-04-14T20:42:41+00:00", "paged_url": "https://aiimpacts.org/feed?paged=7", "authors": ["Katja Grace"], "id": "d45a0463e246342b46a04bd3d6e1c2fb", "summary": []} {"text": "Resolutions of mathematical conjectures over time\n\nConditioned on being remembered as a notable conjecture, the time-to-proof for a mathematical problem appears to be exponentially distributed with a half-life of about 100 years. However, these observations are likely to be distorted by various biases.\nSupport\nIn 2014, we found conjectures referenced on Wikipedia, and recorded the dates that they were proposed and resolved, if they were resolved. We updated this list of conjectures in 2020, marking any whose status had changed. We then used a Kaplan-Meier estimator1 to approximate the survivorship function.2\nThe results of this exercise are recorded here.3 Figure 1 below shows the survivorship function for the mathematical conjectures we found. The data is fit closely by an exponential function with a half-life of 117 years.4\nFigure 1: Survivorship function of mathematical conjectures over time, also known as the fraction of mathematical conjectures unresolved at time t after being posed.\nBiases\nWe are using resolution times for remembered conjectures as a proxy for resolution times for all conjectures. Resolution time for remembered conjectures might be biased in several ways: old conjectures are perhaps more likely to be remembered if they are solved than if they are not, very recently solved conjectures are probably more likely to be remembered (though this only matters because the rate of conjecture posing has probably changed over time), and conjectures that were especially hard to solve might also be more notable. The latter hundred years contains few data points, which makes it particularly easy for it to be inaccurate.\nRelevance\nTo the extent that open theoretical problems in AI are similar to math problems, time to solve math problems may be informative for forming a prior on time to solve AI problems.\nCorresponding author: Asya Bergal\nNotes", "url": "https://aiimpacts.org/resolutions-of-mathematical-conjectures-over-time/", "title": "Resolutions of mathematical conjectures over time", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-04-14T20:38:13+00:00", "paged_url": "https://aiimpacts.org/feed?paged=7", "authors": ["Asya Bergal"], "id": "6152aad3ced1a7d1e86373929ae2bea0", "summary": []} {"text": "Trends in DRAM price per gigabyte\n\nThe price of a gigabyte of DRAM has fallen by about a factor of ten every 5 years from 1957 to 2020. Since 2010, the price has fallen much more slowly, at a rate that would yield an order of magnitude over roughly 14 years.\nDetails\nBackground\nDRAM, “dynamic random-access memory”, is a type of semiconductor memory. It is used as the main memory in modern computers and graphic cards.1\nData\nWe found two sources for historic pricing of DRAM. 
One was a dataset of DRAM prices and sizes from 1957 to 2018 collected by technologist and retired Computer Science professor2 John C. McCallum.3 The other dataset was extracted from a graph generated by Objective Analysis,4 a group that sells “third-party independent market research and data” to investors in the semiconductor industry.5 We have not checked where their data comes from and don’t have evidence about whether they are a trustworthy source.\nFigure 1 shows McCallum’s data.6\nFigure 1: Price per gigabyte of DRAM from 1957 to 2018 from John McCallum’s dataset, which we converted to 2020 dollars using the Consumer Price Index.7\nFigure 2 shows the average price per gigabyte of DRAM from 1991 to 2019, according to the Objective Analysis graph.8\nFigure 2: Average $ / GB of DRAM from 1991 to 2019 according to Objective Analysis. Dollars are 2020 dollars.\nThe two datasets appear to line up (see Figure 3 below),9 though we don’t know where the data in the Objective Analysis report came from– it could itself be referencing the McCallum dataset, or both could share data sources.\nFigure 3: $ / GB of DRAM from 1957 to 2018, with McCallum’s dataset in blue and the Objective Analysis dataset in red. Dollars are 2020 dollars.\nAnalysis\nFor both sources, the data appears to follow an exponential trendline. In the McCallum dataset, we calculate that the price / GB of DRAM has fallen at around 36% per year, for a factor of ten every 5.1 years and a doubling time of 1.5 years on average. The Objective Analysis data is similar, with the price / GB of DRAM falling around 33% per year, for a factor of ten every 5.8 years and a doubling time of 1.7 years.\nThe 1.5 and 1.7 year doubling times are close to the rate at which Moore’s law observed that transistors in an integrated circuit double.10 It seems possible to us that cheaper and denser transistors following this law are what enabled the cheaper prices of DRAM, though we haven’t investigated this theory.11\nBoth datasets show slower progress in recent years. From 2010 onwards, the McCallum dataset falls in price by only 15% a year, for a rate that would yield a factor of ten every 14 years, and the Objective Analysis dataset falls by 12% a year, for a rate that would yield a factor of ten every 18.5 years.\nPrimary author: Asya Bergal\nNotes", "url": "https://aiimpacts.org/trends-in-dram-price-per-gigabyte/", "title": "Trends in DRAM price per gigabyte", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-04-14T20:03:46+00:00", "paged_url": "https://aiimpacts.org/feed?paged=7", "authors": ["Asya Bergal"], "id": "136c58e3e6f39c41c7fabc55339de234", "summary": []} {"text": "Discontinuous progress in history: an update\n\nBy Katja Grace, 13 April 2020\nI. The search for discontinuities\nWe’ve been looking for historic cases of discontinuously fast technological progress, to help with reasoning about the likelihood and consequences of abrupt progress in AI capabilities. We recently finished expanding this investigation to 37 technological trends.1 This blog post is a quick update on our findings. 
See the main page on the research and its outgoing links for more details.\nWe found ten events in history that abruptly and clearly contributed more to progress on some technological metric than another century would have seen on the previous trend.2 Or as we say, we found ten events that produced ‘large’, ‘robust’ ‘discontinuities’.\nHow we measure the size of a discontinuity (by Rick Korzekwa)\nAnother five events caused robust discontinuities of between ten and a hundred years (‘moderate robust discontinuities’). And 48 more events caused some trend to depart from our best guess linear or exponential extrapolation of its past progress by at least ten years (and often a hundred), but did so in the context of such unclear past trends that this did not seem clearly remarkable.3 I call all of these departures ‘discontinuities’, and distinguish those that are clearly outside plausible extrapolations of the past trend, according to my judgment, as ‘robust discontinuities’.4\nMuch of the data involved in this project seems at least somewhat unreliable, and the methods involve many judgments, and much ignoring of minor issues. So I would not be surprised if more effort could produce numerous small changes. However I expect the broad outlines to be correct.5\nII. The discontinuities\nLarge robust discontinuities\nHere is a quick list of the robust 100-year discontinuous events, which I’ll describe in more detail beneath:\n\nThe Pyramid of Djoser, 2650BC (discontinuity in structure height trends)\nThe SS Great Eastern, 1858 (discontinuity in ship size trends)\nThe first telegraph, 1858 (discontinuity in speed of sending a 140 character message across the Atlantic Ocean)\nThe second telegraph, 1866 (discontinuity in speed of sending a 140 character message across the Atlantic Ocean)\nThe Paris Gun, 1918 (discontinuity in altitude reached by man-made means)\nThe first non-stop transatlantic flight, in a modified WWI bomber, 1919 (discontinuity in both speed of passenger travel across the Atlantic Ocean and speed of military payload travel across the Atlantic Ocean)\nThe George Washington Bridge, 1931 (discontinuity in longest bridge span)\nThe first nuclear weapons, 1945 (discontinuity in relative effectiveness of explosives)\nThe first ICBM, 1958 (discontinuity in average speed of military payload crossing the Atlantic Ocean)\nYBa2Cu3O7 as a superconductor, 1987 (discontinuity in warmest temperature of superconduction)\n\nThe Pyramid of Djoser, 2650BC\nDiscontinuity in structure height trends6\nThe Pyramid of Djoser is considered to be ‘the earliest colossal stone structure’ in Egypt. According to Wikipedia’s data, it took seven thousand years for the tallest structures to go from five to thirteen meters tall7 and then suddenly the Egyptian pyramids shot up to a height of 146.5m over about a hundred years and five successively tallest pyramids.\nThe Pyramid of Djoser, By Charles J Sharp – Own work, from Sharp Photography, sharpphotography, CC BY-SA 3.0, Link\nThe first of these five is the Pyramid of Djoser, standing 62.5m tall. The second one—Meidum Pyramid—is also a large discontinuity in structure height trends by our calculation, but I judge it not robust, since it is fairly unclear what the continuation of the trend should be after the first discontinuity. 
As is common, the more basic thing going on seems to be a change in the growth rate, and the discontinuity of the Pyramid of Djoser is just the start of it.\nThe Djoser discontinuity: close up on the preceding trend, cut off at the Pyramid of Djoser\nA longer history of record structure heights, showing the isolated slew of pyramids\nStrangely, after this spurt of progress, humanity built nothing taller than the tallest pyramid for nearly four thousand years—until Lincoln Cathedral in 1311—and nothing more than twenty percent taller than it until the Eiffel Tower in 1889.\nThe SS Great Eastern\nDiscontinuity in ship size, measured in ‘builder’s old measurement’8 or in displacement.\nThe SS Great Eastern was a freakishly large ship. For instance, it seems to have weighed about five times as much as any previous ship. As far as I can tell, the reason it existed is that Isambard Kingdom Brunell thought it would be good. Brunell was a 19th Century engineering hero, rated #2 greatest Briton of all time in a 2002 BBC poll, who according to Wikipedia, ‘revolutionised public transport and modern engineering’ and built ‘dockyards, the Great Western Railway (GWR), a series of steamships including the first propeller-driven transatlantic steamship, and numerous important bridges and tunnels’.\nThe SS Great Eastern compared to the UK Royal Navy’s ships of the line, which were probably not much smaller than the largest ships overall immediately prior to the Great Eastern\nThe experimental giant sailing steamship idea doesn’t seem to have gone well. The Great Eastern apparently never had its cargo holds filled, and ran at a deficit for years before being sold and used for laying the second telegraph cable (another source of large discontinuity—see below).9 It was designed for transporting passengers to the Far East, but there was never the demand.10 It was purportedly rumored to be ‘cursed’, and suffered various ill fortune. On its maiden voyage a boiler exploded, throwing one of the funnels into the air and killing six people.11 Later it hit a rock and got a 9-foot gash, which seems to have been hard to fix because the ship was too big for standard repair methods.12\nWe don’t have a whole trend for largest ships, so are using British Royal Navy ship of the line size trends as a proxy against which to compare the Great Eastern.13 This gives us discontinuities of around 400 years in both displacement and tonnage (BOM). [Added May 10: Nuño Sempere also investigated the Great Eastern as a discontinuity, and has some nice figures comparing it to passenger and sailing vessel trends.]\nThe SS Great Eastern\nHowever that is assuming we expect ship size to increase either linearly or exponentially (our usual expectation). But looking at the ship of the line trends, both displacement and cargo capacity (measured in tonnage, BOM) seemed to grow at something closer to a hyperbolic curve for some reason—apparently accelerating toward an asymptote in the late 1860s. If we had expected progress to continue this way throughout, then neither trend had any discontinuities, instead of eight or eleven of them. And supposing that overall ship size follows the same hyperbola as the military ship trends, then the Great Eastern’s discontinuities go from around 400 years to roughly 11 or 13 years. 
Which doesn’t sound big, but since this was about that many years before the asymptote of the hyperbola, at which point arbitrarily large ships were theoretically expected, the discontinuities couldn’t have been much bigger.\nOur data ended for some reason just around the apparently impending ship size singularity of the late 1860s. But my impression is that not much happened for a while—it apparently took forty years for a ship larger than the Great Eastern to be built, on many measures.\nI am unsure what to make of the apparently erroneous and unforced investment in the most absurdly enormous ship happening within a decade or two of the point at which trend extrapolation appears to have suggested arbitrarily large ships. Was Brunell aware of the trend? Did the forces that produced the rest of the trend likewise try to send all the players in the ship-construction economy up the asymptote, where they crashed into some yet unmet constraint? It is at least nice to have more examples of what happens when singularities are reached in the human world.\nThe first transatlantic telegraph\nDiscontinuity in speed of sending a 140 character message across the Atlantic Ocean\nUntil 1858, the fastest way to get a message from New York to London was by ship, and the fastest ships took over a week14. Telegraph was used earlier on land, but running it between continents was quite an undertaking. The effort to lay a transatlantic cable failed numerous times before it became ongoingly functional.15 One of those times though, it worked for about a month, and messages were sent.16 There were celebrations in the streets.\nH.M.S. “Agamemnon” laying the Atlantic Telegraph cable in 1858. A whale crosses the line, R. M. Bryson, lith from a drawing by R. Dudley, 1865\nA celebration parade for the first transatlantic telegraph cable, Broadway, New York City\nThe telegraph could send a 98 word message in a mere 16 hours. For a message of more than about 1400 words, it would actually have been faster to send it by ship (supposing you already had it written down). So this was a big discontinuity for short messages, but not necessarily any progress at all for longer ones.\nThe first transatlantic telegraph cable revolutionized 140 character message speed across the Atlantic Ocean\nThe second transatlantic telegraph\nDiscontinuity in speed of sending a 140 character message across the Atlantic Ocean\nAfter the first working transatlantic telegraph cable (see above) failed in 1858, it was another eight years before the second working cable was finished. Most of that delay was apparently for lack of support,17 and the final year seems to have been because the cable broke and the end was lost at sea after over a thousand miles had been laid, leaving the ship to return home and a new company to be established before the next try.18 Whereas it sounds like it took less than a day to go from the ship carrying the cable arriving in port to the sending of telegraphs.\nThe second telegraph discontinuity: close up on the preceding trend, cut off at the second telegraph. Note that the big discontinuity of the first telegraph cable is now almost invisible.\nAt a glance, on Wikipedia’s telling, it sounds as though the perseverance of one person—Cyrus West Field—might have affected when fast transatlantic communication appeared by years. He seems to have led all five efforts, supplied substantial money himself, and ongoingly fundraised and formed new companies, even amidst a broader lack of enthusiasm after initial failures. 
(He was also given a congressional gold medal for establishing the transatlantic telegraph cable, suggesting the US Congress also has this impression.) His actions wouldn’t have much affected how large a discontinuity either telegraph was, but it is interesting if such a large development in a seemingly important area might have been accelerated much by a single person.\nThe second telegraph cable was laid by the Great Eastern, the discontinuously large ship of two sections ago. Is there some reason for these two big discontinuities to be connected? For instance, did one somehow cause the other? That doesn’t seem plausible. The main way I can think of that the transatlantic telegraph could have caused the Great Eastern’s size would be if the economic benefits of being able to lay cable were anticipated and effectively subsidized the ship. I haven’t heard of this being an intended use for the Great Eastern. And given that the first transatlantic telegraph was not laid by the Great Eastern, it seems unlikely that such a massive ship was strictly needed for the success of a second one at around that time, though the second cable used was apparently around twice as heavy as the first. Another possibility is that some other common factor made large discontinuities more possible. For instance, perhaps it was an unusually feasible time and place for solitary technological dreamers to carry out ambitious and economically adventurous projects.\nGreat Eastern again, this time at Heart’s Content, Newfoundland, where it carried the end of the second transatlantic telegraph cable in 1866\nThe first non-stop transatlantic flight\nDiscontinuity in both speed of passenger travel across the Atlantic Ocean and speed of military payload travel across the Atlantic Ocean\nShips were the fastest way to cross the Atlantic Ocean until the end of World War I. Passenger liners had been getting incrementally faster for about eighty years, and the fastest regular passenger liner was given a special title, ‘Blue Riband’. Powered heavier-than-air flight got started in 1903, but at first planes only traveled hundreds of feet, and it took time to expand that to the 1600 or so miles needed to cross the Atlantic in one hop.19\nThe first non-stop transatlantic flight was made shortly after the end of WWI, in 1919. The Daily Mail had offered a large cash prize, on hold during the war, and with the resumption of peace, a slew of competitors prepared to fly. Alcock and Brown were the first to do it successfully, in a modified bomber plane, taking around 16 hours, for an average speed around four times faster than the Blue Riband.\nAlcock and Brown landed in Ireland, 1919\nOne might expect discontinuities to be especially likely in a metric like ‘speed to cross the Atlantic’, which involves a sharp threshold on a non-speed axis for inclusion in the speed contest. For instance, if planes incrementally improved on speed and range (and cost and comfort) every year, but couldn’t usefully cross the ocean at all until their range reached 1600 miles, then decades of incremental speed improvements could all hit the transatlantic speed record at once, when the range reached that number.\nIs this what happened? It looks like it. The Wright Flyer apparently had a maximum speed of 30mph. That’s about the record average ocean liner speed in 1909. 
So if the Wright Flyer had had the range to cross the Atlantic in 1903 at that speed, it would have been about six years ahead of the ship speed trend and wouldn’t have registered as a substantial discontinuity. 20 But because it didn’t have the range, and because the speed of planes was growing faster than that of ships, in 1919 when planes could at last fly thousands of miles, they were way ahead of ships.\nThe transatlantic flight discontinuity: close up on the preceding trend, cut off at the first non-stop transatlantic flight.\nThe George Washington Bridge\nDiscontinuity in longest bridge span\nA bridge ‘span‘ is the distance between two intermediate supports in a bridge. The history of bridge span length is not very smooth, and so arguably full of discontinuities, but the only bridge span that seems clearly way out of distribution to me is the main span of the George Washington Bridge. (See below.)\nThe George Washington Bridge discontinuity: close up on the preceding trend, cut off at the George Washington Bridge\nI’m not sure what made it so discontinuously long, but it is notably also the world’s busiest motor vehicle bridge (as of 2016), connecting New York City with New Jersey, so one can imagine that it was a very unusually worthwhile expanse of water to cross. Another notable feature of it was that it was much thinner relative to its length than long suspension bridges normally were, and lacked the usual ‘trusses’, based on a new theory of bridge design.21\nGeorge Washington Bridge, via Wikimedia Commons, Photographer: Bob Jagendorf\nNuclear weapons\nDiscontinuity in relative effectiveness of explosives\nThe ‘relative effectiveness factor‘ of an explosive is how much TNT you would need to do the same job.22 Pre-nuclear explosives had traversed the range of relative effectiveness factors from around 0.5 to 2 over about a thousand years, when in 1945 the first nuclear weapons came in at a relative effectiveness of around 450023.\nThe nuclear weapons discontinuity: close up on the preceding trend, cut off at the first nuclear weapons\nA few characteristics of nuclear weapons that could relate to their discontinuousness:\n\nNew physical phenomenon: nuclear weapons are based on nuclear fission, which was recently discovered, and allowed human use of nuclear energy (which exploits the strong fundamental force) whereas past explosives were based on chemical energy (which exploits the electromagnetic force). New forms of energy are rare in human history, and nuclear energy stored in a mass is characteristically much higher than chemical energy stored in it.\nMassive investment: the Manhattan Project, which developed the first nuclear weapons, cost around $23 billion in 2018 dollars. 
This was presumably a sharp increase over previous explosives research spending.\nLate understanding: it looks like nuclear weapons were only understood as a possibility after it was well worth trying to develop them at a huge scale.\nMechanism involves a threshold: nuclear weapons are based on nuclear chain reactions, which require a critical mass of material (how much varies by circumstance).\n\nI discussed whether and how these things might be related to the discontinuity in 2015 here (see Gwern’s comment) and here.\nPreparation for the Trinity Test, the first detonation of a nuclear weapon\nThe trinity test explosion after 15 seconds\nThe Paris Gun\nDiscontinuity in altitude reached by man-made means\nThe Paris Gun was the largest artillery gun in WWI, used by the Germans to bomb Paris from 75 miles away. It could shoot 25 miles into the air, whereas the previous record we know of was around 1 mile into the air (also shot by a German gun).24\nThe Paris Gun, able to shell Paris from 75 miles away\nThe Paris Gun discontinuity: close up on the preceding trend of highest altitudes reached by man-made means, cut off at the Paris Gun\nI don’t have much idea why the Paris Gun traveled so much higher than previous weapons. Wikipedia suggests that its goals were psychological rather than physically effective warfare:\n\nAs military weapons, the Paris Guns were not a great success: the payload was small, the barrel required frequent replacement, and the guns’ accuracy was good enough for only city-sized targets. The German objective was to build a psychological weapon to attack the morale of the Parisians, not to destroy the city itself.\n\nThis might explain an unusual trade-off of distance (and therefore altitude) against features like accuracy and destructive ability. On this story, building a weapon to shoot a projectile 25 miles into the air had been feasible for some time, but wasn’t worth it. This highlights the more general possibility that the altitude trend was perhaps more driven by the vagaries of demand for different tangentially-altitude-related ends than by technological progress.\nThe German military apparently dismantled the Paris Guns before departing, and did not comply with a Treaty of Versailles requirement to turn over a complete gun to the Allies, so the guns’ capabilities are not known with certainty. However it sounds like the shells were clearly observed in Paris, and the relevant gun was clearly observed around 70 miles away, so the range is probably not ambiguous, and the altitude reached by a projectile is closely related to the range. So uncertainty around the gun probably doesn’t affect our conclusions.\nThe first intercontinental ballistic missiles (ICBMs)\nDiscontinuity in average speed of military payload crossing the Atlantic Ocean\nFor most of history, the fastest way to send a military payload across the Atlantic Ocean was to put it on a boat or plane, much like a human passenger. So the maximum speed of sending a military payload across the Atlantic Ocean followed the analogous passenger travel trend. However in August 1957, the two abruptly diverged with the first successful test of an intercontinental ballistic missile (ICBM)—the Russian R-7 Semyorka. 
Early ICBMs traveled at around 11 thousand miles per hour, cutting the minimum time to send a military payload between, for instance, Moscow and New York from around 14 hours to around 24 minutes.25
The ICBM discontinuity: close up on the preceding trend, cut off at the first ICBM
A ‘ballistic’ missile is unpowered during most of its flight, and so follows a ballistic trajectory—the path of anything thrown into the air. Interestingly, this means that in order to go far enough to traverse the Atlantic, it has to be going at least a certain speed. Ignoring the curvature of the Earth and friction, this would be about 7000 knots for the shortest transatlantic distance—70% of its actual speed, and enough to be hundreds of years of discontinuity in the late 50s.26 So assuming ballistic missiles crossed the ocean when they did, they had to produce a large discontinuity in the speed trend.
Does this mean the ICBM was required to be a large discontinuity? No—there would be no discontinuity if rockets had been improving in line with planes, so that transatlantic rockets were developed later, or ICBM-speed planes earlier. But it means that even if the trends for rocket distance and speed are incremental and start from irrelevantly low numbers, if they have a faster rate of growth than planes, and the threshold in distance required implies a speed way above the current record, then a large discontinuity must happen.
This situation also means that you could plausibly have predicted the discontinuity ahead of time, if you were watching the trends. Seeing the rocket speed trend traveling upward faster than the plane speed trend, you could forecast that when it hit a speed that implied an intercontinental range, intercontinental weapons delivery speed would jump upward.
An SM-65 Atlas, the first US ICBM, first launched in 1957 (1958 image)
YBa2Cu3O7 as a superconductor
Discontinuity in warmest temperature of superconduction
When an ordinary material conducts electricity, it has some resistance (or opposition to the flow of electrons), which takes energy to overcome. The resistance can be gradually lowered by cooling the material down. For some materials though, there is a temperature threshold below which their resistance abruptly drops to zero, meaning for instance that electricity can flow through them indefinitely with no input of energy. These are ‘superconductors’.
Superconductors were discovered in 1911. The first one observed, mercury, could superconduct below 4.2 Kelvin. From then on, more superconductors were discovered, and the warmest observed temperatures of superconduction gradually grew. In 1957, BCS theory was developed to explain the phenomenon (winning its authors a Nobel Prize), and was understood to rule out superconduction above temperatures of around 30K. 
But in 1986 a new superconductor was found with a threshold temperature around 30K, and composed of a surprising material: a ‘ceramic’ involving oxygen rather than an alloy.27 This also won a Nobel Prize, and instigated a rapid series of discoveries in similar materials—’cuprates‘—which shot the highest threshold temperatures to around 125 K by 1988 (before continued upward).\nThe high temperature superconductor discontinuity: close up on the preceding trend, cut off at YBa2Cu3O7\nThe first of the cuprates, LaBaCuO4, seems mostly surprising for theoretical reasons, rather than being radically above the temperature trend.28 The big jump came the following year, from YBa2Cu3O7, with its threshold at over 90 K.29\nThis seems like a striking instance of the story where the new technology doesn’t necessarily cause a jump so much as a new rate of progress. I wonder if there was a good reason for the least surprising cuprate to be discovered first. My guess is that there were many unsurprising ones, and substances are only famous if they were discovered before more exciting substances.\nMagnet levitating on top of a superconductor of YBa2Cu3O7 cooled to merely -196°C (77.15 Kelvin) Superconductors can allow magnetic levitation, consistently repelling permanent magnets while stably pinned in place. (Picture: Julien Bobroff (user:Jubobroff), Frederic Bouquet (user:Fbouquet), LPS, Orsay, France / CC BY-SA)\nIt is interesting to me that this is associated with a substantial update in very basic science, much like nuclear weapons. I’m not sure if that makes basic science updates ripe for discontinuity, or if there are just enough of them that some would show up in this list. (Though glancing at this list suggests to me that there were about 70 at this level in the 20th Century, and probably many fewer immediately involving a new capability rather than e.g. an increased understanding of pulsars. Penicillin also makes that list though, and we didn’t find any discontinuities it caused.)\nModerate robust discontinuities (10-100 years of extra progress):\nThe 10-100 year discontinuous events were:\n\nHMS Warrior, 1860 (discontinuity in both Royal Navy ship tonnage and Royal Navy ship displacement30)\nEiffel Tower, 1889 (discontinuity in tallest existing freestanding structure height, and in other height trends non-robustly)\nFairey Delta 2, 1956 (discontinuity in airspeed)\nPellets shot into space, 1957, measured after one day of travel (discontinuity in altitude achieved by man-made means)31\nBurj Khalifa, 2009 (discontinuity in height of tallest building ever)\n\nOther places we looked\nHere are places we didn’t find robust discontinuities32) – follow the links to read about any in detail:\n\nAlexnet: This convolutional neural network made important progress on labeling images correctly, but was only a few years ahead of the previous trend of success in the ImageNet contest (which was also a very short trend).\nLight intensity: We measured argon flashes in 1943 as a large discontinuity, but I judge it non-robust. The rate of progress shot up at around that time though, from around half a percent per year to an average of 90% per year over the next 65 years, the rest of it involving increasingly intense lasers.\nReal price of books: After the invention of the printing press, the real price of books seems to have dropped sharply, relative to a recent upward trajectory. However this was not long after a similarly large drop purportedly from paper replacing parchment. 
So in the brief history we have data for, the second drop is not unusual. We are also too uncertain about this data to confidently conclude much.\nManuscripts and books produced over the last hundred years: This was another attempt to find a discontinuity from the printing press. We measured several discontinuities, including one after the printing press. However, it is not very surprising for a somewhat noisy trend with data points every hundred years to be a hundred years ahead of the best-guess curve sometimes. The discontinuity at the time of the printing press was not much larger than others in nearby centuries. The clearer effect of the printing press at this scale appears to be a new faster growth trajectory.\nBandwidth distance product: This measures how much can be sent how far by communication media. It was just pretty smooth.\nTotal transatlantic bandwidth: This is how much cable goes under the Atlantic Ocean. It was also pretty smooth.\nWhitney’s cotton gin: Cotton gins remove seeds from cotton. Whitney’s gin is often considered to have revolutionized the cotton industry and maybe contributed to the American Civil War. We looked at its effects on pounds of cotton ginned per person per day, and our best guess is that it was a moderate discontinuity, but the trend is pretty noisy and the available data is pretty dubious. Interestingly, progress on gins was speeding up a lot prior to Whitney (the two previous data points look like much bigger discontinuities, but we are less sure that we aren’t just missing data that would make them part of fast incremental progress). We also looked at evidence on whether Whitney’s gin might have been a discontinuity in the more inclusive metric of cost per value of cotton ginned, but this was unclear. As evidence about the impact of Whitney’s gin, US cotton production appears to us to have been on the same radically fast trajectory before it as after it, and it seems people continued to use various other ginning methods for at least sixty years.\nGroup index of light or pulse delay of light: These are two different measures of how slowly light can be made to move through a medium. It can now be ‘stopped’ in some sense, though not the strict normal one. We measured two discontinuities in group index, but both were relative to a fairly unclear trend, so don’t seem robust. \nParticle accelerator performance: natural measures include center-of-mass energy, particle energy, and lorentz factor achieved. All of these progressed fairly smoothly.\nUS syphilis cases, US syphilis deaths, effectiveness of syphilis treatment, or inclusive costs of syphilis treatment: We looked at syphilis trends because we thought penicillin might have caused a discontinuity in something, and syphilis was apparently a key use case. But we didn’t find any discontinuities there. US syphilis deaths became much rarer over a period around its introduction, but the fastest drop slightly predates plausible broad use of penicillin, and there are no discontinuities of more than ten years in either US deaths or cases. Penicillin doesn’t even appear to be much more effective than its predecessor, conditional on being used.33 Rather, it seems to have been much less terrible to use (which in practice makes treatment more likely). That suggested to us that progress might have been especially visible in ‘inclusive costs of syphilis treatment’. There isn’t ready quantitative data for that, but we tried to get a rough qualitative picture of the landscape. 
It doesn’t look clearly discontinuous, because the trend was already radically improving. The preceding medicine sounds terrible to take, yet was nicknamed ‘magic bullet’ and is considered ‘the first effective treatment for syphilis‘. Shortly beforehand, mercury was still a usual treatment and deliberately contracting malaria had recently been added to the toolbox.\nNuclear weapons on cost-effectiveness of explosives: Using nuclear weapons as explosives was not clearly cheaper than using traditional explosives, let alone discontinuously cheaper. However these are very uncertain estimates.\nMaximum landspeed: Landspeed saw vast and sudden changes in the rate of progress, but the developments were so close together that none was very far from average progress between the first point and the most recent one. If we more readily expect short term trends to continue (which arguably makes sense when they are as well-defined as these), then we find several moderate discontinuities. Either way, the more basic thing going on appears to be very distinct changes in the rate of progress.\nAI chess performance: This was so smooth that a point four years ahead of the trend in 2008 is eye-catching.\nBreech-loading rifles on the firing rate of guns: Breech-loading rifles were suggested to us as a potential discontinuity, and firing rate seemed like a metric on which they plausibly excelled. However there seem to have been other guns with similarly fast fire rates at the time breech-loading rifles were introduced. We haven’t checked whether they produced a discontinuity in some other metric (e.g. one that combines several features), or if anything else caused discontinuities in firing rate.\n\nIII. Some observations\nPrevalence of discontinuities\nSome observations on the overall prevalence of discontinuities:\n\n32% of trends we investigated saw at least one large, robust discontinuity (though note that trends were selected for being discontinuous, and were a very non-uniform collection of topics, so this could at best inform an upper bound on how likely an arbitrary trend is to have a large, robust discontinuity somewhere in a chunk of its history)\n53% of trends saw any discontinuity (including smaller and non-robust ones), and in expectation a trend saw more than two of these discontinuities.\nOn average, each trend had 0.001 large robust discontinuities per year, or 0.002 for those trends with at least one at some point34\nOn average 1.4% of new data points in a trend make for large robust discontinuities, or 4.9% for trends which have one.\nOn average 14% of total progress in a trend came from large robust discontinuities (or 16% of logarithmic progress), or 38% among trends which have at least one \n\nThis all suggests that discontinuities, and large discontinuities in particular, are more common than I thought previously (though still not that common). One reason for this change is that I was treating difficulty of finding good cases of discontinuous progress as more informative than I do now. I initially thought there weren’t many around because suggested discontinuities often turned out not to be discontinuous, and there weren’t a huge number of promising suggestions. However we later got more good suggestions, and found many discontinuities where we weren’t necessarily looking for them. So I’m inclined to think there are a few around, but our efforts at seeking them out specifically just weren’t very effective. 
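To make the summary statistics above concrete, here is a minimal sketch of the basic calculation behind a ‘years of discontinuity’ figure: extrapolate the previous trend to the new data point, and express the jump as the number of years of progress at the previous rate that it represents. This is an illustration only, under simplifying assumptions (a single least-squares fit to a cleanly exponential or linear prior trend, and made-up example numbers); it is not the project’s actual code, which chooses and checks trends more carefully.

```python
import numpy as np

def years_of_discontinuity(dates, values, new_date, new_value, exponential=True):
    """Rough illustration: fit the prior trend, ask in what year that trend would
    have reached new_value, and report how far ahead of schedule the new data
    point is. (Simplified; the real methodology handles trend choice and
    robustness checks more carefully.)"""
    y = np.log(values) if exponential else np.asarray(values, dtype=float)
    slope, intercept = np.polyfit(dates, y, 1)      # prior rate of progress
    target = np.log(new_value) if exponential else new_value
    predicted_date = (target - intercept) / slope   # when the old trend reaches new_value
    return predicted_date - new_date                # years the new point is 'ahead'

# Hypothetical toy trend: ~3% annual growth for 50 years, then a point 10x above trend.
dates = np.arange(1900, 1950)
values = 100 * 1.03 ** (dates - 1900)
print(round(years_of_discontinuity(dates, values, 1950, 10 * values[-1]), 1))  # ~77 years
```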
Another reason for a larger number now is that our more systematic methods turn up many cases that don’t look very remarkable to the naked eye (those I have called non-robust), which we did not necessarily notice earlier. How important these are is less clear.
Discontinuities go with changes in the growth rate
It looks like discontinuities are often associated with changes in the growth rate. At a glance, 15 of the 38 trends had a relatively sharp change in their rate of progress at least once in their history. These changes in the growth rate very often coincided with discontinuities—in fourteen of the fifteen trends, at least one sharp change coincided with one of the discontinuities.35 If this is a real relationship, it means that if you see a discontinuity, there is a much heightened chance of further fast progress coming up. This seems important, but it is a quick observation and should probably be checked and investigated further if we wanted to rely on it.
Where do we see discontinuities?
Among these case studies, when is a development more likely to produce a discontinuity in a trend?36 Some observations so far, based on the broader class including non-robust discontinuities, except where noted:

When the trend is about products, not technical measures: If we loosely divide trends into ‘technical’ (to do with scientific results, e.g. highest temperature of a superconductor), ‘product’ (to do with individual objects meant for use, e.g. cotton ginned by a cotton gin, height of building), ‘industry’ (to do with entire industries, e.g. books produced in the UK) or ‘societal’ (to do with features of non-industry society, e.g. syphilis deaths in the US), then ‘product’ trends saw around four times as many discontinuities as technical trends, and the other two categories are too small to say much. (Product trends are less than twice as likely to have any discontinuities, so the difference was largely in how many discontinuities they have per trend.)
When the trend is about less important ‘features’ rather than overall performance: If we loosely divide trends into ‘features’ (things that are good but not the main point of the activity), ‘performance proxies’ (things that are roughly the point of the activity) and ‘value proxies’ (things that roughly measure the net value of the activity, accounting for its costs as well as performance), then features were more discontinuous than performance proxies.37
When the trend is about ‘product features’: (Unsurprisingly, given the above.) Overall, the 16 ‘product features’ we looked at had 4.6 discontinuities per trend on average, whereas the 22 other metrics had 0.7 discontinuities per trend on average (2 vs. 0.3 for large discontinuities).38 ‘Product features’ include for instance sizes of ships and fire rate of guns, whereas non-product features include total books produced per century, syphilis deaths in the US, and highest temperature of known superconductors.
When the development occurs after 1800: Most of the discontinuities we found happened after 1800. This could be a measurement effect, since much more recent data is available, and if we can’t find enough data to be confident, we are not deeming things discontinuities. An example is the two obscure cotton gins before Whitney’s famous 1793 one that look responsible for huge jumps according to our sparse and untrustworthy 1700s data. The concentration of discontinuities since 1800 might also be related to progress speeding up in the last couple of centuries. 
Interestingly, since 1800 the rate of discontinuities doesn’t seem to be obviously increasing. For instance, seven of nine robust discontinuous events since 1900 happened by 1960.39
When the trend is about travel speed across the Atlantic: Four of our ten robust discontinuous events of over a hundred years came from the three transatlantic travel speed trends we considered. They are also high on non-robust discontinuities.
When the trend doesn’t have a consistent exponential or linear shape: To measure discontinuities, we had to extrapolate past progress. We did this at each point, based on what the curve looked like so far. Some trends we consistently called exponential, some consistently linear, and some sometimes seemed linear and sometimes exponential. The ten in this third lot all had discontinuities, whereas the 20 that consistently looked either exponential or linear were about half as likely to have discontinuities.40
When the trend is in the size of some kind of object: ‘Object size’ trends had over five discontinuities per trend, compared to the average of around 2 across all trends.
When Isambard Kingdom Brunel is somehow involved: I mentioned Brunel above in connection with the Great Eastern. As well as designing that discontinuously large ship, which laid one of the discontinuously fast transatlantic telegraph cables, he designed the non-robustly discontinuous earlier ship Warrior.

I feel like there are other obvious patterns that I’m missing. Some other semi-obvious patterns that I’m noticing, but don’t have time to actually check now, are collected in the next section.
More things to observe
There are lots of other interesting things to ask about this kind of data, in particular regarding what kinds of things tend to see jumps. Here are some questions that we might answer in future, or which we welcome you to try to answer (and hope our data helps with):

Are trends less likely to see discontinuities when more effort is going more directly into maximizing them? (Do discontinuities arise easily in trends people don’t care about?)
How does the chance of discontinuity change with time, or with speed of progress? (Many trends get much faster toward the end, and there are more discontinuities toward the end, but how are they related at a finer scale?)
Do discontinuities come from ‘insights’ more than from turning known cranks of progress?
Are AI-related trends similar to other trends? The two AI-related trends we investigated saw no substantial discontinuities, but two isn’t very many, and there is a persistent idea that once you can do something with AI, you can do it fast.41
Are trends more continuous as they depend on more ‘parts’? (e.g. is maximum fuel energy density more jumpy than maximum engine power, which is more jumpy than maximum car speed?) This would make intuitive sense, but is somewhat at odds with the 8 ‘basic physics related’ trends we looked at not being especially jumpy.
How does the specificity of trends relate to their jumpiness? I’d intuitively expect jumpier narrow trends to average out in aggregate to something smooth (for instance, so that maximum Volkswagen speed is more jumpy than maximum car speed, which is more jumpy than maximum transport speed, which is more jumpy than maximum man-made object speed). But I’m not sure that makes sense, and a contradictory observation is that discontinuities or sudden rate changes happen when a continuous narrow trend shoots up and intersects the broader trend. 
For instance, if record rocket altitude is continuously increasing, and record non-rocket altitude is continuously increasing more slowly but is currently ahead, then overall altitude will have some kind of corner in it where rockets surpass non-rockets. If you drew a line through liquid fuel rockets, pellets would have been less surprising, but they were surprising in terms of the broader measure.
What does a more random sample of trends look like?
What is the distribution of step sizes in a progress trend? (Looking at small ones as well as discontinuities.) If it generally follows a recognizable distribution, that could provide more information about the chance of rare large steps. It might also help recognize trends that are likely to have large discontinuities based on their observed distribution of smaller steps.
Relatively abrupt changes in the growth rate seem common. Are these in fact often abrupt rather than ramping up slowly? (Are discontinuities in the derivative relevantly different from more object-level discontinuities, for our purposes?)
How often is a ‘new kind of thing’ responsible for discontinuities? (e.g. the first direct flight and the first telegraph cable produced big discontinuities in trends that had previously been topped by ships for some time.) How often are they responsible for changes in the growth rate?
How often is a narrow trend only surprising on a broader measure? If you drew a line through liquid fuel rockets alone, it seems like the pellets shot into space may not have been surprising, but they were surprising in terms of the broader altitude measure. A similar thing may have happened with both the altitude and land speed records, in each case with rockets in particular: one could arguably see the narrower rocket trend coming up from below for some time.
Is more fundamental science more likely to be discontinuous?
With planes and ICBMs crossing the ocean, there seemed to be a pattern where incremental progress had to pass a threshold on some dimension before incremental progress on a dimension of interest mattered, which gave rise to discontinuity. Is that a common pattern? (Is that a correct way to think about what was going on?)
If a thing sounds like a big deal, is it likely to be discontinuous? My impression was that these weren’t very closely connected, nor entirely disconnected. Innovations popularly considered a big deal were often not discontinuous, as far as we could tell. For instance, penicillin seemed to help with syphilis a lot, but we didn’t find any actual discontinuity in anything. And we measured Whitney’s cotton gin as producing a moderate discontinuity in cotton ginned per person per day, but it isn’t robust, and there look to have been much larger jumps from earlier, more obscure gins. On the other hand, nuclear weapons are widely considered a huge deal, and were a big discontinuity. It would be nice to check this more systematically.
IV. Summary

Looking at past technological progress can help us tell whether AI trends are likely to be discontinuous or smooth
We looked for discontinuities in 38 technological trends
We found ten events that produced robust discontinuities of over a hundred years in at least one trend. 
(Djoser, Great Eastern, Telegraphs, Bridge, transatlantic flight, Paris Gun, ICBM, nukes, high temperature superconductors.)\nWe found 53 events that produced smaller or less robust discontinuities\nThe average rate of large robust discontinuities per year across trends was about 0.1%, but the chance of a given level of progress arising in a large robust discontinuity was around 14%\nDiscontinuities were not randomly distributed: some classes of metric, some times, and some types of event seem to make them more likely or more numerous. We mostly haven’t investigated these in depth.\nGrowth rates sharply changed in many trends, and this seemed strongly associated with discontinuities. (If you experience a discontinuity, it looks like there’s a good chance you’re hitting a new rate of progress, and should expect more of that.)\n\n\n\n~\nETA: To be more clear, this is a blog post by Katja reporting on research involving many people at AI Impacts over the years, especially Rick Korzekwa, Asya Bergal, and Daniel Kokotajlo. The full page on the research is here.\n\n\nThanks to Stephen Jordan, Jesko Zimmermann, Bren Worth, Finan Adamson, and others for suggesting potential discontinuities for this project in response to our 2015 bounty, and to many others for suggesting potential discontinuities since, especially notably Nuño Sempere, who conducted a detailed independent investigation into discontinuities in ship size and time to circumnavigate the world42. \n\n\nNotes\n", "url": "https://aiimpacts.org/discontinuous-progress-in-history-an-update/", "title": "Discontinuous progress in history: an update", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-04-13T23:55:08+00:00", "paged_url": "https://aiimpacts.org/feed?paged=7", "authors": ["Katja Grace"], "id": "730e186ad5d46ce207461d27b35b410a", "summary": []} {"text": "Preliminary survey of prescient actions\n\nPublished 3 April 2020\n In a 10-20 hour exploration, we did not find clear examples of ‘prescient actions’—specific efforts to address severe and complex problems decades ahead of time and in the absence of broader scientific concern, experience with analogous problems, or feedback on the success of the effort—though we found six cases that may turn out to be examples on further investigation. \nDetails\n We briefly investigated 20 leads on historical cases of actions taken to eliminate or mitigate a problem a decade or more in advance, evaluating them for their ‘prescience’. None were clearly as prescient as the actions of Leó Szilárd, which were previously the best examples of such actions that we found. The primary ways in which these actions failed to exhibit prescience were the amount of feedback that was available while developing a solution and the number of years in advance of the threat that the action was taken. Although we are uncertain about most of the cases, we believe that six of them are promising for future investigation. \nBackground\n Current efforts to prepare for the impacts of artificial intelligence have several features that could make them unlikely to succeed. They typically require us to make complex predictions about novel threats over a timescale of decades, and many of these efforts will receive little feedback on whether they are on the right track, receive little input from the larger scientific community, and produce results that are not useful outside the problem of mitigating AI risk.\nIt may be useful to search for past cases of preparations that have similar features. 
It is important to know if humanity has failed to solve problems in advance because the attempts to do so have failed or because solutions were not attempted. If we find failed attempts, we want to know why they failed. For example, if it turns out that most previous actions were not successful because of failure to accurately predict the future, we may want to focus more of our efforts on forecasting. To this end, we use the following set of criteria for evaluating past efforts for their ‘prescience’, or the extent to which they represent early actions to mitigate a risk in absence of feedback:1

Years in Advance: How many years in advance of the expected emergence of the threat was the action taken?
Novelty: Was the threat novel, or can we re-use (perhaps with modification) the solution to past threats?
Scientific Concern: Was the effort to address the threat endorsed by the larger scientific community?
Complex Prediction: Did the solution require a complex prediction, or is the solution clear and closely related to the problem?
Specificity: Was the solution specific to the threat or is it something that is broadly useful and may be done anyway?
Feedback: Was feedback available while developing a solution, so that we can make mistakes and learn from them, or will we need to get it right on the first try?
Severity: Was it a severe threat of global importance?

In addition to these criteria, we took note of whether the outcome of the efforts is known, as cases with a known outcome may be more informative and more fruitful for further investigation.
Methodology
Potential cases of interest were found by searching the Internet, asking our friends and colleagues, and offering a bounty on promising leads. We compiled a list of topics to research that were sufficiently narrow to allow for evaluation over a short period of time. This list included individual people that took actions (like Clair Patterson), specific actions that were taken (e.g. the installation of the Moscow-Washington Hotline), and the threats themselves (such as the destruction of infrastructure by a geomagnetic storm).
One researcher spent approximately 30 minutes reviewing each case, and rated them on a scale of 0 to 10 on the criteria described in the previous section.2 A score of 1 indicates the criterion described the case very poorly, while a score of 10 indicates the case demonstrated the criterion extremely well. These ratings were highly subjective, though we made efforts to evaluate the cases in a way that is consistent and which would avoid too many false negatives.3 A composite score was calculated from these by taking a weighted average with the following weights:4

Criterion | Weight
Number of years in advance5 | 20
Overall severity of threat | 2
Novelty of threat/solution | 3
Overall level of concern from the scientific community at large | 2
Complexity of prediction required to produce a solution | 5
Specificity of solution | 2
Level of feedback available while developing a solution | 10

In addition to these ratings, we rated each one for how promising it was for further research, and annotated the ratings in the spreadsheet as seemed appropriate. We also assigned ratings to two cases that were previously the subject of in-depth investigations, for comparison. These were the Asilomar Conference and the actions of Leó Szilárd.
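As a concrete illustration of the weighting just described, here is a minimal sketch of the composite score calculation. Only the weights and the weighted-average idea come from the methodology above; the variable names and example ratings are hypothetical, and the actual spreadsheet may handle details (such as uncertainty ranges) differently.

```python
# Composite 'prescience' score: a weighted average of the 0-10 criterion ratings,
# using the weights listed above. The example ratings below are made up for illustration.

WEIGHTS = {
    "years_in_advance": 20,
    "severity": 2,
    "novelty": 3,
    "scientific_concern": 2,
    "complex_prediction": 5,
    "specificity": 2,
    "feedback": 10,
}

def composite_score(ratings):
    """Weighted average of the seven criterion ratings."""
    total_weight = sum(WEIGHTS.values())  # 44
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS) / total_weight

example_ratings = {
    "years_in_advance": 8, "severity": 9, "novelty": 7, "scientific_concern": 4,
    "complex_prediction": 6, "specificity": 5, "feedback": 3,
}
print(round(composite_score(example_ratings), 2))  # 6.3
```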
Results
The following table shows our ratings. The two reference cases are marked with an asterisk.
Our full spreadsheet of ratings and notes can be found here.

Case | Score | Suitability for Further Research
Leó Szilárd* | 7.24 | -
Antibiotic resistance | 7.11 | 7
Open Quantum Safe | 6.80 | 5
Nordic Gene Bank | 6.74 | 4
Geomagnetic Storm Prep | 6.74 | 5
Fukushima Daichii | 6.74 | 5
Swiss Redoubt | 6.60 | 2
Nonproliferation Treaty | 6.14 | 6
Cavendish Banana and TR4 | 6.12 | 5
WIPP | 6.02 | 4
Population Bomb | 5.99 | 3
Y2k | 5.76 | 4
Asilomar Conference* | 5.70 | -
Cold War Civil Defense | 5.29 | 3
Religious Apocalypse | 4.88 | 2
Hurricane Katrina | 4.18 | 4
Iran Nuclear Deal | 4.18 | 4
Moscow-Washington Hotline | 3.90 | 3
England 1800s Policy Reform | 3.89 | 2
Clair Patterson | 3.74 | 2
Missile gap | 3.22 | 2
PQCrypto Conference 2006 | - | 4

For one case, the PQCrypto 2006 conference, we were unable to find sufficient information after 45 minutes of investigation to provide an evaluation.
In general, the cases we investigated did not score highly on these criteria. The average score was 5.6 out of 10, with the US-Russia missile gap receiving the minimum score of 3.0 and antibiotic resistance receiving the maximum score of 7.11. None of the cases received a higher score than our reference case, the actions of Leó Szilárd (score = 7.24), which we consider to be sufficiently ‘prescient’ to be worth examining. Just over half (11) of our cases received higher ratings than the Asilomar Conference (rating = 5.6), which was previously judged to be less prescient.
The ratings are highly uncertain, as is natural for thirty-minute reviews of complex topics. On average, our 90th percentile estimates were 80% larger than their corresponding 10th percentile estimates. All but four cases had minimum ratings lower than the best guess for Asilomar, and more than half had maximum ratings higher than the best guess for Leó Szilárd.
The axes on which the cases were least prescient were feedback and years in advance.6 The cases were most analogous on severity, novelty, and specificity of solution, losing on average 0.20, 0.30, and 0.20 points from their composite scores, respectively.
Two cases, antibiotic resistance and the Treaty on the Non-Proliferation of Nuclear Weapons, seemed particularly promising for additional research, and received scores of 7 and 6 respectively. Five other cases received scores of at least five and seemed less promising, but likely worth some additional research.
Discussion
Although the very short research time allotted to each case limits our ability to confidently draw conclusions, we ruled out some cases which were clearly not prescient, identified some promising cases, and roughly characterized some ways in which efforts to reduce AI risk may be different from past efforts to reduce risks.
Irrelevant Cases
There were four cases that we found to be poor examples of prescient actions: the US-Russia Missile Gap of the late 1950s, the actions of Clair Patterson to combat the use of leaded gasoline, 19th century policy reforms in England that were made in response to the industrial revolution, and the Moscow-US Nuclear Hotline. All of these cases involved actions that were taken in response to, rather than in anticipation of, the emergence of a problem (or perceived problem), and for which the solutions were relatively straightforward, with the primary barriers being political.7
Questionable Cases
Two cases involved actions based on highly dubious predictions: Preparations for a religious apocalypse8 and the book The Population Bomb and the accompanying actions of its author, Paul Ehrlich. 
Although the actors in these cases were acting on predictions that have since been shown to be inaccurate, the cases do have some similarity to AI risk. They were addressing predictions of severe consequences from novel threats, they were acting without help from the scientific community, and they did not expect to receive a great deal of feedback along the way. However, the actions were only taken 5-10 years in advance of the threat, and we expect the apparent disconnect between the forecasts and reality to make it more difficult to learn from the actions.\nSome cases involved threats that had already emerged, in the sense that they could happen immediately, but had sufficiently low per-year risk for a reasonable person to expect the outcome to be at least a decade in the future. These include  Hurricane Katrina, US civil defense during the cold war, Fukushima Daichii, the comparison case Asilomar Conference, and the Nordic Gene Bank.9 10\nOther cases involved solutions that were easy or not dependent on complex forecasting. The Swiss National Redoubt relied on long-range forecasting, but was more of a large investment in defense than a complex search for a solution. The year 2000 problem was easy to address, even without taking action until relatively shortly before the event took place. The Iran Nuclear Deal (and perhaps also the Nuclear Non-Proliferation Treaty) required difficult political negotiations, but did not appear to rely on complex predictions.\nPromising Cases\n We identified six cases that seem promising for further investigation:\nAlexander Fleming warned, in his 1945 Nobel Lecture, that widespread access to antibiotics without supervision may lead to antibiotic resistance.11 We are uncertain of the impact of Fleming’s warning, whether he took additional action to mitigate the risk, or how widespread within the scientific community such concerns were, but our impression is that it was not a widely known issue, that his was an early warning, and that his judgement was generally taken seriously by the time of his speech. His warning preceded the first documented cases of penicillin-resistant bacteria by more than 20 years, and the threat of antimicrobial resistance seems to be broadly analogous with AI risk on most of our criteria, though it does seem that feedback was available throughout efforts to reduce the threat. \nThe Treaty on the Non-Proliferation of Nuclear Weapons required many actions from many actors, but it seems to have required a complex prediction about technological development and geopolitics to address a severe threat, was specific to a particular threat, and had limited opportunities for feedback. We are uncertain if any of the specific actions will prove to be prescient on further investigation, but it seems promising.\nOpen Quantum Safe is an open-source project to develop cryptographic techniques that are resistant to the use of quantum computers. The threat of quantum computing to cryptography has several relevant features, including complex forecasting over a decades-time scale of a novel threat. 
We found limited information on the circumstances surrounding the founding of the project or the related case, the 2006 PQCrypto Conference, but the problem generally seems prescient.\nGeomagnetic Storm Preparation addresses the threat caused by severe damage and disruption by solar weather to electronics and power infrastructure, which could be a severe global catastrophe.12 The expected time between such events is decades or centuries, and mitigating the risk involves actions that may be specific to the particular problem and requires complex predictions about the physics involved and how our infrastructure and institutions would be able to respond. However, we are uncertain about which actions were taken and when, and whether there is evidence that they are working. Additionally, there is substantial investment from the scientific community and we are uncertain how much feedback is available while developing solutions.\nPanama Disease is a fungal infection that has been spreading globally for decades and threatens the viability of the cavendish banana as a commercial crop. Cavendish bananas account for the vast majority of banana exports, and are integral to the food security of countries such as Costa Rica and Guatemala.13 Early action included measures to slow the spread of the fungus, a search for cultivars to replace the Cavendish, calls for greater diversity in banana varietals, and searches for fungicides that are able to kill the fungus. Although these actions have many opportunities for feedback, some of them involve complex predictions and searches for specific technical solutions, and, from the perspective of farmers on continents that have not yet encountered the infection, the arrival of the fungus represents a discrete event at some undetermined time in the future. We are uncertain if these are good examples of prescient actions, but they may be worth additional investigation. \nPresence of Feedback\n The axis on which our cases most differed from efforts to reduce AI risk was the level of feedback available while developing a solution. The average score on feedback was 3.8, and none of the cases received a score higher than 7. Even cases that initially seemed that they would have very little feedback proved to have enough to aid those that were making preparations. Examples include Hurricane Katrina, which benefited from lessons learned from preceding hurricanes, and the National Redoubt of Switzerland, which benefited from the observation of conflicts between other actors, providing information about which military equipment and tactics were viable against likely adversaries. Assuming that these results are representative, here are two ways to interpret these results:\nFeedback is abundant: Feedback is abundant in a wide variety of situations, so that we should also expect to have opportunities for feedback while preparing for advanced artificial intelligence. In support of this view are the cases mentioned above that were initially expected to lack feedback, even on the part of those making preparations, but which nonetheless benefited from feedback.\nAI risk is unusual: The common perception that there is very little feedback available to efforts to reduce the risks of advanced AI is correct, and AI risk is unique (or very rare) in this regard. 
Support for this view comes from arguments for the one-shot nature of solving the AI control problem.14\nPrimary author: Rick Korzekwa\nNotes", "url": "https://aiimpacts.org/survey-of-prescient-actions/", "title": "Preliminary survey of prescient actions", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-04-04T00:15:54+00:00", "paged_url": "https://aiimpacts.org/feed?paged=7", "authors": ["richardkorzekwa"], "id": "6130c15dc54f426489c48f9017bc8036", "summary": []} {"text": "Takeaways from safety by default interviews\n\nBy Asya Bergal, 3 April 2020\nLast year, several researchers at AI Impacts (primarily Robert Long and I) interviewed prominent researchers inside and outside of the AI safety field who are relatively optimistic about advanced AI being developed safely. These interviews were originally intended to focus narrowly on reasons for optimism, but we ended up covering a variety of topics, including AGI timelines, the likelihood of current techniques leading to AGI, and what the right things to do in AI safety are right now. \nWe talked to Ernest Davis, Paul Christiano, Rohin Shah, Adam Gleave, and Robin Hanson.\nHere are some more general things I personally found noteworthy while conducting these interviews. For interview-specific summaries, check out our Interviews Page.\nRelative optimism in AI often comes from the belief that AGI will be developed gradually, and problems will be fixed as they are found rather than neglected.\nAll of the researchers we talked to seemed to believe in non-discontinuous takeoff.1 Rohin gave ‘problems will likely be fixed as they come up’ as his primary reason for optimism,2 Adam3 and Paul4 both mentioned it as a reason.\nRelatedly, both Rohin5 and Paul6 said one thing that could update their views was gaining information about how institutions relevant to AI will handle AI safety problems– potentially by seeing them solve relevant problems, or by looking at historical examples.\nI think this is a pretty big crux around the optimism view; my impression is that MIRI researchers generally think that 1) the development of human-level AI will likely be fast and potentially discontinuous and 2) people will be incentivized to hack around and redeploy AI when they encounter problems. See Likelihood of discontinuous progress around the development of AGI for more on 1). I think 2) could be a fruitful avenue for research; in particular, it might be interesting to look at recent examples of people in technology, particularly ML, correcting software issues, perhaps when they’re against their short-term profit incentives. Adam said he thought the AI research community wasn’t paying enough attention to building safe, reliable, systems.7\nMany of the arguments I heard around relative optimism weren’t based on inside-view technical arguments.\nThis isn’t that surprising in hindsight, but it seems interesting to me that though we interviewed largely technical researchers, a lot of their reasoning wasn’t based particularly on inside-view technical knowledge of the safety problems. 
See the interviews for more evidence of this, but here’s a small sample of the not-particularly-technical claims made by interviewees:\n\nAI researchers are likely to stop and correct broken systems rather than hack around and redeploy them.8\nAI has and will progress via a cumulation of lots of small things rather than via a sudden important insight.\n\nMy instinct when thinking about AGI is to defer largely to safety researchers, but these reasons felt noteworthy to me in that they seemed like questions that were perhaps better answered by economists or sociologists (or for the latter case, neuroscientists) than safety researchers. I really appreciated Robin’s efforts to operationalize and analyze the second claim above.(Of course, many of the claims were also more specific to machine learning and AI safety.)\nThere are lots of calls for individuals with views around AI risk to engage with each other and understand the reasoning behind  fundamental disagreements. \nThis is especially true around views that MIRI have, which many optimistic researchers reported not having a good understanding of.\nThis isn’t particularly surprising, but there was a strong universal and unprompted theme that there wasn’t enough engagement around AI safety arguments. Adam and Rohin both said they had a much worse understanding than they would like of others viewpoints.9 Robin10 and Paul11 both pointed to some existing but meaningful unfinished debate in the space.\n3 April 2020\n\nNotes", "url": "https://aiimpacts.org/takeaways-from-safety-by-default-interviews/", "title": "Takeaways from safety by default interviews", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-04-03T17:10:45+00:00", "paged_url": "https://aiimpacts.org/feed?paged=7", "authors": ["Asya Bergal"], "id": "52f7d7af7b0ab78f2645285af781ab5a", "summary": []} {"text": "Interviews on plausibility of AI safety by default\n\nThis is a list of interviews on the plausibility of AI safety by default.\nBackground\nAI Impacts conducted interviews with several thinkers on AI safety in 2019 as part of a project exploring arguments for expecting advanced AI to be safe by default. The interviews also covered other AI safety topics, such as timelines to advanced AI, the likelihood of current techniques leading to AGI, and currently promising AI safety interventions. \nList\nConversation with Ernie DavisConversation with Rohin ShahConversation with Paul ChristianoConversation with Adam GleaveConversation with Robin Hanson", "url": "https://aiimpacts.org/interviews-on-plausibility-of-ai-safety-by-default/", "title": "Interviews on plausibility of AI safety by default", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-04-02T22:40:43+00:00", "paged_url": "https://aiimpacts.org/feed?paged=7", "authors": ["Asya Bergal"], "id": "a2589b4d7503e70633bb154b28560079", "summary": []} {"text": "Atari early\n\nBy Katja Grace, 1 April 2020\nDeepmind announced that their Agent57 beats the ‘human baseline’ at all 57 Atari games usually used as a benchmark. 
I think this is probably enough to resolve one of the predictions we had respondents make in our 2016 survey.\nOur question was when it would be feasible to ‘outperform professional game testers on all Atari games using no game specific knowledge’.1 ‘Feasible’ was defined as meaning that one of the best resourced labs could do it in a year if they wanted to.\nAs I see it, there are four non-obvious things to resolve in determining whether this task has become feasible:\n\nDid or could they outperform ‘professional game testers’? \nDid or could they do it ‘with no game specific knowledge’?\nDid or could they do it for ‘all Atari games’?\nIs anything wrong with the result?\n\nI. Did or could they outperform ‘professional game testers’?\nIt looks like yes, for at least for 49 of the games: the ‘human baseline’ appears to have come from ‘professional human games testers’ described in this paper.2 (What exactly the comparison was for the other games is less clear, but it sounds like what they mean by ‘human baseline’ is ‘professional game tester’, so I guess the other games meet a similar standard.)\nI’m not sure how good professional games testers are. It sounds like they were not top-level players, given that the paper doesn’t say that they were, that they were given two hours to practice the games, and that randomly searching for high scores online for a few of these games (e.g. here) yields higher ones (though this could be complicated by e.g. their only being allowed a short time to play).\nII. Did or could they do it with ‘no game specific knowledge’?\nMy impression is that their system does not involve ‘game specific knowledge’ under likely meanings of this somewhat ambiguous term. However I don’t know a lot about the technical details here or how such things are usually understood, and would be interested to hear what others think.\nIII. Did or could they do it for ‘all Atari games’?\nAgent57 only plays 57 Atari 2600 games, whereas there are hundreds of Atari 2600 games (and other Atari consoles with presumably even more games). \nSupposing that Atari57 is a longstanding benchmark including only these 57 Atari games, it seems likely that the survey participants interpreted the question as about only those games. Or at least about all Atari 2600 games, rather than every game associated with the company Atari.\nInterpreting it as written though, does Agent57’s success suggest that playing all Atari games is now feasible? My guess is yes, at least for Atari 2600 games. \nFifty-five of the fifty-seven games were proposed in this paper3, which describes how they chose fifty of them: \n\nOur testing set was constructed by choosing semi-randomly from the 381 games listed on Wikipedia [http://en.wikipedia.org/wiki/List_of_Atari_2600_games (July 12, 2012)] at the time of writing. Of these games, 123 games have their own Wikipedia page, have a single player mode, are not adult-themed or prototypes, and can be emulated in ALE. From this list, 50 games were chosen at random to form the test set.\n\nThe other five games in that paper were a ‘training set’, and I’m not sure where the other two came from, but as long as fifty of them were chosen fairly randomly, the provenance of the last seven doesn’t seem important.\nMy understanding is that none of the listed constraints should make the subset of games chosen particularly easy rather than random. 
So being able to play these games well suggests being able to play any Atari 2600 game well, without too much additional effort.\nThis might not be true if having chosen those games (about eight years ago), systems developed in the meantime are good for this particular set of games, but a different set of methods would have been needed had a different subset of games been chosen, to the extent that more than an additional year would be needed to close the gap now. My impression is that this isn’t very likely.\nIn sum, my guess is that respondents usually interpreted the ambiguous ‘all Atari games’ at least as narrowly as Atari 2600 games, and that a well resourced lab could now develop AI that played all Atari 2600 games within a year (e.g. plausibly DeepMind could already do that).\nIV. Is there anything else wrong with it?\nNot that I know of, but let’s wait a few weeks and see if anything comes up.\n~\nGiven all this, I think it is more likely than not that this Atari task is feasible now. Which would be interesting, because the median 2016 survey response put a 10% chance on it being feasible in five years, i.e. by 2021.4 They more robustly put a median 50% chance on ten years out (2026).5\nIt’s exciting to resolve expert predictions about early tasks so we know more about how to treat their later predictions about human-level science research and the obsolescence of all human labor for instance. But we should probably wait for a few more before reading much into it. \nAt a glance, some other tasks which we are already learning something about, or might soon:\n\nThe ‘reading Aloud’ task6 seems to be coming along to my very non-expert ear, but I know almost nothing about it.\nIt seems like we are close on Starcraft though as far as I know the prediction hasn’t been exactly resolved as stated.\n\n1 April 2020\nThanks to Rick Korzekwa, Jacob Hilton and Daniel Filan for answering many questions.\nNotes", "url": "https://aiimpacts.org/atari-early/", "title": "Atari early", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-04-02T06:02:18+00:00", "paged_url": "https://aiimpacts.org/feed?paged=8", "authors": ["Katja Grace"], "id": "8edad9be583d84d8c854f5fe5429fb53", "summary": []} {"text": "Three kinds of competitiveness\n\nBy Daniel Kokotajlo, 30 March 2020\nIn this post, I distinguish between three different kinds of competitiveness — Performance, Cost, and Date — and explain why I think these distinctions are worth the brainspace they occupy. For example, they help me introduce and discuss a problem for AI safety proposals having to do with aligned AIs being outcompeted by unaligned AIs. \nDistinguishing three kinds of competitiveness and competition\nA system is performance-competitive insofar as its ability to perform relevant tasks compares with competing systems. If it is better than any competing system at the relevant tasks, it is very performance-competitive. If it is almost as good as the best competing system, it is less performance-competitive. \n(For AI in particular, “speed” “quality” and “collective” intelligence as Bostrom defines them all contribute to performance-competitiveness.)\nA system is cost-competitive to the extent that it costs less to build and/or operate than its competitors. If it is more expensive, it is less cost-competitive, and if it is much more expensive, it is not at all cost-competitive. \nA system is date-competitive to the extent that it can be created sooner (or not much later than) its competitors. 
If it can only be created after a prohibitive delay, it is not at all date-competitive. \nA performance competition is a competition that performance-competitiveness helps you win. The more important performance-competitiveness is to winning, the more intense the performance competition is.\nLikewise for cost and date competitions. Most competitions are all three types, to varying degrees. Some competitions are none of the types; e.g. a “competition” where the winner is chosen randomly. \nI briefly searched the AI alignment forum for uses of the word “competitive.” It seems that when people talk about competitiveness of AI systems, they usually mean performance-competitiveness, but sometimes mean cost-competitiveness, and sometimes both at once. Meanwhile, I suspect that this important post can be summarized as “We should do prosaic AI alignment in case only prosaic AI is date-competitive.”\nPutting these distinctions to work\nFirst, I’ll sketch some different future scenarios. Then I’ll sketch how different AI safety schemes might be more or less viable depending on which scenario occurs. For me at least, having these distinctions handy makes this stuff easier to think and talk about.\nDisclaimer: The three scenarios I sketch aren’t supposed to represent the scenarios I think most likely; similarly, my comments on the three safety proposals are mere hot takes. I’m just trying to illustrate how these distinctions can be used.\nScenario: FOOM: There is a level of performance which leads to a localized FOOM, i.e. very rapid gains in performance combined with very rapid drops in cost, all within a single AI system (or family of systems in a single AI lab). Moreover, these gains & drops are enough to give decisive strategic advantage to the faction that benefits from them. Thus, in this scenario, control over the future is mostly a date competition. If there are two competing AI projects, and one project is building a system which is twice as capable and half the price but takes 100 days longer to build, that project will lose.\nScenario: Gradual Economic Takeover: The world economy gradually accelerates over several decades, and becomes increasingly dominated by billions of AGI agents. However, no one entity (AI or human, individual or group) has most of the power. In this scenario, control over the future is mostly a cost and performance competition. The values which shape the future will be the values of the bulk of the economy, and that in turn will be the values of the most popular and successful AGI designs, which in turn will be the designs that have the best combination of performance- and cost-competitiveness. Date-competitiveness is mostly irrelevant.\nScenario: Final Conflict: It’s just like the Gradual Economic Takeover scenario, except that several powerful factions are maneuvering and scheming against each other, in a Final Conflict to decide the fate of the world. This Final Conflict takes almost a decade, and mostly involves “cold” warfare, propaganda, coalition-building, alliance-breaking, and that sort of thing. Importantly, the victor in this conflict will be determined not so much by economic might as by clever strategy; a less well resourced faction that is nevertheless more far-sighted and strategic will gradually undermine and overtake a larger/richer but more dysfunctional faction. In this context, having the most capable AI advisors is of the utmost importance; having your AIs be cheap is much less important. 
In this scenario, control of the future is mostly a performance competition. (Meanwhile, in this same scenario, popularity in the wider economy is a moderately intense competition of all three kinds.)\nProposal: Value Learning: By this I mean schemes that take state-of-the-art AIs and train them to have human values. I currently think of these schemes as not very date-competitive, but pretty cost-competitive and very performance-competitive. I say value learning isn’t date-competitive because my impression is that it is probably harder to get right, and thus slower to get working, than other alignment proposals. Value learning would be better for the gradual economic takeover scenario because the world will change slowly, so we can afford to spend the time necessary to get it right, and once we do it’ll be a nice add-on to the existing state-of-the-art systems that won’t sacrifice much cost or performance.\nProposal: Iterated Distillation and Amplification: By this I mean… well, it’s hard to summarize. It involves training AIs to imitate humans, and then scaling them up until they are arbitrarily powerful while still human-aligned. I currently think of this scheme as decently date-competitive but not as cost-competitive or performance-competitive. But lack of performance-competitiveness isn’t a problem in the FOOM scenario because IDA is above the threshold needed to go FOOM; similarly, lack of cost-competitiveness is only a minor problem because if they don’t have enough money already, the first project to build FOOM-capable AI will probably be able to attract a ton of investment (e.g. via being nationalized) without even using their AI for anything, and then reinvest that investment into paying the extra cost of aligning it via IDA.\nProposal: Impact regularization: By this I mean attempts to modify state-of-the-art AI designs so that they deliberately avoid having a big impact on the world. I think of this scheme as being cost-competitive and fairly date-competitive. I think of it as being performance-uncompetitive in some competitions, but performance-competitive in others. In particular, I suspect it would be very performance-uncompetitive in the Final Conflict scenario (because AI advisors of world leaders need to be impactful to do anything), yet nevertheless performance-competitive in the Gradual Economic Takeover scenario.\nPutting these distinctions to work again\nI came up with these distinctions because they helped me puzzle through the following problem:\n\nLots of people worry that in a vastly multipolar, hypercompetitive AI economy (such as described in Hanson’s Age of Em or Bostrom’s “Disneyland without children” scenario) eventually pretty much everything of merely intrinsic value will be stripped away from the economy; the world will be dominated by hyper-efficient self-replicators various kinds, performing their roles in the economy very well and seeking out new roles to populate but not spending any time on art, philosophy, leisure, etc. Some value might remain, but the overall situation will be Malthusian. Well, why not apply this reasoning more broadly? Shouldn’t we be pessimistic about any AI alignment proposal that involves using aligned AI to compete with unaligned AIs? After all, at least one of the unaligned AIs will be willing to cut various ethical corners that the aligned AIs won’t, and this will give it an advantage.\n\nThis problem is more serious the more the competition is cost-intensive and performance-intensive. 
Sacrificing things humans value is likely to lead to cost- and performance-competitiveness gains, so the more intense the competition is in those ways, the worse our outlook is.\nHowever, it’s plausible that the gains from such sacrifices are small. If so, we need only worry in scenarios of extremely intense cost and performance competition.\nMoreover, the extent to which the competition is date-intensive seems relevant. Optimizing away things humans value, and gradually outcompeting systems which didn’t do that, takes time. And plausibly, scenarios which are not at all date competitions are also very intense performance and cost competitions. (Given enough time, lots of different designs will appear, and minor differences in performance and cost will have time to overcome differences in luck.) On the other hand, aligning AI systems might take time too, so if the competition is too date-intensive things look grim also. Perhaps we should hope for a scenario in between, where control of the future is a moderate date competition.\nConcluding thoughts\nThese distinctions seem to have been useful for me. However, I could be overestimating their usefulness. Time will tell; we shall see if others make use of them.\nIf you think they would be better if the definitions were rebranded or modified, now would be a good time to say so! I currently expect that a year from now my opinions on which phrasings and definitions are most useful will have evolved. If so, I’ll come back and update this post.\n30 March 2020\nThanks to Katja Grace and Ben Pace for comments on a draft. ", "url": "https://aiimpacts.org/three-kinds-of-competitiveness/", "title": "Three kinds of competitiveness", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-03-31T00:55:04+00:00", "paged_url": "https://aiimpacts.org/feed?paged=8", "authors": ["Daniel Kokotajlo"], "id": "805a1e61076c07e667ba5bd235efeaa3", "summary": []} {"text": "AGI in a vulnerable world\n\nBy Asya Bergal, 25 March 2020\nI’ve been thinking about a class of AI-takeoff scenarios where a very large number of people can build dangerous, unsafe AGI before anyone can build safe AGI. This seems particularly likely if:\n\nIt is considerably more difficult to build safe AGI than it is to build unsafe AGI.\nAI progress is software-constrained rather than compute-constrained.\nCompute available to individuals grows quickly and unsafe AGI turns out to be more of a straightforward extension of existing techniques than safe AGI is.\nOrganizations are bad at keeping software secret for a long time, i.e. it’s hard to get a considerable lead in developing anything.\n\nThis may be because information security is bad, or because actors are willing to go to extreme measures (e.g. extortion) to get information out of researchers.\n\n\n\nAnother related scenario is one where safe AGI is built first, but isn’t defensively advantaged enough to protect against harms by unsafe AGI created soon afterward.\nThe intuition behind this class of scenarios comes from an extrapolation of what machine learning progress looks like now. It seems like large organizations make the majority of progress on the frontier, but smaller teams are close behind and able to reproduce impressive results with dramatically fewer resources. I don’t think the large organizations making AI progress are (currently) well-equipped to keep software secret if motivated and well-resourced actors put effort into acquiring it. 
There are strong openness norms in the ML community as a whole, which means knowledge spreads quickly. I worry that there are strong incentives for progress to continue to be very open, since decreased openness can hamper an organization’s ability to recruit talent. If compute available to individuals increases a lot, and building unsafe AGI is much easier than building safe AGI, we could suddenly find ourselves in a vulnerable world.\nI’m not sure if this is a meaningfully distinct or underemphasized class of scenarios within the AI risk space. My intuition is that there is more attention on incentives failures within a small number of actors, e.g. via arms races. I’m curious for feedback about whether many-people-can-build-AGI is a class of scenarios we should take seriously and if so, what things society could do to make them less likely, e.g. invest in high-effort info-security and secrecy work. AGI development seems much more likely to go existentially badly if more than a small number of well-resourced actors are able to create AGI.\n25 March 2020", "url": "https://aiimpacts.org/agi-in-a-vulnerable-world/", "title": "AGI in a vulnerable world", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-03-26T00:05:46+00:00", "paged_url": "https://aiimpacts.org/feed?paged=8", "authors": ["Asya Bergal"], "id": "284024b49c8510422c5e1653946c3dea", "summary": []} {"text": "2019 recent trends in GPU price per FLOPS\n\nPublished 25 March, 2020\nWe estimate that in recent years, GPU prices have fallen at rates that would yield an order of magnitude over roughly:\n\n17 years for single-precision FLOPS\n10 years for half-precision FLOPS\n5 years for half-precision fused multiply-add FLOPS\n\nDetails\nGPUs (graphics processing units) are specialized electronic circuits originally used for computer graphics.1 In recent years, they have been popularly used for machine learning applications.2 One measure of GPU performance is FLOPS, the number of operations on floating-point numbers a GPU can perform in a second.3 This page looks at the trends in GPU price / FLOPS of theoretical peak performance over the past 13 years. It does not include the cost of operating the GPUs, and it does not consider GPUs rented through cloud computing.\nTheoretical peak performance\n‘Theoretical peak performance’ numbers appear to be determined by adding together the theoretical performances of the processing components of the GPU, which are calculated by multiplying the clock speed of the component by the number of instructions it can perform per cycle.4 These numbers are given by the developer and may not reflect actual performance on a given application.5\nMetrics\nWe collected data on multiple slightly different measures of GPU price and FLOPS performance.\nPrice metrics\nGPU prices are divided into release prices, which reflect the manufacturer suggested retail prices that GPUs are originally sold at, and active prices, which are the prices at which GPUs are actually sold at over time, often by resellers.\nWe expect that active prices better represent prices available to hardware users, but collect release prices also, as supporting evidence.\nFLOPS performance metrics\nSeveral varieties of ‘FLOPS’ can be distinguished based on the specifics of the operations they involve. 
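Whatever the precision, the ‘theoretical peak’ figures above are assembled the same way: a component’s clock speed multiplied by the operations it can perform per cycle, summed over the chip’s components, and a price / FLOPS figure then just divides a card’s price by that peak. Below is a minimal sketch of that arithmetic; the specs and price are invented for illustration and do not describe any real GPU.

```python
# Illustrative only: how a "theoretical peak" FLOPS figure and a price / FLOPS
# figure fit together. The specs and price below are invented, not a real GPU.
shader_cores = 2048            # parallel execution units
clock_hz = 1.5e9               # clock speed, cycles per second
flops_per_core_per_cycle = 2   # one fused multiply-add counted as two FLOPs

peak_flops = shader_cores * clock_hz * flops_per_core_per_cycle
price_2019_usd = 400.0         # hypothetical inflation-adjusted price

print(f"theoretical peak: {peak_flops / 1e12:.1f} TFLOPS")              # ~6.1 TFLOPS
print(f"price / GFLOPS:   ${price_2019_usd / (peak_flops / 1e9):.3f}")  # ~$0.065
```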
Here we are interested in single-precision FLOPS, half-precision FLOPS, and half-precision fused-multiply add FLOPS.\n‘Single-precision’ and ‘half-precision’ refer to the number of bits used to specify a floating point number.6 Using more bits to specify a number achieves greater precision at the cost of more computational steps per calculation. Our data suggests that GPUs have largely been improving in single-precision performance in recent decades,7 and half-precision performance appears to be increasingly popular because it is adequate for deep learning.8\nNvidia, the main provider of chips for machine learning applications,9 recently released a series of GPUs featuring Tensor Cores,10 which claim to deliver “groundbreaking AI performance”. Tensor Core performance is measured in FLOPS, but they perform exclusively certain kinds of floating-point operations known as fused multiply-adds (FMAs).11 Performance on these operations is important for certain kinds of deep learning performance,12 so we track ‘GPU price / FMA FLOPS’ as well as ‘GPU price / FLOPS’.\nIn addition to purely half-precision computations, Tensor Cores are capable of performing mixed-precision computations, where part of the computation is done in half-precision and part in single-precision.13 Since explicitly mixed-precision-optimized hardware is quite recent, we don’t look at the trend in mixed-precision price performance, and only look at the trend in half-precision price performance.\nPrecision tradeoffs\nAny GPU that performs multiple kinds of computations (single-precision, half-precision, half-precision fused multiply add) trades off performance on one for performance on the other, because there is limited space on the chip, and transistors must be allocated to either one type of computation or the other.14 All current GPUs that perform half-precision or TensorCore fused-multiply-add computations also do single-precision computations, so they are splitting their transistor budget. For this reason, our impression is that half-precision FLOPS could be much cheaper now if entire GPUs were allocated to each one alone, rather than split between them.\nRelease date prices\nWe collected data on theoretical peak performance (FLOPS), release date, and price from several sources, including Wikipedia.15 (Data is available in this spreadsheet). We found GPUs by looking at Wikipedia’s existing large lists16 and by Googling “popular GPUs” and “popular deep learning GPUs”. We included any hardware that was labeled as a ‘GPU’. We adjusted prices for inflation based on the consumer price index.17\nWe were unable to find price and performance data for many popular GPUs and suspect that we are missing many from our list. In our search, we did not find any GPUs that beat our 2017 minimum of $0.03 (release price) / single-precision GFLOPS. We put out a $20 bounty on a popular Facebook group to find a cheaper GPU / FLOPS, and the bounty went unclaimed, so we are reasonably confident in this minimum.18\nGPU price / single-precision FLOPS\nFigure 1 shows our collected dataset for GPU price / single-precision FLOPS over time.19\nFigure 1: Real GPU price / single-precision FLOPS over time. The vertical axis is log-scale. Price is measured in 2019 dollars.\nTo find a clear trend for the prices of the cheapest GPUs / FLOPS, we looked at the running minimum prices every 10 days.20\nFigure 2: Ten-day minimums in real GPU price / single-precision FLOPS over time. The vertical axis is log-scale. Price is measured in 2019 dollars. 
The blue line shows the trendline ignoring data before late 2007. (We believe the apparent steep decline prior to late 2007 is an artefact of a lack of data for that time period.)\nThe cheapest GPU price / FLOPS hardware using release date pricing has not decreased since 2017. However there was a similar period of stagnation between early 2009 and 2011, so this may not represent a slowing of the trend in the long run.\nBased on the figures above, the running minimums seem to follow a roughly exponential trend. If we do not include the initial point in 2007, (which we suspect is not in fact the cheapest hardware at the time), we get that the cheapest GPU price / single-precision FLOPS fell by around 17% per year, for a factor of ten in  ~12.5 years.21\nGPU price / half-precision FLOPS\nFigure 3 shows GPU price / half-precision FLOPS for all the GPUs in our search above for which we could find half-precision theoretical performance.22\nFigure 3: Real GPU price / half-precision FLOPS over time. The vertical axis is log-scale. Price is measured in 2019 dollars.\nAgain, we looked at the running minimums of this graph every 10 days, shown in Figure 4 below.23\nFigure 4: Minimums in real GPU price / half-precision FLOPS over time. The vertical axis is log-scale. Price is measured in 2019 dollars.\nIf we assume an exponential trend with noise,24 cheapest GPU price / half-precision FLOPS fell by around 26% per year, which would yield a factor of ten after ~8 years.25\nGPU price / half-precision FMA FLOPS\nFigure 5 shows GPU price / half-precision FMA FLOPS for all the GPUs in our search above for which we could find half-precision FMA theoretical performance.26 (Note that this includes all of our half-precision data above, since those FLOPS could be used for fused-multiply adds in particular). GPUs with TensorCores are marked in red.\nFigure 5: Real GPU price / half-precision FMA FLOPS over time. Price is measured in 2019 dollars.\nFigure 6 shows the running minimums of GPU price / HP FMA FLOPS.27\nFigure 6: Minimums in real GPU price / half-precision FMA FLOPS over time. Price is measured in 2019 dollars.\nGPU price / Half-Precision FMA FLOPS appears to be following an exponential trend over the last four years, falling by around 46% per year, for a factor of ten in ~4 years.28\nActive Prices\nGPU prices often go down from the time of release, and some popular GPUs are older ones that have gone down in price.29 Given this, it makes sense to look at active price data for the same GPU over time.\nData Sources\nWe collected data on peak theoretical performance in FLOPS from TechPowerUp30 and combined it with active GPU price data to get GPU price / FLOPS over time.31 Our primary source of historical pricing data was Passmark, though we also found a less trustworthy dataset on Kaggle which we used to check our analysis. We adjusted prices for inflation based on the consumer price index.32\nPassmark\nWe scraped pricing data33 on GPUs between 2011 and early 2020 from Passmark.34 Where necessary, we renamed GPUs from Passmark to be consistent with TechPowerUp.35 The Passmark data consists of 38,138 price points for 352 GPUs. We guess that these represent most popular GPUs. \nLooking at the ‘current prices’ listed on individual Passmark GPU pages, prices appear to be sourced from Amazon, Newegg, and Ebay. Passmark’s listed pricing data does not correspond to regular intervals. 
We don’t know if prices were pulled at irregular intervals, or if Passmark pulls prices regularly and then only lists major changes as price points. When we see a price point, we treat it as though the GPU is that price only at that time point, not indefinitely into the future.\nThe data contains several blips where a GPU is briefly sold very unusually cheaply. A random checking of some of these suggests to us that these correspond to single or small numbers of GPUs for sale, which we are not interested in tracking, because we are trying to predict AI progress, which presumably isn’t influenced by temporary discounts on tiny batches of GPUs.\nKaggle\nThis Kaggle dataset contains scraped data of GPU prices from price comparison sites PriceSpy.co.uk, PCPartPicker.com, Geizhals.eu from the years 2013 – 2018. The Kaggle dataset has 319,147 price points for 284 GPUs. Unfortunately, at least some of the data is clearly wrong, potentially because price comparison sites include pricing data from untrustworthy merchants.36 As such, we don’t use the Kaggle data directly in our analysis, but do use it as a check on our Passmark data. The data that we get from Passmark roughly appears to be a subset of the Kaggle data from 2013 – 2018,37 which is what we would expect if the price comparison engines picked up prices from the merchants Passmark looks at.\nLimitations\nThere are a number of reasons why we think this analysis may in fact not reflect GPU price trends:\n\nWe effectively have just one source of pricing data, Passmark.\nPassmark appears to only look at Amazon, Newegg, and Ebay for pricing data.\nWe are not sure, but we suspect that Passmark only looks at the U.S. versions of Amazon, Newegg, and Ebay, and pricing may be significantly different in other parts of the world (though we guess it wouldn’t be different enough to change the general trend much).\nAs mentioned above, we are not sure if Passmark pulls price data regularly and only lists major price changes, or pulls price data irregularly. If the former is true, our data may be overrepresenting periods where the price changes dramatically.\nNone of the price data we found includes quantities of GPUs which were available at that price, which means some prices may be for only a very limited number of GPUs.\nWe don’t know how much the prices from these datasets reflect the prices that a company pays when buying GPUs in bulk, which we may be more interested in tracking.\n\nA better version of this analysis might start with more complete data from price comparison engines (along the lines of the Kaggle dataset) and then filter out clearly erroneous pricing information in some principled way.\nData\nThe original scraped datasets with cards renamed to match TechPowerUp can be found here. GPU price / FLOPS data is graphed on a log scale in the figures below. Price points for the same GPU are marked in the same color. We adjusted prices for inflation using the consumer price index.38 All points below are in 2019 dollars.\nTo try to filter out noisy prices that didn’t last or were only available in small numbers, we took out the lowest 5% of data in every several day period39 to get the 95th percentile cheapest hardware. We then found linear and exponential trendlines of best fit through the available hardware with the lowest GPU price / FLOPS every several days.40\nGPU price / single-precision FLOPS\nFigures 7-10 show the raw data, 95th percentile data, and trendlines for single-precision GPU price / FLOPS for the Passmark dataset. 
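Concretely, the procedure just described (drop the cheapest 5% of price points in each window, keep the cheapest remaining point, fit an exponential trend through those minimums, and bootstrap a confidence interval) can be sketched roughly as below. This is a simplified illustration rather than the analysis code actually used; the column names, window handling, and bootstrap details are assumptions.

```python
# Rough sketch of the 95th-percentile-minimum trend fit described above.
# Assumes a dataframe with columns 'date' (datetime) and 'usd_per_gflops'
# (inflation-adjusted 2019 dollars); not the actual analysis code.
import numpy as np
import pandas as pd

def years_per_order_of_magnitude(annual_decline):
    # e.g. a 17%/year fall in price implies ~12.5 years per factor of ten
    return np.log(10) / -np.log(1.0 - annual_decline)

def fit_price_trend(df, window_days=10, n_boot=1000, seed=0):
    df = df.sort_values("date").copy()
    t0 = df["date"].min()
    df["years"] = (df["date"] - t0).dt.days / 365.25
    df["window"] = (df["date"] - t0).dt.days // window_days

    # In each window, drop the cheapest 5% of listings (brief or tiny-batch
    # prices), then keep the cheapest remaining price point.
    keep = []
    for _, g in df.groupby("window"):
        g = g[g["usd_per_gflops"] >= g["usd_per_gflops"].quantile(0.05)]
        if len(g):
            keep.append(g.loc[g["usd_per_gflops"].idxmin(), ["years", "usd_per_gflops"]])
    mins = pd.DataFrame(keep)

    def annual_decline(sub):
        # exponential trend = straight line in log space
        slope, _ = np.polyfit(sub["years"], np.log(sub["usd_per_gflops"]), 1)
        return 1.0 - np.exp(slope)

    rate = annual_decline(mins)

    # Bootstrap over the windowed minimums for a rough confidence interval.
    rng = np.random.default_rng(seed)
    boot = []
    for _ in range(n_boot):
        sample = mins.iloc[rng.integers(0, len(mins), len(mins))]
        boot.append(years_per_order_of_magnitude(annual_decline(sample)))
    return rate, years_per_order_of_magnitude(rate), np.percentile(boot, [2.5, 97.5])
```

The last helper is just the conversion used repeatedly on this page: an annual decline r gives ln(10) / -ln(1 - r) years per order of magnitude, so, for example, a 13% annual decline corresponds to roughly 16 to 17 years and a 40% decline to roughly 4.5 years.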
This folder contains plots of all our datasets, including the Kaggle dataset and combined Passmark + Kaggle dataset.41\nFigure 7: GPU price / single-precision FLOPS over time, taken from our Passmark dataset.42 Price is measured in 2019 dollars. This picture shows that the Kaggle data does appear to be a superset of the Passmark data from 2013 – 2018, giving us some evidence that the Passmark data is correct. The vertical axis is log-scale.\n\n\nFigure 8: The top 95% of data every 10 days for GPU price / single-precision FLOPS over time, taken from the Passmark dataset we plotted above. (Figure 7 with the cheapest 5% removed.) The vertical axis is log-scale.43\n\nFigure 9: The same data as Figure 8, with the vertical axis zoomed-in.\nFigure 10: The minimum data points from the top 95% of the Passmark dataset, taken every 10 days. We fit linear and exponential trendlines through the data. The vertical axis is log-scale.44\nAnalysis\nThe cheapest 95th percentile data every 10 days appears to fit relatively well to both a linear and exponential trendline. However we assume that progress will follow an exponential, because previous progress has followed an exponential.\nIn the Passmark dataset, the exponential trendline suggested that from 2011 to 2020, 95th-percentile GPU price / single-precision FLOPS fell by around 13% per year, for a factor of ten in ~17 years,45 bootstrap46 95% confidence interval 16.3 to 18.1 years.47 We believe the rise in price / FLOPS in 2017 corresponds to a rise in GPU prices due to increased demand from cryptocurrency miners.48 If we instead look at the trend from 2011 through 2016, before the cryptocurrency rise, we instead get that 95th-percentile GPU price / single-precision FLOPS price fell by around 13% per year, for a factor of ten in ~16 years.49\nThis is slower than the order of magnitude every ~12.5 years we found when looking at release prices. If we restrict the release price data to 2011 – 2019, we get an order of magnitude decrease every ~13.5 years instead,50 so part of the discrepancy can be explained because of the different start times of the datasets. To get some assurance that our active price data wasn’t erroneous, we spot checked the best active price at the start of 2011, which was somewhat lower than the best release price at the same time, and confirmed that its given price was consistent with surrounding pricing data.51 We think active prices are likely to be closer to the prices at which people actually bought GPUs, so we guess that ~17 years / order of magnitude decrease is a more accurate estimate of the trend we care about.\nGPU price / half-precision FLOPS\nFigures 11-14 show the raw data, 95th percentile data, and trendlines for half-precision GPU price / FLOPS for the Passmark dataset. This folder contains plots of the Kaggle dataset and combined Passmark + Kaggle dataset.\n Figure 11: GPU price / half-precision FLOPS over time, taken from our Passmark dataset. Price is measured in 2019 dollars.52 This picture shows that the Kaggle data does appear to be a superset of the Passmark data from 2013 – 2018, giving us some evidence that the Passmark data is reasonable. The vertical axis is log-scale.\n\nFigure 12: The top 95% of data every 30 days for GPU price / half-precision FLOPS over time, taken from the Passmark dataset we plotted above. (Figure 11 with the cheapest 5% removed.) 
The vertical axis is log-scale.53\n\nFigure 13: The same data as Figure 12, with the vertical axis zoomed-in.\nFigure 14: The minimum data points from the top 95% of the Passmark dataset, taken every 30 days. We fit linear and exponential trendlines through the data. The vertical axis is log-scale.54\nAnalysis\nIf we assume the trend is exponential, the Passmark trend seems to suggest that from 2015 to 2020, 95th-percentile GPU price / half-precision FLOPS of GPUs has fallen by around 21% per year, for a factor of ten over ~10 years,55 bootstrap56 95% confidence interval 8.8 to 11 years.57 This is fairly close to the ~8 years / order of magnitude decrease we found when looking at release price data, but we treat active prices as a more accurate estimate of the actual prices at which people bought GPUs. As in our previous dataset, there is a noticeable rise in 2017, which we think is due to GPU prices increasing as a result of cryptocurrency miners. If we look at the trend from 2015 through 2016, before this rise, we get that 95th-percentile GPU price / half-precision FLOPS has fallen by around 14% per year, which would yield a factor of ten over ~8 years.58\nGPU price / half-precision FMA FLOPS\nFigures 15-18 show the raw data, 95th percentile data, and trendlines for half-precision GPU price / FMA FLOPS for the Passmark dataset. GPUs with Tensor Cores are marked in black. This folder contains plots of the Kaggle dataset and combined Passmark + Kaggle dataset.\nFigure 15: GPU price / half-precision FMA FLOPS over time, taken from our Passmark dataset.59 price is measured in 2019 dollars. This picture shows that the Kaggle data does appear to be a superset of the Passmark data from 2013 – 2018, giving us some evidence that the Passmark data is correct. The vertical axis is log-scale.\n\nFigure 16: The top 95% of data every 30 days for GPU price / half-precision FMA FLOPS over time, taken from the Passmark dataset we plotted above.60 (Figure 15 with the cheapest 5% removed.)\n\nFigure 17: The same data as Figure 16, with the vertical axis zoomed-in.\nFigure 18: The minimum data points from the top 95% of the Passmark dataset, taken every 30 days. 
We fit linear and exponential trendlines through the data.61\nAnalysis\nIf we assume the trend is exponential, the Passmark trend seems to suggest the 95th-percentile GPU price / half-precision FMA FLOPS of GPUs has fallen by around 40% per year, which would yield a factor of ten in ~4.5 years,62 with a bootstrap63 95% confidence interval 4 to 5.2 years.64 This is fairly close to the ~4 years / order of magnitude decrease we found when looking at release price data, but we think active prices are a more accurate estimate of the actual prices at which people bought GPUs.\nThe figures above suggest that certain GPUs with Tensor Cores were a significant (~half an order of magnitude) improvement over existing GPU price / half-precision FMA FLOPS.\nConclusion\nWe summarize our results in the table below.\nYears per order of magnitude price decline | Release Prices | 95th-percentile Active Prices | 95th-percentile Active Prices (pre-crypto price rise)\nData period (single-precision) | 11/2007 – 1/2020 | 3/2011 – 1/2020 | 3/2011 – 12/2016\n$ / single-precision FLOPS | 12.5 | 17 | 16\nData period (half-precision) | 9/2014 – 1/2020 | 1/2015 – 1/2020 | 1/2015 – 12/2016\n$ / half-precision FLOPS | 8 | 10 | 8\n$ / half-precision FMA FLOPS | 4 | 4.5 | —\nRelease price data seems to generally support the trends we found in active prices, with the notable exception of trends in GPU price / single-precision FLOPS, which cannot be explained solely by the different start dates.65 We think the best estimate of the overall trend for prices at which people recently bought GPUs is the 95th-percentile active price data from 2011 – 2020, since release price data does not account for existing GPUs becoming cheaper over time. The pre-crypto trends are similar to the overall trends, suggesting that the trends we are seeing are not anomalous due to cryptocurrency.\nGiven that, we guess that GPU prices as a whole have fallen at rates that would yield an order of magnitude over roughly:\n\n17 years for single-precision FLOPS\n10 years for half-precision FLOPS\n5 years for half-precision fused multiply-add FLOPS\n\nHalf-precision FLOPS seem to have become cheaper substantially faster than single-precision in recent years. This may be a “catching up” effect as more of the space on GPUs was allocated to half-precision computing, rather than reflecting more fundamental technological progress.\nPrimary author: Asya Bergal\nNotes", "url": "https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/", "title": "2019 recent trends in GPU price per FLOPS", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-03-25T23:46:49+00:00", "paged_url": "https://aiimpacts.org/feed?paged=8", "authors": ["Asya Bergal"], "id": "5213c9975670b1d24d274a82197ec3d7", "summary": []} {"text": "Cortés, Pizarro, and Afonso as precedents for takeover\n\nDaniel Kokotajlo, 29 February 2020\nEpistemic status: I am not a historian, nor have I investigated these case studies in detail. I admit I am still uncertain about how the conquistadors were able to colonize so much of the world so quickly. I think my ignorance is excusable because this is just a blog post; I welcome corrections from people who know more. If it generates sufficient interest I might do a deeper investigation. Even if I’m right, this is just one set of historical case-studies; it doesn’t prove anything about AI, even if it is suggestive. Finally, in describing these conquistadors as “successful,” I simply mean that they achieved their goals, not that what they achieved was good. 
\nSummary\nIn the span of a few years, some minor European explorers (later known as the conquistadors) encountered, conquered, and enslaved several huge regions of the world. That they were able to do this is surprising; their technological advantage was not huge. (This was before the scientific and industrial revolutions.) From these cases, I think we learn that it is occasionally possible for a small force to quickly conquer large parts of the world, despite:\n\nHaving only a minuscule fraction of the world’s resources and power\nHaving technology + diplomatic and strategic cunning that is better but not that much better\nHaving very little data about the world when the conquest begins\nBeing disunited\n\nWhich all suggests that it isn’t as implausible that a small AI takes over the world in mildly favorable circumstances as is sometimes thought. EDIT: In light of good pushback from people (e.g. Lucy.ea8 and Matthew Barnett) about the importance of disease, I think one should probably add a caveat to the above: “In times of chaos & disruption, at least.” NEW EDIT: After reading three giant history books on the subject, I take back my previous edit. My original claims were correct.\nThree shocking true stories\nI highly recommend you read the wiki pages yourself; otherwise, here are my summaries:\nCortés: [wiki] [wiki]\n\nApril 1519: Hernán Cortés lands in Yucatan with ~500 men, 13 horses, and a few cannons. He destroys his ships so his men won’t be able to retreat. His goal is to conquer the Aztec empire of several million people.\nHe makes his way towards the imperial capital, Tenochtitlán. Along the way he encounters various local groups, fighting some and allying with some. He is constantly outnumbered but his technology gives him an advantage in fights. His force grows in size, because even though he loses Spaniards he gains local allies who resent Aztec rule.\nTenochtitlán is an island fortress (like Venice) with a population of over 200,000, making it one of the largest and richest cities in the world at the time. Cortés arrives in the city asking for an audience with the Emperor, who receives him warily.\nCortés takes the emperor hostage within his own palace, indirectly ruling Tenochtitlán through him.\nCortés learns that the Spanish governor has landed in Mexico with a force twice his size, intent on arresting him. (Cortés’ expedition was illegal!) Cortés leaves 200 men guarding the Emperor, marches to the coast with the rest, surprises and defeats the new Spaniards in battle, and incorporates the survivors into his army.\nJuly 1520: Back at the capital, the locals are starting to rebel against his men. Cortés marches back to the capital, uniting his forces just in time to be besieged in the imperial palace. They murder the emperor and fight their way out of the city overnight, taking heavy losses.\nThey shelter in another city (Tlaxcala) that was thinking about rebelling against the Aztecs. Cortés allies with the Tlaxcalans and launches a general uprising against the Aztecs. Not everyone sides with him; many city-states remain loyal to Tenochtitlan. Some try to stay neutral. Some join him at first, and then abandon him later. Smallpox sweeps through the land, killing many on all sides and causing general chaos.\nMay 1521: The final assault on Tenochtitlán. By this point, Cortés has about 1,000 Spanish troops and 80,000 – 200,000 allied native warriors. He had 16 cannons and 13 boats. The Aztecs have 80,000 – 300,000 warriors and 400 boats. 
Cortés and his allies win.\nLater, the Spanish would betray their native allies and assert hegemony over the entire region, in violation of the treaties they had signed.\n\nPizarro [wiki] [wiki]\n\n1532: Francisco Pizarro arrives in Inca territory with 168 Spanish soldiers. His goal is to conquer the Inca empire, which was much bigger than the Aztec empire.\nThe Inca empire is in the middle of a civil war and a devastating plague.\nPizarro makes it to the Emperor right after the Emperor defeats his brother. Pizarro is allowed to approach because he promises that he comes in peace and will be able to provide useful information and gifts.\nAt the meeting, Pizarro ambushes the Emperor, killing his retinue with a volley of gunfire and taking him hostage. The remainder of the Emperor’s forces in the area back away, probably confused and scared by the novel weapons and hesitant to keep fighting for fear of risking the Emperor’s life.\nOver the next months, Pizarro is able to leverage his control over the Emperor to stay alive and order the Incans around; eventually he murders the Emperor and makes an alliance with local forces (some of the Inca generals) to take over the capital city of Cuzco.\nThe Spanish continue to rule via puppets, primarily Manco Inca, who is their puppet ruler while they crush various rebellions and consolidate their control over the empire. Manco Inca escapes and launches a rebellion of his own, which is partly successful: He utterly wipes out four columns of Spanish reinforcements, but is unable to retake the capital. With the morale and loyalty of his followers dwindling, Manco Inca eventually gives up and retreats, leaving the Spanish still in control.\nThen the Spanish ended up fighting each other for a while, while also putting down more local rebellions. After a few decades Spanish dominance of the region is complete. (1572).\n\nAfonso [wiki] [wiki] [wiki]\n\n1506: Afonso helps the Portuguese king come up with a shockingly ambitious plan. Eight years prior, the first Europeans had rounded the coast of Africa and made it to the Indian Ocean. The Indian Ocean contained most of the world’s trade at the time, since it linked up the world’s biggest and wealthiest regions. See this map of world population (timestamp 3:45). Remember, this is prior to the Industrial and Scientific Revolutions; Europe is just coming out of the Middle Ages and does not have an obvious technological advantage over India or China or the Middle East, and has an obvious economic disadvantage. And Portugal is a just tiny state on the edge of the Iberian peninsula.\nThe plan is: Not only will we go into the Indian Ocean and participate in the trading there — cutting out all the middlemen who are currently involved in the trade between that region and Europe — we will conquer strategic ports around the region so that no one else can trade there!\nLong story short, Afonso goes on to complete this plan by 1513. (!!!)\n\nSome comparisons and contrasts:\n\nAfonso had more European soldiers at his disposal than Cortes or Pizarro, but not many more — usually he had about a thousand or so. He did have more reinforcements and support from home.\nLike them, he was usually significantly outnumbered in battles. 
Like them, the empires he warred against were vastly wealthier and more populous than his forces.\nLike them, Afonso was often able to exploit local conflicts to gain local allies, which were crucial to his success.\nUnlike them, his goal wasn’t to conquer the empires entirely, just to get and hold strategic ports.\nUnlike them, he was fighting empires that were technologically advanced; for example, in several battles his enemies had more cannons and gunpowder than he did.\nThat said, it does seem that Portuguese technology was qualitatively better in some respects (ships, armor, and cannons, I’d say.) Not dramatically better, though.\nWhile Afonso’s was a naval campaign, he did fight many land battles, usually marine assaults on port cities, or defenses of said cities against counterattacks. So superior European naval technology is not by itself enough to explain his victory, though it certainly was important.\nPlague and civil war were not involved in Afonso’s success.\n\nWhat explains these devastating conquests?\nWrong answer: I cherry-picked my case studies.\nHistory is full of incredibly successful conquerors: Alexander the Great, Ghenghis Khan, etc. Perhaps some people are just really good at it, or really lucky, or both.\nHowever: Three incredibly successful conquerors from the same tiny region and time period, conquering three separate empires? Followed up by dozens of less successful but still very successful conquerors from the same region and time period? Surely this is not a coincidence. Moreover, it’s not like the conquistadors had many failed attempts and a few successes. The Aztec and Inca empires were the two biggest empires in the Americas, and there weren’t any other Indian Oceans for the Portuguese to fail at conquering.\nFun fact: I had not heard of Afonso before I started writing this post this morning. Following the Rule of Three, I needed a third example and I predicted on the basis of Cortes and Pizarro that there would be other, similar stories happening in the world at around that time. That’s how I found Afonso.\nRight answer: Technology\nHowever, I don’t think this is the whole explanation. The technological advantage of the conquistadors was not overwhelming.\nWhatever technological advantage the conquistadors had over the existing empires, it was the sort of technological advantage that one could acquire before the Scientific and Industrial revolutions. Technology didn’t change very fast back then, yet Portugal managed to get a lead over the Ottomans, Egyptians, Mughals, etc. that was sufficient to bring them victory. On paper, the Aztecs and Spanish were pretty similar: Both were medieval, feudal civilizations. I don’t know for sure, but I’d bet there were at least a few techniques and technologies the Aztecs had that the Spanish didn’t. And of course the technological similarities between the Portuguese and their enemies were much stronger; the Ottomans even had access to European mercenaries! Even in cases in which the conquistadors had technology that was completely novel — like steel armor, horses, and gunpowder were to the Aztecs and Incas — it wasn’t god-like. The armored soldiers were still killable; the gunpowder was more effective than arrows but limited in supply, etc.\n(Contrary to popular legend, neither Cortés nor Pizarro were regarded as gods by the people they conquered. 
The Incas concluded pretty early on that the Spanish were mere men, and while the idea did float around the Aztecs for a bit the modern historical consensus is that most of them didn’t take it seriously.)\nAsk yourself: Suppose Cortés had found 500 local warriors, gave them all his equipment, trained them to use it expertly, and left. Would those local men have taken over all of Mexico? I doubt it. And this is despite the fact that they would have had much better local knowledge than Cortés did! Same goes for Pizarro and Afonso. Perhaps if he had found 500 local warriors led by an exceptional commander it would work. But the explanation for the conquistador’s success can’t just be that they were all exceptional commanders; that would be positing too much innate talent to occur in one small region of the globe at one time.\nRight answer: Strategic and diplomatic cunning\nThis is my non-expert guess about the missing factor that joins with technology to explain this pattern of conquistador success.\nThey didn’t just have technology; they had effective strategy and they had effective diplomacy. They made long-term plans that worked despite being breathtakingly ambitious. (And their short-term plans were usually pretty effective too, read the stories in detail to see this.) Despite not knowing the local culture or history, these conquistadors made surprisingly savvy diplomatic decisions. They knew when they could get away with breaking their word and when they couldn’t; they knew which outrages the locals would tolerate and which they wouldn’t; they knew how to convince locals to ally with them; they knew how to use words to escape militarily impossible situations… The locals, by contrast, often badly misjudged the conquistadors, e.g. not thinking Pizarro had the will (or the ability?) to kidnap the emperor, and thinking the emperor would be safe as long as they played along.\nThis raises the question, how did they get that advantage? My answer: they had experience with this sort of thing, whereas locals didn’t. Presumably Pizarro learned from Cortés’ experience; his strategy was pretty similar. (See also: the prior conquest of the Canary Islands by the Spanish). In Afonso’s case, well, the Portuguese had been sailing around Africa, conquering ports and building forts for more than a hundred years.\nLessons I think we learn\nI think we learn that:\nIt is occasionally possible for a small force to quickly conquer large parts of the world, despite:\n\nHaving only a minuscule fraction of the world’s resources and power\nHaving technology + diplomatic and strategic cunning that is better but not that much better\nHaving very little data about the world when the conquest begins\nBeing disunited\n\nWhich all suggests that it isn’t as implausible that a small AI takes over the world in mildly favorable circumstances as is sometimes thought.\n EDIT: In light of good pushback from people (e.g. Lucy.ea8 and Matthew Barnett) about the importance of disease, I think one should probably add a caveat to the above: “In times of chaos & disruption, at least.” \nHaving only a minuscule fraction of the world’s resources and power\nIn all three examples, the conquest was more or less completed without support from home; while Spain/Portugal did send reinforcements, it wasn’t even close to the entire nation of Spain/Portugal fighting the war. So these conquests are examples of non-state entities conquering states, so to speak. 
(That said, their claim to represent a large state may have been crucial for Cortes and Pizarro getting audiences and respect initially.) Cortés landed with about a thousandth the troops of Tenochtitlan, which controlled a still larger empire of vassal states. Of course, his troops were better equipped, but on the other hand they were also cut off from resupply, whereas the Aztecs were in their home territory, able to draw on a large civilian population for new recruits and resupply.\nThe conquests succeeded in large part due to diplomacy. This has implications for AI takeover scenarios; rather than imagining a conflict of humans vs. robots, we could imagine humans vs. humans-with-AI-advisers, with the latter faction winning and somehow by the end of the conflict the AI advisers have managed to become de facto rulers, using the humans who obey them to put down rebellions by the humans who don’t.\nHaving technology + diplomatic and strategic skill that is better but not that much better\nAs previously mentioned, the conquistadors didn’t enjoy god-like technological superiority. In the case of Afonso the technology was pretty similar. Technology played an important role in their success, but it wasn’t enough on its own. Meanwhile, the conquistadors may have had more diplomatic and strategic cunning (or experience) than the enemies they conquered. But not that much more–they are only human, after all. And their enemies were pretty smart.\nIn the AI context, we don’t need to imagine god-like technology (e.g. swarms of self-replicating nanobots) to get an AI takeover. It might even be possible without any new physical technologies at all! Just superior software, e.g. piloting software for military drones, targeting software for anti-missile defenses, cyberwarfare capabilities, data analysis for military intelligence, and of course excellent propaganda and persuasion.\nNor do we need to imagine an AI so savvy and persuasive that it can persuade anyone of anything. We just need to imagine it about as cunning and experienced relative to its enemies as Cortés, Pizarro, and Afonso were relative to theirs. (Presumably no AI would be experienced with world takeover, but perhaps an intelligence advantage would give it the same benefits as an experience advantage.) And if I’m wrong about this explanation for the conquistador’s success–if they had no such advantage in cunning/experience–then the conclusion is even stronger.\nAdditionally, in a rapidly-changing world that is undergoing slow takeoff, where there are lesser AIs and AI-created technologies all over the place, most of which are successfully controlled by humans, AI takeover might still happen if one AI is better, but not that much better, than the others.\nHaving very little data about the world when the conquest begins\nCortés invaded Mexico knowing very little about it. After all, the Spanish had only realized the Americas existed two decades prior. He heard rumors of a big wealthy empire and he set out to conquer it, knowing little of the technology and tactics he would face. Two years later, he ruled the place.\nPizarro and Afonzo were in better epistemic positions, but still, they had to learn a lot of important details (like what the local power centers, norms, and conflicts were, and exactly what technology the locals had) on the fly. But they were good at learning these things and making it up as they went along, apparently.\nWe can expect superhuman AI to be good at learning. 
Even if it starts off knowing very little about the world — say, it figured out it was in a training environment and hacked its way out, having inferred a few general facts about its creators but not much else — if it is good at learning and reasoning, it might still be pretty dangerous.\nBeing disunited\nCortés invaded Mexico in defiance of his superiors and had to defeat the army they sent to arrest him. Pizarro ended up fighting a civil war against his fellow conquistadors in the middle of his conquest of Peru. Afonzo fought Greek mercenaries and some traitor Portuguese, conquered Malacca against the  orders of a rival conquistador in the area, and was ultimately demoted due to political maneuvers by rivals back home.\nThis astonishes me. Somehow these conquests were completed by people who were at the same time busy infighting and backstabbing each other!\nWhy was it that the conquistadors were able to split the locals into factions, ally with some to defeat the others, and end up on top? Why didn’t it happen the other way around: some ambitious local ruler talks to the conquistadors, exploits their internal divisions, allies with some to defeat the others, and ends up on top?\nI think the answer is partly the “diplomatic and strategic cunning” mentioned earlier, but mostly other things. (The conquistadors were disunited, but presumably were united in the ways that mattered.) At any rate, I expect AIs to be pretty good at coordinating too; they should be able to conquer the world just fine even while competing fiercely with each other. For more on this idea, see this comment.\nBy Daniel Kokotajlo\nAcknowledgements\nThanks to Katja Grace for feedback on a draft. All mistakes are my own, and should be pointed out to me via email at daniel@aiimpacts.org. Edit: Also, when I wrote this post I had forgotten that the basic idea for it probably came from this comment by JoshuaFox.\n(Front page image from the Conquest of México series. Representing the 1521 Fall of Tenochtitlan, in the Spanish conquest of the Aztec Empire)", "url": "https://aiimpacts.org/cortes-pizarro-and-afonso-as-precedents-for-ai-takeover/", "title": "Cortés, Pizarro, and Afonso as precedents for takeover", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-03-01T03:43:04+00:00", "paged_url": "https://aiimpacts.org/feed?paged=8", "authors": ["Daniel Kokotajlo"], "id": "cdedbaa773129f3995b05700e7b9bd00", "summary": []} {"text": "Incomplete case studies of discontinuous progress\n\nPublished 7 Feb 2020\nThis is a list of potential cases of discontinuous technological progress that we have investigated partially or not at all.\nList\nIn the course of investigating cases of potentially discontinuous technological progress, we have collected around fifty suggested instances that we have not investigated fully. This is a list of them and what we know about them.\nThe Haber Process\nThis was previously listed as NSD, but that is tentatively revoked while we investigate a complication with the data.\nPrevious explanation\nThe Haber process was the first energy efficient method of producing ammonia, which is key to making fertilizer. 
The reason to expect that the Haber process might represent discontinuous technological progress is that previous processes were barely affordable, while the Haber process was hugely valuable—it is credited with fixing much of the nitrogen now in human bodies—and has been used on an industrial scale since 1913.\nA likely place to look for discontinuities then is in the energy cost of fixing nitrogen. Table 4 in Grünewald’s Chemistry for the Future suggests that the invention of the Haber reduced the energy expense by around 60% per nitrogen bonded over a method developed eight years earlier. The previous step however appears to have represented at least a 50% improvement over the process of two years earlier (though the figure is hard to read). Later improvements to the Haber process appear to have been comparable. Thus it seems the Haber process was not an unusually large improvement in energy efficiency, but was probably instead the improvement that happened to take the process into the range of affordability.\nSince it appears that energy was an important expense, and the Haber process was especially notable for being energy efficient, and yet did not represent a particular discontinuity in energy efficiency progress, it seems unlikely that the Haber process involved a discontinuity. Furthermore, it appears that the world moved to using the Haber process over other sources of fertilizer gradually, suggesting there was not a massive price differential, nor any sharp practical change as a result of the adoption of the process. In the 20’s the US imported much nitrogen from Chile. Alternative nitrogen source calcium cyanamide reached peak production in 1945, thirty years since the Haber process reached industrial scale production.\nThe amount of synthetic nitrogen fertilizer applied hasn’t abruptly changed since 1860 (see p24). Neither has the amount of food produced, for a few foods at least.\nIn sum, it seems the Haber process has had a large effect, but it was produced by a moderate change in efficiency, and manifest over a long period.\n\nAluminium\nIt is sometimes claimed that the discovery of the Hall–Héroult process in the 1880s brought the price of aluminium down precipitously. We found several pieces of quantitative data about this, but they seriously conflict. The most rigorous looking is a report from Patricia Plunkert at the US Geological Survey, from which we get the following data. However note that some of these figures may be off by orders of magnitude, according to other sources.\nPlunkert provides a table of historic aluminium prices, according to which the nominal price fell from $8 per pound to $0.58 per pound sometime between 1887 and 1895 (during most of which time no records are available). This period probably captures the innovation of interest, as the Hall–Héroult process was patented in 1886 according to Plunkert, and the price only dropped by $1 per pound during the preceding fifteen years according to her table. Plunkert also says that the price was held artificially low to encourage consumers in the early 1900s, suggesting the same may have been true earlier, however this seems likely to be a small correction.\nThe sewing machine\nEarly sewing machines apparently brought the time to produce clothing down by an order of magnitude (from 14 hours to 75 minutes for a man’s dress shirt by one estimate). 
However it appears that the technology progressed more slowly, then was taken up by the public later – probably when it became cost-effective, at which time adoptees may have experienced a rapid reduction in sewing time (presumably at some expense). These impressions are from a very casual perusal of the evidence.\nVideo compression\nBlogger John McGowan claims that video compression performance was constant at a ratio of around 250 for about seven years prior to 2003, then jumped to around 900.\nFigure 1: video compression performance in the past two decades.\nInformation storage volume\nAccording to the Performance Curves Database (PCDB), ‘information storage volume’ for both handwriting and printing has grown by a factor of three in recent years, after less than doubling in the hundred years previously. It is unclear what exactly is being measured here however.\nUndersea cable price\nThe bandwidth per cable length available for a dollar apparently grew by more than 1000 times in around 1880.\n\n\nInfrared detector sensitivity\nWe understand that infrared detector sensitivity is measured in terms of ‘Noise Equivalent Power’ (NEP), or the amount of power (energy per time) that needs to hit the sensor for the sensor’s output to have a signal:noise ratio of one. We investigated progress in infrared detection technology because according to Academic Press (1974), the helium-cooled germanium bolometer represented a four order of magnitude improvement in sensitivity over uncooled detectors.1 However our own investigation suggests there were other innovations between uncooled detectors and the bolometer in question, and thus no abrupt improvement.\nWe list advances we know of here, and summarize them in Figure 5. The 1947 point is uncooled. The 1969 point is nearly four orders of magnitude better. However we know of at least four other detectors with intermediate levels of sensitivity, and these are spread fairly evenly between the uncooled device and the most efficient cooled one listed.\nWe have not checked whether the progress between the uncooled detector and the first cooled detector was discontinuous, given previous rates. This is because we have no strong reason to suspect it is.\nFigure 2: Sensitivity of infrared detectors during the transition to liquid-helium-cooled devices.\nGenome sequencing – IIP\nThis appears to have seen at least a moderate discontinuity. An investigation is in progress.\nIt was suggested to us in particular that Next Generation Sequencing produced discontinuous progress in output per instrument run for DNA sequencing.\n\nAircraft shot down per shell fired\n\n\nWe’ve seen it claimed that the proximity fuse increased this metric by 2x or more. We don’t know what the trend was beforehand, however.\n\n\nTime to produce clothing\n\n\nThe Sewing machine was proposed as discontinuous in this metric. We have not investigated.\n\n\nSensitivity of infrared detectors\n\n\nCryogenically cooled semiconductor sensors were proposed as discontinuous in this metric. We have not investigated.\n\n\nFrames per Second\n\n\nIt was suggested that something in high sensitivity, high precision metrology, e.g. trillion frame-per-second camera from MIT, would be discontinuous in this metric. We have not investigated.\n\n\nAccess to Information\n\n\nSmart phones were suggested as a discontinuity in this metric. We have not investigated.\n\n\nSpread of minimally invasive surgery\n\n\nLaparoscopic cholecystectomy was suggested as a discontinuity in this metric. 
We have not investigated.\n\n\nMax. submerged endurance, submerged runs\n\n\nNuclear-powered submarines may be a discontinuity in this metric. We have not investigated.\n\n\nClothmaking efficiency\n\n\nThe Jacquard Loom and the Spinning Jenny were suggested as discontinuities in this metric. We have not investigated.\n\n\nPersonal armor protectiveness-to-weight ratio\n\n\nKevlar was suggested as a discontinuity in this metric. We have not investigated.\n\n\nLumens per watt\n\n\nHigh-pressure sodium lamps were suggested as discontinuities in this metric. We have not investigated.\n\n\nLinear programming\n\n\nThe Simplex algorithm was suggested as a discontinuity in this metric. We have not investigated.\n\n\nFourier transform speed\n\n\nThe fast Fourier transform was suggested as a discontinuity in this metric. We have not investigated.\n\n\nPolynomial identity testing efficiency\n\n\nProbabilistic testing methods were suggested as discontinuities in this metric. We have not investigated.\n\n\nAudio compression efficiency\n\n\nThe MP3 format was suggested as a discontinuity in this metric. We have not investigated.\n\n\nCrop yields\n\n\nThis amazing genetic modification, if it works as claimed, may well be a discontinuity in this metric. We have not investigated.\nThanks to Stephen Jordan, Bren Worth, Finan Adamson, and others for suggesting potential discontinuities in this list. \n", "url": "https://aiimpacts.org/incomplete-case-studies-of-discontinuous-progress/", "title": "Incomplete case studies of discontinuous progress", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-02-08T04:37:48+00:00", "paged_url": "https://aiimpacts.org/feed?paged=8", "authors": ["Katja Grace"], "id": "3f62dd4e5a669aea66fa1431b912dd53", "summary": []} {"text": "Effect of AlexNet on historic trends in image recognition\n\nAlexNet did not represent a greater than 10-year discontinuity in the fraction of images labeled incorrectly, or the log or inverse of this error rate, relative to progress in the past two years of competition data.\nDetails\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nBackground\nThe annual ImageNet competition asks researchers to build programs to label images.1 It began in 2010, when every team labeled at least 25% of images wrong. The same was true in 2011, and would have been true in 2012, if not for AlexNet, a convolutional neural network that mislabeled only 16.4% of images.2\nTrends\nPercent of images mislabeled\nData\nWe collected data on the error rate (%) of the 2010 – 2012 ImageNet competitors from Table 6 of Russakovsky et al3 into this spreadsheet. See Figure 1 below.\nFigure 1: Error rate (%) of ImageNet competitors from 2010 – 2012\nDiscontinuity measurement\nThe ImageNet competition had only been going for two years when AlexNet entered, so the past trend is very short. Given this, the shape of the curve prior to AlexNet is entirely ambiguous. 
We treat the trend as linear for simplicity, but given that, it is better to choose a transformation of the data that we expect to be linear, given our understanding of the situation.\nTwo plausible transformations are the log of the error, and the reciprocal of the error rate.4 These two transformations of the data are shown in Figures 2 and 3 below.\nFigure 2: Log base 2 / error rate of ImageNet competitors from 2010 – 2012\nFigure 3: 1 / error rate of ImageNet competitors from 2010 – 2012\nThe best 2012 AlexNet competitor gives us discontinuous jumps of 3 years of progress at previous rates for the raw error rate, 4 years of progress at previous rates for log base 2 of the error rate, or 6 years of progress at previous rates for 1 / the error rate.5 For the 6-year discontinuity, we tabulated a number of other potentially relevant metrics in the ‘Notable discontinuities under 10 years’ tab here.\nNotes\n", "url": "https://aiimpacts.org/effect-of-alexnet-on-historic-trends-in-image-recognition/", "title": "Effect of AlexNet on historic trends in image recognition", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-02-08T02:40:36+00:00", "paged_url": "https://aiimpacts.org/feed?paged=8", "authors": ["Katja Grace"], "id": "bab7348792af37ca1f4a92ffb48ed8c0", "summary": []} {"text": "Historic trends in transatlantic message speed\n\nThe speed of delivering a short message across the Atlantic Ocean saw at least three discontinuities of more than ten years before 1929, all of which also were more than one thousand years: a 1465-year discontinuity from Columbus’ second voyage in 1493, a 2085-year discontinuity from the first telegraph cable in 1858, and then a 1335-year discontinuity from the second telegraph cable in 1866.\nDetails\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nBackground\nSummary of historic developments\nAll communications between Europe and North America were carried on ships until 1858, when the first telegraph messages were transmitted over cable between the UK and US.1 That first cable only lasted six weeks, and took more than sixteen hours to send a message from the Queen.2\nA permanent cable wasn’t laid until eight years later.3 Better telegraph cables were laid a further thirty and sixty years later. We do not investigate developments after 1929.\nFigure 1: Undersea communications cables became common in the long run: map of undersea communications cables in 2007.4\nTrends\nTransatlantic message speed, 140 character message\nWe looked at historic times to send messages across the Atlantic Ocean.\nMessage speed can depend on the length of the message. Where this was relevant, we somewhat arbitrarily chose to investigate for a 140 character message. We measure fastest speeds of real historic systems that could send 140 character messages across the Atlantic Ocean. We do not require that a 140 character message was actually sent by the method in question. \nWe generally use whatever route was actually taken (or supposed in an estimate), and do not attempt to infer faster speeds possible had an optimal route been taken (though note that because we are measuring speed rather than time to cross the Ocean, route length is adjusted for to a first approximation).\nWe only investigated this metric from 1492-1493 and 1841-1928. 
We do not investigate 1493-1841 because our data is insufficiently complete to determine how continuous it was.5\nData\nOur data for message speed came from a variety of online sources, and has not been thoroughly vetted. The full dataset with sources can be found here.6\nBecause message delivery coincided with passenger travel until the first telegraph, data until then coincides with that used in our investigation into historic trends in transatlantic passenger travel. \nThe resulting trend is shown in Figures 2-3.\n Figure 2: Average speed for message transmission across the Atlantic in recent centuries (see Figure 3 for longer term trend) \n Figure 3: Average speed for message transmission across the Atlantic. \nDiscontinuity measurement\nWe measure discontinuities by comparing progress made at a particular time to the past trend. For this purpose, we treat the past trend at any given point as exponential or linear depending on apparent fit, and judge a new trend to have begun when the recent trend has diverged sufficiently from the longer term trend. See our spreadsheet, tab ‘Message’, to view the trends, and our methodology page for details on how to interpret our sheets and how we divide data into trends. \nGiven these judgments about past progress, there were three discontinuities of more than ten years, all of which were more than one thousand years: a 1465-year discontinuity from Columbus’ second voyage in 1493, then a 2085-year discontinuity from the first telegraph cable in 1858, and then a 1335-year discontinuity from the improved telegraph cable in 1866.7 In addition to the size of these discontinuities in years, we have tabulated a number of other potentially relevant metrics here.8\nDiscussion of causes\n Transatlantic message speed is a narrower metric than overall message speed, precluding some technologies that can only deliver messages over short distances or on land (e.g. the semaphore telegraph, which relied on a series of towers within line of sight). We expected this would make a discontinuity more likely.\nNotes\n", "url": "https://aiimpacts.org/historic-trends-in-transatlantic-message-speed/", "title": "Historic trends in transatlantic message speed", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-02-08T02:39:47+00:00", "paged_url": "https://aiimpacts.org/feed?paged=8", "authors": ["Katja Grace"], "id": "c3ea656ba75568819f102c40fa6c0873", "summary": []} {"text": "Historic trends in long-range military payload delivery\n\nThe speed at which a military payload could cross the Atlantic Ocean contained six greater than 10-year discontinuities in 1493 and between 1841 and 1957: \nDate | Mode of transport | Knots | Discontinuity size (years of progress at past rate)\n1493 | Columbus’ second voyage | 5.8 | 1465\n1884 | Oregon | 18.6 | 10\n1919 | WWI Bomber (first non-stop transatlantic flight) | 106 | 351\n1938 | Focke-Wulf Fw 200 Condor | 174 | 19\n1945 | Lockheed Constellation | 288 | 25\n1957 | R-7 (ICBM) | ~10,000 | ~500\nDetails\nBackground\nThe speed at which a weapons payload could be delivered to a target on the opposite side of the ocean appears to have been limited to the speed of a piloted vehicle (and so coincided with the speed of passenger delivery) until the first long-range missiles became available in the late 1950s.1 \nTrends\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nTransatlantic military payload delivery speed\nWe look at the fastest speeds of real historic systems that could have delivered military payloads across the Atlantic Ocean. 
We do not require that any military payload was actually sent by the method in question. \nWe generally use whatever route was actually taken (or supposed in an estimate), and do not attempt to infer faster speeds possible had an optimal route been taken (though note that because we are measuring speed rather than time to cross the Ocean, route length is adjusted for to a first approximation). \nWe only investigated this metric from 1492-1493 and 1841-1957. We do not investigate 1493-1841 because our data is insufficiently complete to determine how continuous it was.2\nData\nWe collated records of historic potential times to cross the Atlantic Ocean for military payloads. These are available at the ‘Payload’ tab of this spreadsheet, and are displayed in Figures 1 and 2 below. We have not thoroughly verified this data. \nBecause military payload delivery coincided with passenger travel until the late 1950s, most of our data coincides with that used in our investigation into historic trends in transatlantic passenger travel. \nThe advent of ICBMs in 1957 probably increased the crossing speed to thousands of knots. We are fairly uncertain about how fast the first ICBMs were, but our impression is that they traveled at an average of at least 5,000 knots and likely more like 10,000 knots.3 Uncertainty here makes little difference to measurement of discontinuities. To not be a discontinuity of more than a hundred years, the first ICBM would need to have traveled horizontally at less than 2314 knots, which seems unlikely, because that is insufficient speed to cross the ocean, assuming optimal angle of fire.4\nWe haven’t recorded the later trend, since our understanding is that modern ICBMs do not travel much faster than we think early ones may easily have done,5 so the later trend will not yield clear discontinuities, and we do not know of faster missiles than ICBMs.6 \n Figure 1: Historic speeds of sending hypothetical military payloads across the Atlantic Ocean \n Figure 2: Historic speeds of sending hypothetical military payloads across the Atlantic Ocean since 1700 (close up of Figure 1) \nDiscontinuity measurement\nUntil 1957, discontinuities are the same as those for speed of transatlantic passenger travel, since the data coincides. This gives us five discontinuities.\nWe calculate the final development, the ICBM, to probably represent a discontinuity of around 500 years, but at least 100.7 See this spreadsheet, tab ‘Payload’, for our calculation.8 \nThis gives us six greater than 10-year discontinuities in total, including five shared with transatlantic passenger travel speed. 
Three of them represent more than one hundred years of past progress:\nDate | Mode of transport | Knots | Discontinuity size (years of progress at past rate)\n1493 | Columbus’ second voyage | 5.8 | 1465\n1884 | Oregon | 18.6 | 10\n1919 | WWI Bomber (first non-stop transatlantic flight) | 106 | 351\n1938 | Focke-Wulf Fw 200 Condor | 174 | 19\n1945 | Lockheed Constellation | 288 | 25\n1957 | R-7 (ICBM) | ~10,000 | ~500\nIn addition to the sizes of these discontinuities in years, we have tabulated a number of other potentially relevant metrics here.9\nNotes", "url": "https://aiimpacts.org/historic-trends-in-long-range-military-payload-delivery/", "title": "Historic trends in long-range military payload delivery", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-02-08T02:39:27+00:00", "paged_url": "https://aiimpacts.org/feed?paged=8", "authors": ["Katja Grace"], "id": "59d207e2baa070d1326af1f285e1139e", "summary": []} {"text": "Historic trends in bridge span length\n\nWe measure eight discontinuities of over ten years in the history of longest bridge spans, four of them of over one hundred years, five of them robust to slight changes in trend extrapolation. \nThe annual average increase in bridge span length increased by over a factor of one hundred between the period before 1826 and the period after (0.25 feet/year to 35 feet/year), though there was not a clear turning point in it. \nDetails\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nBackground\nA bridge span is a section of bridge between supports.1 Bridges can have multiple spans, e.g. one for each arch.2 Bridges are often measured by their ‘main span’. \nWe investigated bridge span (rather than bridge length, mass, or carrying capacity) because it was suggested to us as discontinuous. We also expect it to be a good metric for seeing technological progress, rather than economic progress, because additional spending can probably add more spans to a structure more easily than it can make each span longer. Span length is also a less ambiguous metric than total length, since it is not always clear where a road ends and a bridge begins.\nThe Akashi Kaikyō Bridge, current record-holder for longest bridge span3\n\nTrends\nLongest bridge span length\nData\nWe gathered data for bridge span lengths from several Wikipedia lists of longest bridge spans over history for particular types of bridge, plus a few additional datapoints from elsewhere. Our data and citations are in this spreadsheet in the tab ‘five bridge types’. \nProblems, ambiguities, and limitations of the data and our collection process:\nSome span lengths are given as different lengths on different Wikipedia pages. We did not investigate this, and the one we used was arbitrary.\nWe did not find a list of historic longest bridge spans for all bridge types, so used several pages about longest bridges for particular bridge types, for instance List of Longest Suspension Bridge Spans. It is quite possible we failed to find all such lists. In the data we have though, suspension bridges are usually longer than anything else, and the Wikipedia History of Longest Suspension Bridge Spans mentions in its list a few times when non-suspension bridges are the longest bridge span in the world, suggesting that the authors of that page at least believe that at all other times the suspension bridges are the longest. 
We had already found the other bridges they mention (all arch or cantilever bridges).\nWe have not investigated the accuracy of the Wikipedia data.\nWe are unsure what exact definition of ‘bridge’ is used in any of these pages. Our impression is that they need to allow foot or vehicle traffic to cross independently (e.g. it looks like foot bridges are included, but not this cable car with a 2831m span, which it seems would hold the current record were it a bridge). We have not investigated more.\nWe treated a date of N BC as the date -N.\nFigures 1-3 show the length of the longest bridge span for five types of bridge over time. If we understand correctly, these include the longest bridges of any kind at least since around 500 AD.\nFigure 1: Entire history of longest bridge spans of five types, measured in feet. See text for further details.\nFigure 2: Figure 1, shown only up to 600 feet.\nFigure 3: Figure 1 since 1800\nDiscontinuity measurement\nTo measure discontinuities relative to past progress, we treat past progress as linear, and belonging to five different periods (i.e. three times we consider the recent trend to be sufficiently different from the older trend that we base our extrapolation on a new period).4\nUsing this method, the length of the longest bridge span has seen a large number of discontinuities (see table below). \nName | Year opened/became longest of type | Main span (feet) | Discontinuity (years)\nChakzam Bridge* | 1430 | 449 | 2230\nMenai Suspension Bridge | 1826 | 577 | 146\nGreat Suspension Bridge* | 1834 | 889 | 403\nWheeling Suspension Bridge | 1849 | 1010 | 70\nNiagara Clifton Bridge* | 1869 | 1260 | 14\nGeorge Washington Bridge* | 1931 | 3501 | 132\nGolden Gate Bridge | 1937 | 4200 | 19\nAkashi-Kaikyo Bridge* | 1998 | 6532 | 56\n*Entry was more robust to informal experimentation with different linear extrapolations\nDeciding what to treat as the previous trend at any point is hard in this dataset, because the shape of the trend isn’t close to being exponential or linear. The sizes of the discontinuities and even the particular bridges that count as notably discontinuous are not very robust to different choices. In a small amount of experimentation with different linear trends, five bridges were always discontinuities, marked with * in the above table. That the overall trend is marked by many discontinuities seems robust.\nIn addition to the size of these discontinuities in years, we have tabulated a number of other potentially relevant metrics here.5\nChange in rate of progress\nThe annual average increase in bridge span length increased by over a factor of one hundred between the period before 1826 and the period after (0.25 feet/year to 35 feet/year), though there was not a clear turning point in it. 
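As a rough illustration of how that before-and-after comparison works, here is a minimal sketch using endpoints taken from the table above; the 0.25 and 35 feet/year figures themselves come from the linked spreadsheet, and the endpoint choice here is ours.

```python
def avg_annual_increase(year0, span0, year1, span1):
    """Average growth in the record span, in feet per year, between two record holders."""
    return (span1 - span0) / (year1 - year0)

# Endpoints from the table above (illustrative; not the spreadsheet calculation)
before_1826 = avg_annual_increase(1430, 449, 1826, 577)    # ~0.3 feet/year
after_1826 = avg_annual_increase(1826, 577, 1998, 6532)    # ~35 feet/year
```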
See spreadsheet for calculation (tab: ‘Five bridge types (longest)’)\nNotes\n", "url": "https://aiimpacts.org/historic-trends-in-bridge-span-length/", "title": "Historic trends in bridge span length", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-02-08T02:39:02+00:00", "paged_url": "https://aiimpacts.org/feed?paged=8", "authors": ["Katja Grace"], "id": "a4d9b9b072ad6d03e769e73e5f2b751f", "summary": []} {"text": "Historic trends in light intensity\n\nMaximum light intensity of artificial light sources has discontinuously increased once that we know of: argon flashes represented roughly 1000 years of progress at past rates.\nAnnual growth in light intensity increased from an average of roughly 0.4% per year between 424 BC and 1943 to an average of roughly 190% per year between 1943 and the end of our data in 2008.\nDetails\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nBackground\nThat which is uncited on this page is our understanding, given familiarity with the topic.1\nElectromagnetic waves (also called electromagnetic radiation) are composed of oscillating electric and magnetic fields. They span in wavelength from gamma rays with wavelengths on the order of 10^-20 meters to radio waves with wavelengths on the order of kilometers. The wavelengths from roughly 400 to 800 nanometers are visible to the human eye, and usually referred to as light waves, though the entire spectrum is sometimes referred to as light, especially in the context of physics. These waves carry energy, and their usefulness and the effect that they have on matter are strongly affected by their intensity, or the amount of energy that they carry to a given area per time. Intensity is often measured in watts per square centimeter (W/cm^2), and it can be increased either by increasing the power (energy per time, measured in watts) or focusing the light onto a smaller area.\nElectromagnetic radiation is given off by all matter as thermal radiation, with the power and wavelength of the waves determined by the temperature and material properties of the matter. When the matter is hot enough to emit visible light, as is the case with the tungsten filament in a light bulb or the sun, the process is referred to as incandescence. Processes which produce light by other means are commonly referred to as luminescence. Common sources of luminescence are LEDs and fireflies.\nThe total power emitted by an incandescent source of light is given by the Stefan–Boltzmann law.2\nLight intensity is relevant to applications such as starting fires with lenses, cutting with lasers, plasma physics, spectroscopy, and high-speed photography. \nHistory of progress\nFocused sunlight and magnesium\nFor much of history, our only practical sources of light have been the sun and burning various materials. In both cases, the light is incandescent (produced by a substance being hot), so light intensity depends on the temperature of the hot substance. It is difficult to make something as hot as the sun, so difficult to make something as bright as sunlight, even if it is very well focused. 
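To make the bound referred to here concrete, the following is standard blackbody physics rather than anything taken from the original write-up: the Stefan–Boltzmann law gives the total power radiated by an incandescent source, and conservation of radiance means that passive optics cannot focus sunlight to an intensity much above the exitance of the sun’s surface.

```latex
% Stefan–Boltzmann law: total power radiated by a blackbody of area A at temperature T
\[ P = \sigma A T^{4}, \qquad \sigma \approx 5.67 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}} \]
% Approximate focusing bound for sunlight (T_sun ~ 5800 K):
\[ I_{\max} \lesssim \sigma T_{\mathrm{sun}}^{4} \approx 6 \times 10^{3}\ \mathrm{W/cm^{2}} \]
```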
We do not know how close the best focused sunlight historically was to the practical limit, but focused sunlight was our most intense source of light for most of human history.\nThere is evidence that people have been using focused sunlight to start fires for a very long time.3 There is further evidence that more advanced lens technology has existed for over 1000 years4, so that humans have been able to focus sunlight to near the theoretical limit5 for a very long time. Nonetheless, it appears that nobody fully understood how lenses worked until the 17th century, and classical optics continued to advance well into the 19th and 20th century. So it seems likely that there were marginal improvements to be made in more recent times. In sum, we were probably slowly approaching an intensity limit for focusing sunlight for a very long time. There is no particular reason to think that there were any sudden jumps in progress during this time, but we have not investigated this. \nMagnesium is the first combustible material that we found that we are confident burns substantially brighter than crudely focused sunlight, and for which we have an estimated date of first availability. It was first isolated in 1808,6 and burns with a temperature of 3370 K.7 Magnesium was bright enough and had a broad enough spectrum to be useful for early photography.\nMercury Arc Lamp\nThe first arc lamp was invented as part of the same series of experiments that isolated magnesium. Arc lamps generate light by using an electrical current to generate a plasma8, which emits light due to a combination of luminescence and incandescence. Although they seem to have been the first intense artificial light sources that do not rely on high combustion temperature9, they do not seem to have been brighter than a magnesium flame10 in the early stages of their development. Nonetheless, by the mid 1930s, mercury arc lamps, operated in glass tubes filled with particular gases, were the brightest sources available that we found. Our impression is that progress was incremental between their first demonstration around 1800 and their implementation as high intensity sources in the 1930s, but we have not investigated this thoroughly. \nArgon Flashes\nArgon flashes were invented during the Manhattan project11 to enable the high speed photography that was needed for understanding plutonium implosions. They are created by surrounding a high explosive with argon gas. The shock from the explosive ionizes the argon, which then gives off a lot of UV light as it recombines. The UV light is absorbed by the argon, and because argon has a low heat capacity (that is, takes very little energy to become hot), it becomes extremely hot, emitting ~25000 Kelvin blackbody radiation. This was a large improvement in intensity of light from blackbody radiation. There does not seem to have been much improvement in blackbody sources in the 60 years since. \nLasers\nLasers work by storing energy in a material by promoting electrons into higher energy states, so that the energy can then be used to amplify light that passes through the material. Because lasers can amplify light in a very controlled way, they can be used to make extremely short, high energy pulses of light, which can be focused onto a very small area. 
Because lasers are not subject to the same thermodynamic limits as blackbody sources, it is possible to achieve much higher intensities, with the current state of the art lasers creating light 16 orders of magnitude more intense than the light from an argon flash.\nFigure 1: Industrial laser12\nTrends\nLight intensity\nWe investigated the highest publicly recorded light intensities we could find, over time.13 Our estimates are for all light, not just the visible spectrum.\nData\nOne of our researchers, Rick Korzekwa, collected estimated light intensities produced by new technologies over time into this spreadsheet. Many sources lacked records of the intensity of light produced specifically, so the numbers are often inferred or estimated from available information. These inferences rely heavily on subject matter knowledge, so have not been checked by another researcher. Figures 2-3 illustrate this data.\nPre-1808 trend\nWe do not start looking for discontinuities until 1943, though we have data from beforehand, because our data is not sufficiently complete to distinguish discontinuous progress from continuous, only to suggest the rough shape of the longer term trend.\nTogether, focused sunlight and magnesium give us a rough trend for slow long term progress, from lenses focusing to the minimum intensity required to ignite plant material in ancient times to intensities similar to a camera flash over the course of at least two millenia. On average during that time, the brightest known lights increased in intensity by a factor of 1.0025 per year (though we do not know how this was distributed among the years). \nDue to our uncertainty in the early development of optics for focusing sunlight, the trend from 424 BC to 1808 AD should be taken as the most rapid progress that we believe was likely to have occurred during that period. That is, we look at the earliest date for which we have strong verification that burning glasses were used, and assuming these burning glasses produced light that was just barely intense enough to start a fire. So progress may have been slower, if more intense light was available in 424 BC than we know about, however progress could only have been faster on average if burning glass (that could actually burn) didn’t exist in 424 BC, or if there were better things available in 1808 than we are aware, both of which seem less likely than that technology was better than that in 424 BC.\n\nFigure 2: Estimated light intensity for some historic brightest artificial sources known to us. Note that the very earliest instances of a given type are not necessarily represented, for instance our understanding is that dimmer arc lamps existed in the early 1800s.\nFigure 3: Close up of Figure 2, since 1800\nDiscontinuity measurement\nWe treat the rate of previous progress as an exponential between the burning glass in 424BC and the first argon candle in 1943. At that point progress has been far above that long term trend for two points in a row, so we assume a new faster trend and measure from the 1936 arc lamp. In 1961, after the trend again has been far surpassed for two points, we start again measuring from the first laser in 1960. 
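As a minimal sketch of how a jump gets converted into ‘years of progress at previous rates’ under an exponential past trend (illustrative code with made-up numbers, not the calculation in our spreadsheet):

```python
import numpy as np

def discontinuity_years(past_years, past_values, new_year, new_value):
    """Fit an exponential (log-linear) trend to past data, find the year in which
    that trend would have reached new_value, and report how far ahead of that
    schedule the new data point arrived."""
    slope, intercept = np.polyfit(past_years, np.log(past_values), 1)
    year_trend_reaches_value = (np.log(new_value) - intercept) / slope
    return year_trend_reaches_value - new_year

# Made-up example: a new source far above a slowly growing pre-existing trend
print(discontinuity_years([1900, 1920, 1936], [1.0, 2.0, 4.5], 1943, 1e5))
```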
See this project’s methodology page for more detail on what we treat as past progress.\nGiven these choices, we find one large discontinuity from the first argon candle in 1943 (~1000 years of progress in one step), and no other discontinuities of more than ten years since we begin searching in 1943.14\nIn addition to the size of these discontinuity in years, we have tabulated a number of other potentially relevant metrics here.15\nNote on mercury arc lamp\nThe 1936 mercury arc lamp would be a large discontinuity if there were no progress since 1808. Our impression from various sources is that progress in arc lamp technology was incremental between their first invention at the beginning of the 19th century and the bright mercury lamps that were available in 1936. We did not thoroughly investigate the history and development of arc lamps however, so do not address the question of the first year that such lamps were available or whether such lamps represented a discontinuity.\nNote on argon flash\nThe argon flash seems to have been the first light source available that is brighter than focused sunlight, after centuries of very slow progress, and represents a large discontinuity. As discussed above, because we are less certain about the earlier data, our methods imply a relatively high estimate on the prior rate of advancement, and thus a low estimate of the size of the discontinuity. So the real discontinuity is likely to be at least 996 years (unless for instance there was accelerating progress during that time that we did not find records of).\nChange in rate of progress\nLight intensity saw a large increase in the rate of progress, seemingly beginning somewhere between the arc lamps of the 30s and the lasers of the 60s. Between 424BC and 1943, light intensity improved by around 0.4% per year on average, optimistically. Between 1943 and 2008, light intensity grew by an average of around 190% per year.16\nThe first demonstrations of working lasers seems to have prompted a flurry of work. For the first fifteen years, maximum light intensity had an average doubling time of four months, and over roughly five decades following lasers, the average doubling time was a year.17\nDiscussion\nFactors of potential relevance to causes of abrupt progress\nTechnological novelty\nOne might expect discontinuous progress to arise from particularly paradigm-shifting insights, where a very novel way is found to achieve an old goal. This has theoretical plausibility, and several discontinuities that we know of seem to be associated with fundamentally new methods (for instance, nuclear weapons came from a shift to a new type of energy, high temperature superconductors with a shift to a new class of materials for superconducting). So we are interested in whether discontinuities in light intensity are evidence for or against such a pattern.\nThe argon flash was a relatively novel method rather than a subtle refinement of previous technology, however it did not leverage any fundamentally new physics. Like previous light sources, it works by adding a lot of energy into a material to make it emit light in a relatively disorganized and isotropic manner. 
Achieving this by way of a shockwave from a high explosive was new.\nIt is unclear whether using an explosive shockwave in this way had not been done previously because nobody had thought of it, or because nobody wanted a shorter and brighter flash of light so much that they were willing to use explosives to get it.18\nThe advent of lasers did not produce a substantial discontinuity, but they did involve an entirely different mechanism for creating light to previous technologies. Older methods created more intense light by increasing the energy density of light generation (which mostly meant making the thing hotter), but lasers do it by creating light in a very organized way. Most high intensity lasers take in a huge amount of light, convert a small portion of it to laser light, and create a laser pulse that is many orders of magnitude more intense than the input light. This meant that lasers could scale to extremely high output power without becoming so hot that the output is that of a blackbody.\nEffort directed at progress on the metric\nThere is a hypothesis that metrics which see a lot of effort directed at them will tend to be more continuous than those which are improved as a side-effect of other efforts. So we are interested in whether these discontinuities fit that pattern.\nThough there was interest over the years in using intense light as a weapon19, and for early photographers, who wanted safe, convenient, short flashes that could be fired in quick succession, there seems to have been relatively little interest in increasing the peak intensity of a light source. The US military sought bright sources of light for illuminating aircraft or bombing targets at night during World War II. But most of the literature seems to focus on the duration, total quantity of light, or practical considerations, with peak intensity as a minor issue at most.\nThe argon flash appears to have been developed more as a high peak power device than as a high peak intensity device.20 It did not matter if the light could be focused to a small spot, so long as enough light was given off during the course of an experiment to take pictures. Still, you can only drive power output up so much before you start driving up intensity as well, and the argon flash was extremely high power.\nPossibly argon flashes were developed largely because an application appeared which could make use of very bright lights even with the concomitant downsides. \nThere seems to have been a somewhat confusing lack of interest in lasers, even after they looked feasible, in part due to a lack of foresight into their usefulness. Charles Townes, one of the scientists responsible for the invention of the laser, remarked that it could have been invented as early as 193021, so it seems unlikely that it was held up by a lack of understanding of the fundamental physics (Einstein first proposed the basic mechanism in 191722). Furthermore, the first paper reporting successful operation of a laser was rejected in 1960, because the reviewers/editors did not understand how it was importantly different from previous work.23 \nAlthough it seems clear that the scientific community was not eagerly awaiting the advent of the laser, there did seem to be some understanding, at least among those doing the work, that lasers would be powerful. Townes recalled that, before they finished building their laser, they did expect to “at least get a lot of power”24, something which could be predicted with relatively straightforward calculations. 
Immediately after the first results were published, the general sentiment seems to have been that it was novel and interesting, but it was allegedly described as “a solution in search of a problem”.25 Similar to the argon flash, it would appear that intensity was not a priority in itself at the time the laser was invented, and neither were any of the other features of laser light that are now considered valuable, such as narrow spectrum, short pulse duration, and long coherence length.\nMost of the work leading to the first lasers was focused on the associated atomic physics, which may help explain why the value of lasers for creating macroscopic quantities of light wasn’t noticed until after they had been built. \nIn sum, it seems the argon flash and the laser both caused large jumps in a metric that is relevant today but that was not a goal at the time of their development. Both could probably have been invented sooner, had there been interest.\nPredictability\nOne reason to care about discontinuities is because they might be surprising, and so cause instability or problems that we are not prepared for. So we are interested in whether discontinuities were in fact surprising.\nIt is unclear how predictable the large jump from the argon flash was. Our impression is that without knowledge of the field, it would have been difficult to predict the huge progress from the argon flash ahead of time. High explosives, arc lamps, and flash tubes all produced temperatures of around 4,000K to 5,000K. Jumping straight from that to >25,000K would probably have seemed rather unlikely.\nHowever as discussed above, it seems plausible that the technology allowing argon flashes was relatively mature earlier on, and therefore that they might have been predictable to someone familiar with the area.\nNotes", "url": "https://aiimpacts.org/historic-trends-in-light-intensity/", "title": "Historic trends in light intensity", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-02-08T02:38:35+00:00", "paged_url": "https://aiimpacts.org/feed?paged=9", "authors": ["Katja Grace"], "id": "3e69e88384f678615121d1e87fe5e454", "summary": []} {"text": "Historic trends in book production\n\nThe number of books produced in the previous hundred years, sampled every hundred or fifty years between 600AD to 1800AD contains five greater than 10-year discontinuities, four of them greater than 100 years. The last two follow the invention of the printing press in 1492. \nThe real price of books dropped precipitously following the invention of the printing press, but the longer term trend is sufficiently ambiguous that this may not represent a substantial discontinuity.\nThe rate of progress of book production changed shortly after the invention of the printing press, from a doubling time of 104 years to 43 years.\nDetails\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nBackground\nAround 1439, Johannes Gutenburg invented a machine for making books commonly referred to as “the printing press”. The printing press was used to quickly copy pre-created sheets of letters of ink onto a print medium.1 Presses that stamped paper with carved blocks of wood covered in ink were already being used in Europe,2 but Gutenburg made several major improvements on existing methods, notably creating the hand mould, a device which allowed for quickly creating sheets of inked letters rather than carving them out of wood. 
The printing press allowed for the quick and cheap production of printed books like never before.3\nReplica of the Gutenberg Printing Press4\nTrends\nWe looked primarily at two different metrics– the rate of book production in Western Europe and the real price of books in England. We chose these two because they were some of the only printing-related data sources which had data that went back several centuries before the invention of the printing press.\nHad the data been available, we would like to have looked at some metric correlated clearly with innovations in the writing / printing process — e.g. the number of pages produced per worker per hour. Then we could check whether the printing press represented a discontinuity relative to earlier innovations (e.g., the pecia system for hand-copying manuscripts).5\nUnfortunately, neither the rate of book production nor the price of book data we have correlate well with innovations in the writing / printing process. The authors of our rate of book production data claim that most of the variation in the pre-printing press numbers is explained by factors which are not innovation or close proxies to innovation.6 Our data on the price of books is similarly unhelpful, as the early price data is too sparse to be meaningful.\nIn addition to the two metrics described above, we looked cursorily at a few metrics with no early data which changed drastically as a result of the printing press– the number of unique titles printed per year, the variation of genres in books, the price of books in the Netherlands, and the total consumption of books.\nRate of book production in Western Europe\nData collection\nOur data for the rate of book production come from estimates of Europe-only production generated in a 2009 paper by historians Eltjo Buringh and Jan Luiten Van Zanden.7 Rate data is represented as the number of books produced in the previous 100 years at various points in time.\nWhen we use the term book, we mean it to refer to any copy of a written work, whether hand-copied manually or produced via some kind of printing technique. The paper separates book production into estimates of manuscript and printed book production, where the production of printed books starts only after the printing press is invented. We will also use the terms manuscript and printed book to talk about the data, but it’s unclear to us if the paper means manuscript to mean “any book not made using a Gutenburg-era printing press” or “any book transcribed by hand”. 
At one point the authors sum these two estimates into a single graph of production per capita,8 suggesting that the combination of manuscript and printed book data should cover all books.\nThe paper’s estimates for manuscript production are constructed by taking an existing sample of manuscripts and then attempting to correct for its geographical and temporal biases.9 Estimates for book production are constructed by counting new titles in library catalogues and multiplying by estimates for average prints per title at a given time.10\nThe estimates of manuscript production seem extremely non-robust given that large number of correction factors applied.11 The estimates of book production seem somewhat more robust, but should be taken as a lower bound as the authors did not correct for lost books and have estimated the average number of prints per title conservatively.12\nData\nFigure 1a displays the raw data for rate of book production on a log scale, taken from the data in the paper described above and compiled in this spreadsheet. Each data point represents the total number of books produced in the previous 100 years. \nFigure 1a: Book production in Western Europe\nFigure 1b displays the same data as Figure 1a along with our interpretation. \nLooking at the data, we assume an exponential trend up until 1500, and another one after that.13 The blue line is the average log rate of the rate of book production before the invention of the printing press (just manuscripts); the red line is the average log rate of the rate of book production after the invention of the printing press (manuscripts + printed books).\nFigure 1b: Rate of book production in Western Europe. Blue and red lines are the average log rates of the rate of book production before and after the printing press. Grey points are projections of the average log rate before the printing press.\nGrey points shown after 1500 reflect projected manuscript and therefore book production had the printing press not been invented. In practice, the actual number of manuscripts produced after 1500 were very small and not presented in the data.\nDiscontinuity measurement\nIf we just look at the trend of book production per past 100 years, measured once every 100 years before 1500, and then once every 50 years afterwards, we can calculate discontinuities of sizes 161 years in 900, 134 years in 1200, 23 years in 1300, 180 years in 1500, and 138 years in 1550.14 This is obviously a strange kind of trend–a discontinuity of one hundred years in a metric with datapoints every hundred years might mean nothing perceptible at the one-year scale. So in particular, these discontinuities do not tell us much about whether there would be discontinuities in a more natural metric, such as annual book production.\nChanges in the speed of progress\nThere was a marked change in progress in the rate of book production with the invention of the printing press, corresponding to a change in the doubling time of the rate of book production from 104 years to 43 years.15\nInterpreting this rate of change on the graph, before the invention of the printing press, the total rate of book production, which consists entirely of manuscripts, follows the exponential line shown in blue. The invention of the printing press in 1439 allows for mass production of printed books, causing the rate of book production to veer sharply off the existing exponential line, shown as the first point in red. 
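As a small sketch of how doubling times like the 104- and 43-year figures above can be backed out of two points on an exponential trend (illustrative numbers, not the Buringh and Van Zanden data):

```python
import math

def doubling_time(year0, value0, year1, value1):
    """Doubling time (in years) implied by exponential growth between two data points."""
    growth_rate = math.log(value1 / value0) / (year1 - year0)  # continuous rate per year
    return math.log(2) / growth_rate

# Production doubling over a century implies a 100-year doubling time
print(doubling_time(1300, 1_000_000, 1400, 2_000_000))  # 100.0
```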
Note that our underlying data sources are non-robust, particularly for manuscript data pre-printing press, so the magnitude of this change in rate of progress may be under or overstated. \nDiscussion of causes\nThe change in the doubling time of the rate of book production caused by the printing press may reflect a large change in the factors that drove book production. \nIn their paper, Buringh and Van Zanden note that in the Middle Ages, 60% of the variation in book production is explained by the number of universities, the number of monasteries, and urbanization.16 They produce the following graph, correlating monastery numbers and early book production:\nFigure 2: Buringh and Van Zanden’s figure of the relationship between book production and monestaries\nBy contrast, after the printing press was invented, Buringh and Van Zanden attribute a much more important role to individual book consumption and the forces of the market:\nHow to explain the significant increase in book production and consumption in the centuries following the invention of moveable type printing in the 1450s? The effect of the new technology (and import technological changes in the production of paper) was that from the 1470s on, book prices declined very rapidly. This had number of effects: consumption per literate individual increased, but it also became more desirable and less costly to become literate. Moreover, economies of scale in the printing industry led to further price reductions stimulating even more growth in book consumption.\nIt seems plausible that the move between exponential curves caused by the printing press was a shift from an exponential curve that reflected the growth of monasteries, universities, and cities to an exponential curve that reflected the growth of a complicated set of market forces.\nReal price of books in England\nData collection\nWe took data from a 2004 paper written by economic historian Gregory Clark, who took data from historic records of price quotes,17 though data before 1450 is based on just 32 total price quotes, and Clark notes that “the prices vary a lot by decade since it is hard to control for the quality and size of the manuscript.”18 As such, we should not take much meaning out of the individual data points before 1450 or interpret the period between 1360 and 1500 as a rising trend.\nClark’s paper reports an index of the nominal price of books in Table 9 of his paper. To instead get an index of the real price of books,19 we divided each nominal price by Clark’s reported “cost of living” for each year (x 100), which was the amount of money paid by a relatively prosperous consumer for the same bundle of goods every year.20\nData\nFigure 3 is a graph of (an index of) the real price of books in England, generated from the data described above and compiled in this spreadsheet. Each point represents the amount of money in each year needed to buy some bundle of books assuming you would pay 100 for that same bundle of books in 1860.\nEconomist Timothy Irwin, looking at this same dataset, claims that the drop in price around 1350 was due to knowledge about paper-making finally making its way to England: “As time passed, improvements in technology and the gradual spread of literacy reduced these obstacles to effective transparency. In particular, the diffusion of knowledge about paper-making (from around 1150) and then printing (from around 1450) dramatically reduced the price of books. 
(See Figure 1 for an estimate of the decline in England).”21\n‘Figure 1’ in the quote above refers to a graph of the real price of books that uses the same data source and is identical to the one we generated. His claim seems plausible, but we are not confident about it given how sparse and noisy the early data is and given our lack of precise date information on how paper-making spread in England.\nDiscontinuity measurement\nThe graph contains two major drops– one as a result of the printing press, and one claimed to be the result of paper replacing parchment in England. Looking at just the set of blue data points between these two drops, we can see that they are clustered around some range of values. The data in this range is too noisy and sparse22 to generate a meaningful rate of progress, but it seems clear that for a wide variety of plausible rates, the printing press represented a discontinuity in the real price of books when compared to the trend in book prices after the spread of paper-making.\nHowever, if you take the past trend to include the price of books before paper-making, then there is no clear discontinuity– the price of printed books could be part of an existing trend of dropping prices that started with paper-making. We also believe the data here is too poor to draw firm conclusions.\nFigure 3: Real price of books in England. All prices are relative to a book in 1860 costing 100, so a real price of 1800 would be 18x as expensive as a book in 1860.\nDiscussion of causes\nWhether or not it counts as a substantial discontinuity relative to the longer term trend, the printing press produced a sharp drop in the real price of books. This was because their price was largely driven by labor costs, which went down sharply (one author estimates by a factor of 341) when a laborer could use a machine to print massive numbers of books rather than manually transcribing each copy.23\nOther noteworthy metrics\nMany historians associate the invention of the printing press with other unsurprising book-related changes in the world, including:24\nAn increase in the productivity of book production, i.e. the ratio between the wage of a copy producer and the price of a standard book. In particular, one estimate measures a 20-fold increase in the productivity in the first 200 years after the invention.25 Another estimate guesses that there was a 340-fold decrease in the cost per book as a direct result of the printing press.26\nA sharp decrease in the real price of books. One estimate of the real price of books in the Netherlands suggests a ~5-fold decrease between 1460 and 1550.27\nAn increase in genre-variety of books, and in particular a shift away from theological texts and an increase in the amount of fiction.28\nAn increase in the total consumption of books, likely as a result of their declining price and increased literacy levels.29\nMost of these changes are gradual over at least a century, rather than involving a sharp change that might be a large discontinuity. Such gradual changes might suggest a sharper change in some underlying technology though, for instance. The fall in the price of Dutch books was relatively abrupt, but the data lacks a trend leading up to the printing press.\nThe increases in the number of genres and unique titles published suggest that there was a larger amount of information available in printed form. Decreased prices and increased consumption of books suggests that this information was easier to access than before. 
These things suggest there might have been an interesting change in availability of information in general; however, we do not know enough about the past trend to say whether this was likely to be discontinuous. \nNotes", "url": "https://aiimpacts.org/historic-trends-in-book-production/", "title": "Historic trends in book production", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-02-08T01:56:44+00:00", "paged_url": "https://aiimpacts.org/feed?paged=9", "authors": ["Katja Grace"], "id": "3d20becfc5a696a240f3909211a74383", "summary": []} {"text": "Historic trends in telecommunications performance\n\nPublished February 2020\nJanuary 2023 note: This page contains errors that have not yet been corrected. Our overall conclusion, that there were likely no discontinuities in our metrics for telecommunications performance, is likely unaffected by these errors.\nThere do not appear to have been any greater than 10-year discontinuities in telecommunications performance, measured as: \n\nbandwidth-distance product for all technologies 1840-2015\nbandwidth-distance product for optical fiber 1975-2000\ntotal bandwidth across the Atlantic 1956-2018\n\nRadio does not seem likely to have represented a discontinuity in message speed.\nDetails\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nBackground\nFiber optic cables were first used for telecommunications in the 1970s and 80s.1 While previous telecommunications technology sent information via electricity, fiber optic cables instead sent information via light. Though electric signals travel at around 80% of the speed of light, and optical signals within a fiber travel at only roughly 70% of the speed of light, fiber optics have other benefits which add up to a considerable advantage.2\nFiber optic cable: laser light shining on one end comes out of the other.3\nTrends\nBandwidth-distance product, usually given in bits*kilometers/second, is both the most apparently relevant metric of progress in telecommunications4 5, and the one that was suggested to us as discontinuous. We also considered data transfer rate (measured in Mbps) for transatlantic cables, as a metric which more closely tracks the performance of cables that were actually in use, with the Atlantic serving as a distance constraint. We found separate data for bandwidth-distance product across all technologies, in fiber optics alone, and crossing the Atlantic, so we consider each of these metrics.\nBandwidth-distance product across all technologies 1840-2015\nData\nWe used a tool for extracting data from figures6 to extract data from Figure 8.2 from Agrawal, 2016,7 shown in Figure 1. We put the data into this spreadsheet. Figures 2 and 3 show this data without a trendline, and the log of the data on a log axis with a straight trendline. \nFigure 1 below shows progress in bandwidth-distance product across all technologies on a log scale.\nFigure 1: Growth in bandwidth-distance product across all telecommunications during 1840-2015 from Agrawal, 2016\nFigure 2: Agrawal’s data, manually extracted, without trendline. \nFigure 3: Log of Agrawal’s data, shown on a log axis. The linear fit says that the data is well modeled as double exponential. 
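As a rough illustration of what ‘double exponential’ means here, the following sketch (with hypothetical data, not Agrawal’s numbers) checks whether log(value) itself grows exponentially, i.e. whether log(log(value)) is close to linear in time:

```python
import numpy as np

def double_exponential_fit(years, values):
    """Fit a line to log(log(values)) versus year and report the fit quality (R^2).
    A good linear fit here is what 'well modeled as double exponential' means."""
    years = np.asarray(years, dtype=float)
    loglog = np.log(np.log(np.asarray(values, dtype=float)))  # requires values > 1
    slope, intercept = np.polyfit(years, loglog, 1)
    predicted = slope * years + intercept
    r_squared = 1 - np.sum((loglog - predicted) ** 2) / np.sum((loglog - loglog.mean()) ** 2)
    return slope, intercept, r_squared
```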
\nDiscontinuity measurement\nIf we treat the previous rate of progress at each point as exponential (as Agrawal does, with two different regimes), then optical fibers appear to represent a 27-year discontinuity.8 The following 2-3 developments are also substantial discontinuities, depending on whether one breaks the data into multiple trends. As shown in Figure 3, however, the log of the data fits an exponential trend well. If we extrapolate progress expecting the log to be exponential, there are no discontinuities of more than ten years in this data. This seems like the better fit, so we take it that there are no discontinuities.\nAgrawal’s data also does not include minor improvements on the broad types of systems mentioned, which presumably occurred. In particular, our impression is that there were better coaxial cables as well as worse optical fibers, such that the difference when fiber optics appeared was probably not more than a factor of two,9 10 or about six years of exponential progress at the rate seemingly prevailing around the time of coaxial cables.11\nBandwidth-distance product in fiber optics alone 1975-2000\nData\nWe used a tool for extracting data from figures to extract data from Figure 8.8 from Agrawal, 201612 and put it into this spreadsheet.\nFigure 4 below shows bandwidth-distance product on a log scale in fiber optics alone, from Agrawal, 2016.\nFigure 4: Progress in bandwidth-distance product in fiber optics alone, from Agrawal, 2016 (Note: 1 Gb = 10^9 bits) \nDiscontinuity measurement\nWe chose to model this data as a single exponential trend.13 Compared to previous rates in this trend, there are no greater than ten-year discontinuities in bandwidth-distance product in fiber optics alone.14\nBandwidth for Transatlantic Cables 1956-2018\nData\nFigure 5 shows bandwidth of transatlantic cables according to our own calculations, based on data we collected mainly from Wikipedia.15\nFigure 5: Transatlantic cable bandwidth of all types. Pre-1980 cables were copper, post-1980 cables were optical fiber.\nDiscontinuity measurement\nWe treat this data as a single exponential trend.16 The data did not contain any discontinuities of more than ten years.18\nThere was a notable temporary increase in the growth rate between 1996 and 2001. We speculate that this and the following 15 years of stagnation may be a result of heavy telecommunications investment during the dot com bubble.19\nNotes", "url": "https://aiimpacts.org/historic-trends-in-telecommunications-performance/", "title": "Historic trends in telecommunications performance", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-02-08T01:56:41+00:00", "paged_url": "https://aiimpacts.org/feed?paged=9", "authors": ["Katja Grace"], "id": "13e62783ec65e330e7d001f6507fc38e", "summary": []} {"text": "Historic trends in slow light technology\n\nPublished Feb 7 2020\nGroup index of light appears to have seen discontinuities of 22 years in 1995 from Coherent Population Trapping (CPT) and 37 years in 1999 from EIT (condensate). Pulse delay of light over a short distance may have had a large discontinuity in 1994, but our data is not good enough to judge. After 1994, pulse delay does not appear to have seen discontinuities of more than ten years. 
\nDetails\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nBackground\nThat which is uncited on this page is our understanding, given familiarity with the topic.1 \n“Slow Light” is a phenomenon where the speed at which a pulse of light propagates through a medium is greatly reduced. This has potential applications for lasers, communication, and cameras.2\nThe speed of propagation of the light through the medium is referred to as the ‘group velocity’ of the light, and it is a function of the medium’s refractive index and dispersion (the rate at which the refractive index changes with the frequency of the light).\nIn most materials—for instance glass, air, or water—the dispersion is low enough that the group velocity is simply the speed of light divided by the index of refraction. In order to slow down light by more than roughly a factor of 3, physicists needed to create optical media with a greater dispersion in the frequency range of interest. The challenge in this was doing so without the medium absorbing most of the light, since most materials exhibit maximum dispersion under conditions of high absorption. This was resolved using exotic phases of matter and sophisticated methods for inducing transparency in them.\nSummary of historic developments\nDiamonds have a very high index of refraction, and the ability to cut and polish them to achieve good optical quality has existed for hundreds of years3. However there were no light sources available for studying low group velocities until the 1960s, so recorded progress begins then. The first pulsed sources of light that could reasonably be used for the investigation of slow light came about in 1962 with the invention of Q-switching, which is a method for generating series of short light pulses from a laser. We do not know whether early Q-switched lasers could be used for this work, but doubt that any earlier light sources were suitable.\nFollowing Q-switching, progress for slowing light proceeded roughly in four stages:\n1. High index materials: For instance, diamonds. There may have been marginally better materials, but we did not investigate because our understanding is that they should at most represent a few tens of a percent of difference, and later gains represent factors of millions to trillions.\n2. High absorption media: Materials with very low group velocity at a particular wavelength range, at the cost of very high absorption (losing >99% of the light over <100 microns).\n3. Induced transparency: Materials with a narrow window of transparency in spectral regions of low group velocity. This led to rapid increases in total delay of a pulse, both through longer propagation distance and lower speeds.\n4. Stopped light: Eventually, group velocity had been lowered to the point that it was possible to destroy the pulse, but store enough information in the medium about it to reconstruct it after some delay. There is room for debate about whether this is really the same pulse of light, however there are applications in which treating it as such is reasonable. We view this as progress in pulse delay, but not group index. After the invention of stopped light, slow light was no longer a major target for progress.\nTrends\nThere are several metrics that one might plausibly be interested in in this area. Group velocity is a natural choice because it is simple, but it trades off against absorption. So it is relatively easy to make a medium that has a very low group velocity, but it will absorb too much light to be useful. 
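For reference, the quantities being traded off here can be written out explicitly (these are standard optics relations, not taken from the original page): the group velocity and the group index are

\[ v_g = \frac{c}{n_g}, \qquad n_g = n + \omega\,\frac{dn}{d\omega} = n - \lambda\,\frac{dn}{d\lambda}, \]

so a medium can have a very large group index, and hence very slow pulses, when its dispersion dn/dω is large in the frequency range of interest, even if its ordinary refractive index n is modest; as noted above, though, large dispersion tends to come together with large absorption.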
Because of this, research was more plausibly aimed at some combination of low  group velocity and low absorption.\nOne simple way to combine absorption and group velocity into a single metric is group velocity with an absorption criterion (say, lowest group velocity in a medium that transmits at least 1% of the light). Another is total time delay of the pulse by the medium, since longer delays can be achieved either by slowing down the pulse more, or slowing it down over a longer distance (requiring lower absorption). Pulse delay seems to have been a goal for researchers, suggesting it tracks something important, making it more interesting from our perspective.\nWe chose to investigate pulse delay and group index (the speed of light divided by the group velocity).4\nPulse delay and group index\nData\nWe collected data from a variety of online sources into this spreadsheet. The sheet shows progress in pulse delay and group index over time as well as our source for each data point, and calculates unexpected progress at each step. Figures 1-3 illustrates these trends.\nFigure 1: Progress in delay of a pulse of light over a short distance\n Figure 2: Progress in group index of a material (speed of light divided by speed of light in that material) \nFigure 3: Progress in pulse delay and group index. “Human speed” shows the rough scale of motion familiar to humans.\nDiscontinuity measurement\nFor comparing points to ‘past rates of progress’ we treat past progress for both pulse delay and group index as exponential, changing to a new exponential regime near 1995 in both cases.5 \nCompared to these rates of past progress, the 1994 point—EIT (hot gas)—could be a very large discontinuity in pulse delay, if there was a small amount of progress prior to it. There probably was, however our estimates of the points leading up to it are so uncertain that it isn’t clear that there was any well-defined progress, and if there was we have not measured it. So we do not attempt to judge whether there is a discontinuity there. Aside from that, pulse delay saw no discontinuities of more than ten years.\nGroup index has discontinuities of 22 years in 1995 from CPT and 37 years in 1999 from EIT (condensate). \nIn addition to the size of these discontinuities in years, we have tabulated a number of other potentially relevant metrics here.6\nDiscussion of causes\nThese trends are short and not characterized by a clearly established rate prior to any potential change of rate, making changes in apparent rate relatively unsurprising. This means they are both less in need of explanation, and less informative about what to expect in cases where a technology does have a better-established progress trend.\nIncreasing group index of light does not appear to have been a major research goal prior to the discovery of induced transparency in the mid 1990s. Most of the work up to that point (and, to a lesser, extent after) was directed toward controlling the properties of optical media in general, with group index as one particularly salient parameter that could be controlled, but perhaps at the expense of others. 
Thus the moderate discontinuities in group index might relate to the hypothesized pattern of metrics that receive ongoing concerted effort tending to be more continuous than those receiving weak or sporadic attention.\nPrimary author: Rick Korzekwa\nThanks to Stephen Jordan for suggesting slow light as a potential area of discontinuity.\nNotes", "url": "https://aiimpacts.org/historic-trends-in-slow-light-technology/", "title": "Historic trends in slow light technology", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-02-08T01:56:25+00:00", "paged_url": "https://aiimpacts.org/feed?paged=9", "authors": ["Katja Grace"], "id": "ea81089bef06c400f797611f52191490", "summary": []} {"text": "Penicillin and historic syphilis trends\n\nPenicillin did not precipitate a discontinuity of more than ten years in deaths from syphilis in the US. Nor were there other discontinuities in that trend between 1916 and 2015.\nThe number of syphilis cases in the US also saw steep decline but no substantial discontinuity between 1941 and 2008.\nOn brief investigation, the effectiveness of syphilis treatment and inclusive costs of syphilis treatment do not appear to have seen large discontinuities with penicillin, but we have not investigated either thoroughly enough to be confident.\nDetails\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nBackground\nPenicillin was first used to treat a patient in 19411 and became mass-produced in the US between 1942 and 1944.2 It quickly became the preferred treatment for syphilis, and appears to be generally credited with producing a steep decline in the prevalence of syphilis which was seen at around that time.3 4\nFigure 1: US World War II Poster5\nTrends\nWe consider four metrics of success in treating syphilis: the number of syphilis cases, the number of syphilis deaths, effectiveness of syphilis treatment, and the inclusive cost of treatment.\nIn addition to the size of any discontinuities in years, we tabulated a number of other potentially relevant statistics for each metric here.\nUS Syphilis cases\nData\nFigure 1 shows historic reported syphilis cases after 1941, according to the CDC.6 We converted the data in the figure into this spreadsheet.7\n\nFigure 1: Syphilis—Reported Cases by Stage of Infection, United States, 1941–2009, according to the CDC8\nDiscontinuity Measurement\nAccording to this data, total cases of syphilis declined by around 80% over fifteen years (see Figure 1). We do not see any substantial discontinuities, with 1944 seeing the largest change, equal to only 4 years of progress at the previous rate. Unfortunately, we were unable to find quantitative data prior to 1941, so we were only able to track progress for the three years leading up to the mass production of penicillin.\nFrom our perspective, progress by 1943 may already have been affected by availability of penicillin that we do not know about, in which case we have no earlier trend to go by. However we note that the scale of annual reductions following penicillin is not larger than the increase seen in 1943, and not vastly larger than later annual variations, so the largest abrupt decrease from penicillin seems unlikely to have been large compared to the usual scale of variation.\nUS Deaths from syphilis\nData\nWe collected data from two graphs of historical US syphilis deaths and put it in this spreadsheet. 
The first is shown in Figure 2, and comes from Armstrong et al.’s 1999 report on infectious disease mortality in the United States.9 The authors collected it from historical mortality and population data from the CDC and public use mortality data tapes.10 We used an automatic figure data extraction tool to extract data from the figure.11 Mortality rates after the mid-60s are indistinguishable from zero in this figure, so we do not include them. Instead we include records of total US deaths from Peterman & Kidd, 201912, which we combine with US population data to get mortality rates between 1957 and 2015.\n\nFigure 2: Syphilis mortality rate in the US during the 20th century.13\nFigure 3: Syphilis mortality rate in the US during the 20th century, plotted on a log scale\nDiscontinuity Measurement\nWe calculate discontinuities in our spreadsheet, according to this methodology. There were no substantial discontinuities in progress for reducing syphilis deaths in the US during the time for which we have data. The largest positive deviation from a previous trend was a drop representing five years of progress in around 1940, two years before even enough ‘US penicillin’ was available to treat ten people.14\nIn sum, while deaths from syphilis rapidly declined around the 1940s, this progress was not discontinuous at the scale of years. And while penicillin seems likely to have helped in this decline, it did not yet exist to contribute to the most discontinuously fast progress in that trend (and that progress was still not rapid enough to count as a substantial discontinuity for this project).\nDiscussion of causes\nThe decline of syphilis mortality does not appear to be entirely from penicillin, since it is underway by 1940, just prior to the mass-production of penicillin. This is strange, so it is plausible that we misunderstand some aspect of the situation.\nThe only other factor we know about is US Surgeon General Thomas Parran’s launch of a national syphilis control campaign in 1938.15 Wikipedia also attributes some of the syphilis decline over the 19th and 20th centuries to decreasing virulence of the spirochete, but we don’t know of any reason for that to especially coincide with the 1940s decline.16\nEffectiveness at treating syphilis\nEven if penicillin’s effect on the US death rate from syphilis was gradual, we might expect this to be due to frictions like institutional inertia, rather than from gradual progress in the underlying technology. It might still be that penicillin was a radically better drug than its predecessors, when applied.\nWe briefly investigated whether penicillin might have represented discontinuous progress in effectiveness at curing syphilis, and conclude that it probably did not, because it does not appear to have been clearly better than its predecessor in terms of cure rates. In a 1962 review of treatment of ‘early’ syphilis17, Willcox writes that ‘a seronegativity-rate of 85 per cent. 
at 11 months had been achieved’ in 1944 after penicillin became the primary treatment for syphilis, but also says that the previously common treatment—arsenic and bismuth—was successful in more than 90% of cases in which it was carried out.18\nWillcox explains that the major downsides of the earlier treatment were very high defection rates (with perhaps as few as a quarter of patients completing the treatment), and ‘serious toxic effects’.19 We have not checked that exactly the same notion of success is being used in these figures, have not assessed the reliability of this source, and do not know how important treatment for ‘early’ syphilis is relative to treatment for all syphilis, so it could still be that penicillin was a more effective treatment overall. However we did not investigate this further.\nInclusive costs of treatment\nPenicillin apparently allowed most patients to receive a curative dose of medicine, whereas ‘arsenic and bismuth therapy’ achieved this for perhaps as few as a quarter of patients.20 If penicillin made an abrupt difference to syphilis treatment then, it seems likely to have been in terms of inclusive costs (which were partly reflected in willingness to be treated).\nQualitatively, the costs of treatment do seem to have been much lower. The time for treatment dropped from a year to around eight days.21 Our impression is that the side effects qualitatively reduced from horrible and sometimes deadly to apparently bearable.\nHowever even if penicillin was a large improvement over its predecessors in absolute terms (which seems likely), it would be hard to make a clear case that it was large relative to previous progress in syphilis treatments, because recent progress was also incredible.\nThe ‘arsenic and bismuth therapy’ mentioned above, that preceded penicillin, seems to have been a combination of the arsenic-based drug salvarsan (arsphenamine) and similar drugs developed subsequently, with bismuth. 22 Salvarsan (arsphenamine) was considered such radical improvement over its own predecessors that it was known as the ‘magic bullet’, and won its discoverer Paul Erhlich a Nobel prize.23 A physician at the time describes24:\n\n“Arsenobenzol, designated “606,” whatever the future may bring to justify the present enthusiasm, is now actually a more or less incredible advance in the treatment of syphilis and in many ways is superior to the old mercury – as valuable as this will continue to be – because of its eminently powerful and eminently rapid spirochaeticidal property.”\n\nIt is easy to see how salvarsan could be hugely costly to take, yet still represent large progress over earlier options, when we note that the common treatment prior to salvarsan was mercury,25 which had ‘terrible side effects’ including the death of many patients, characteristically took years, and was not obviously helpful.26\nSo at a glance penicillin doesn’t look to have been clearly discontinuous relative to the impressive recent trend, and measuring inclusive costs is hard to do finely enough to see less clear discontinuities. Thus evaluating these costs quantitatively will remain beyond the scope of this investigation at present. We tentatively guess that penicillin did not represent a large discontinuity in inclusive costs of syphilis treatment, though it did represent huge progress.\nConclusions\nPenicillin probably made quick but not abrupt progress in reducing syphilis and syphilis mortality. 
Penicillin doesn’t appear to have been much more likely to cure a patient than earlier treatments, conditional on the treatment being carried out, but it penicillin treatment appears to have been around four times more likely to be carried out, due to lower costs. Qualitatively penicillin represented an important reduction in costs, but it is hard to evaluate this precisely or compare it with the longer term progress. It appears that as recently as 1910 another drug for syphilis also represented qualitatively huge progress in treatment, so it is unlikely that penicillin was a large discontinuity relative to past progress.\nNotes", "url": "https://aiimpacts.org/penicillin-and-historic-syphilis-trends/", "title": "Penicillin and historic syphilis trends", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-02-08T01:36:10+00:00", "paged_url": "https://aiimpacts.org/feed?paged=9", "authors": ["Asya Bergal"], "id": "b2268636a09ddb672803951619fcb6fe", "summary": []} {"text": "Historic trends in the maximum superconducting temperature\n\nThe maximum superconducting temperature of any material up to 1993 contained four greater than 10-year discontinuities: A 14-year discontinuity with NbN in 1941, a 26-year discontinuity with LaBaCuO4 in 1986, a 140-year discontinuity with YBa2Cu3O7 in 1987, and a 10-year discontinuity with BiCaSrCu2O9 in 1987. \nYBa2Cu3O7 superconductors seem to correspond to a marked change in the rate of progress of maximum superconducting temperature, from a rate of progress of .41 Kelvin per year to a rate of 5.7 Kelvin per year.\nDetails\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nBackground\nSuperconductors were discovered in 1911.1 Until 1986 the maximum temperature for superconducting behavior had gradually risen from around 4K to less than 30K (see figure 2 below). Theory at the time apparently predicted that 30K was an upper limit.2 In 1986 a new class of ceramics known as YBCO superconductors was discovered to allow superconducting behavior at higher temperatures: above 80K,3 and within seven years, above 130K.4\nFigure 1: Levitation of a magnet above a superconductor5\nTrends\nMaximum temperature for superconducting behavior\nWe looked at data for the maximum temperature at which any material is known to have superconducting behavior. 
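As a rough check on how the discontinuity sizes reported in the table below follow from the trend rates (the arithmetic here is only an approximate illustration using the figures quoted on this page; the actual calculation extrapolates the fitted linear trend rather than simply using the previous record), the 1987 YBa2Cu3O7 jump from the 1986 record of 35 K to 93 K, at the earlier rate of about 0.41 Kelvin per year, corresponds to

\[ \frac{93\,\mathrm{K} - 35\,\mathrm{K}}{0.41\,\mathrm{K/year}} \approx 140\ \mathrm{years}, \]

consistent with the 140-year discontinuity reported for YBa2Cu3O7 below.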
\nData\nWe found the following data in a figure from the University of Cambridge’s online learning materials course, DoITPoMS,6 and have verified most of it against other data sources (see our spreadsheet, where we also collected ‘Extended data’ to verify that these were indeed the record temperatures).\nWe display the original figure from DoITPoMS in Figure 2 below, followed by our figure, Figure 3, which includes a more recent superconducting material, H2S.\nFigure 2: Maximum superconducting temperature by material over time through 2000, from the University of Cambridge’s online learning materials course, DoITPoMS,7\nFigure 3: Maximum superconducting temperature by material over time through 2015\nDiscontinuity measurement\nWe modeled this data as linear within two different regimes, one up to LaBaCuO4 in 1986, and another starting with 1986 until our last data point.8 Using previous rates from those trends, we calculated four greater than 10-year discontinuities (rounded), shown in the table below:9\nYear | Temperature | Discontinuity | Material\n1941 | 16 K | 14 years | NbN\n1986 | 35 K | 26 years | LaBaCuO4\n1987 | 93 K | 140 years | YBa2Cu3O7\n1987 | 105 K | 10 years | BiCaSrCu2O9\nIn addition to the size of these discontinuities in years, we have tabulated a number of other potentially relevant metrics here.10\nChanges in the rate of progress\nWe note that there was a marked change in the rate of progress of maximum superconducting temperature with YBa2Cu3O7. The maximum superconducting temperature changed from a rate of progress of 0.41 Kelvin per year to a rate of 5.7 Kelvin per year.11\nNotes", "url": "https://aiimpacts.org/historic-trends-in-the-maximum-superconducting-temperature/", "title": "Historic trends in the maximum superconducting temperature", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-02-08T00:22:32+00:00", "paged_url": "https://aiimpacts.org/feed?paged=9", "authors": ["Asya Bergal"], "id": "1b52d76be14f293d79de1b3df7224111", "summary": []} {"text": "Historic trends in chess AI\n\nThe Elo rating of the best chess program measured by the Swedish Chess Computer Association did not contain any greater than 10-year discontinuities between 1984 and 2018. A four year discontinuity in 2008 was notable in the context of otherwise regular progress. 
\nDetails\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nBackground\nThe history of chess-playing computers is long and rich, partly because chess-playing ability has long been thought (by some) to be a sign of general intelligence.1 The first two ‘chess-playing machines’ were in fact fakes, with small human chess-players crouching inside.2 It was not until 1951 that a program was published (by Alan Turing) that could actually play the full game.3 There has been fairly regular progress since then.4 \nIn 1997 IBM’s chess machine Deep Blue beat Gary Kasparov, world chess champion at the time, under standard tournament time controls.5 This was seen as particularly significant in light of the continued popular association between chess AI and general AI.6 The event marked the point at which chess AI became superhuman, and received substantial press coverage.7 \nThe Swedish Chess Computer Association (SSDF) measures computer chess software performance by playing chess programs against one another on standard hardware.8\nFigure 1: Deep Blue9\nTrends\nSSDF Elo Ratings\nAccording to Wikipedia10:\nThe Swedish Chess Computer Association (Swedish: Svenska schackdatorföreningen, SSDF) is an organization that tests computer chess software by playing chess programs against one another and producing a rating list…The SSDF list is one of the only statistically significant measures of chess engine strength, especially compared to tournaments, because it incorporates the results of thousands of games played on standard hardware at tournament time controls. The list reports not only absolute rating, but also error bars, winning percentages, and recorded moves of played games.\nData\nWe took data from Wikipedia’s list of SSDF Ratings11 (which we have not verified) and added it to this spreadsheet. See Figure 2 below.\n Figure 2: Elo ratings of the best program on SSDF at the end of each year. \nDiscontinuity measurement\nLooking at the data, we assume a linear trend in Elo.12 There are no discontinuities of 10 or more years. \nMinor discontinuity\nThere is a four year discontinuity in 2008. While this is below the scale of interest for our discontinuous progress investigation, it strikes us as notable in the context of otherwise very regular progress.13 We’ve tabulated a number of other potentially relevant metrics for this discontinuity in the ‘Notable discontinuities less than 10 years’ tab here.14 \nThis jump appears to have been partially caused by the introduction of new hardware in the contest, as well as software progress.15\nNotes", "url": "https://aiimpacts.org/historic-trends-in-chess-ai/", "title": "Historic trends in chess AI", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-02-08T00:00:27+00:00", "paged_url": "https://aiimpacts.org/feed?paged=9", "authors": ["Asya Bergal"], "id": "8b56573fc5466f42147140d6d2c05bc2", "summary": []} {"text": "Effect of Eli Whitney’s cotton gin on historic trends in cotton ginning\n\nWe estimate that Eli Whitney’s cotton gin represented a 10 to 25 year discontinuity in pounds of cotton ginned per person per day, in 1793. Two innovations in 1747 and 1788 look like discontinuities of over a thousand years each on this metric, but these could easily stem from our ignorance of such early developments. 
We tentatively doubt that Whitney’s gin represented a large discontinuity in the cost per value of cotton ginned, though it may have represented a moderate one.\nDetails\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nBackground\nCotton fibers grow around cotton seeds, which they need to be separated from before use. This can be done by hand, but since 500 C.E.,1 and plausibly prehistory, a variety of tools have aided in speeding up the process. 2\nThese tools are called ‘cotton gins’. Eli Whitney’s 1793 cotton gin was a particularly famous innovation, commonly credited with having vastly increased cotton’s profitability, fueling an otherwise diminishing demand for slave labor, and so substantially contributing to the American Civil War.3 Variants on Whitney’s gin are known as ‘saw gins’.4 (See Figure 1.) Cotton became more valuable than all other US exports combined during the antebellum era.5 Thus Whitney’s gin is a good contender for representing a discontinuity in innovation. \nOur investigation draws heavily from Lakwete’s Inventing the Cotton Gin. Lakwete summarizes the situation surrounding Whitney’s invention as follows6:\nThe introduction of a new gin in 1794 was as unexpected as it was unprecedented. It was unexpected because the British textile industry had expanded from the sixteenth through the eighteenth centuries without a change in the ginning principle. Cotton producers had increased the acres they planted in cotton and planted new varieties to suit textile makers. The market attracted new producers who, like established planters, used roller gins to process their crops. Roller gins, whether hand-cranked in the Levant and India, or foot-, animal-, and inanimately powered in the Americas, provided adequate amounts of fiber with the qualities that textile makers wanted, namely length and cleanliness. All roller gins removed the fiber by pinching it off in bundles, preserving its length and orientation as grown. Random fragments of fractured seeds were picked out of the fiber before it was bagged and shipped. In 1788 Joseph Eve gave planters and merchants a machine that bridged the medieval and modern. It preserved the ancient roller principle but completed the appropriation of the ginner’s skill, as Arkwright’s frame had that of the spinner. Appropriation had proceeded in stages beginning with the single-roller gin that mechanized the thumb and finger pinching motion. The roller gin in turn appropriated the agility and strength needed to manipulate the single roller, while the foot gin freed both hands to supply seed cotton. The barrel gin used animal and water power, removing humans as a power source but retaining them as seed cotton suppliers. The self-feeding animal-, wind-, or water-powered Eve gin replaced each of the skilled tasks of the ginner with mechanical components. Nevertheless, Eli Whitney’s unprecedented gin filled a vacuum. While large merchants invested in barrel gins and large planter in the Eve gin, the majority continued to use the skill- and labor-intensive foot gin to gin fuzzy-seed short-staple cotton as well as the smooth-seed, Sea Island cotton. Barrel gins had not decreased the number of ginners and only marginally improved ginner productivity, and Eve’s complicated gin was notoriously finicky. 
Whitney ignored these modernizing gins and offered a replacement for the ubiquitous foot gin.\nFigure 1: What appears to be a saw gin of some kind on display at the Eli Whitney Museum7\nTrends\nPounds of cotton ginned per person-day\nWe are most interested in metrics that people were working to improve—in this case, perhaps ‘cost of producing a dollar’s worth of cotton’. Inclusive metrics are hard to measure however. Instead we have collected data on ‘pounds of cotton ginned per person per day’, which is simpler, often reported on, and probably a reasonable proxy. However, it departs from tracking the usefulness of a gin by ignoring several major factors:\n\nUpfront costs: these presumably varied a lot, because a gin can for instance resemble a rolling pin and a board, or involve horses or steam power.8 Thus the gins with higher upfront costs are less useful than their cotton-per-person-day statistic would make them seem. In the mid-1860’s many farms still used foot gins, seemingly because Eve gins—while more efficiently producing high quality output—were expensive.9 Even if everything had the same upfront costs, the existence of upfront costs means that a gin which processes 200lb of cotton with two people per day would be better than one that processes 100lb of cotton with one person, so cotton/person-day still fails to match what we are interested in.\nVariation in labor requirements: Some gins required especially skilled labor.10\nSubstitutes for people: some gins used people to power them and others used animals or water-power, along with a smaller number of people.11 This again makes for higher output per person, but at the cost of additional animals, that we are not accounting for.\n\n\nRisks of injury: Some gins, particularly foot and barrel gins, were dangerous to operate.12\n\nTypes and quality of cotton ginned: Whitney’s gin produced degraded cotton fiber, relative to other gins available at the time.13 However, Whitney’s gin could process short staple cotton, an easier to grow strain which was previously hard to process.14 The cotton industry might adjust to different cotton over time, so that long-run differences in quality of outputs of different gins are smaller than initial differences. If so, we expect value produced by a new gin producing lower quality cotton to grow continuously over an extended period. \n\n\nWe do also investigate overall cost per value of cotton ginned later, but do not have such clear data for it (see section, ‘cost per value of cotton ginned’).\nData\nWe collected claims about cotton gin productivity in the time leading up to Whitney’s gin, and some after. Many but not all are from Angela Lakwete’s book, Inventing the Cotton Gin: Machine and Myth in Antebellum America.15 Our sources are mostly secondhand or thirdhand claims about nonspecific observations in the 1700s. We have the impression that claims in this space are not very reliable.16 We classified claims as ‘credible’ or not, but this is fairly ambiguous, and we would be unsurprised if some of the ‘credible’ claims turned out to be inaccurate, or the ‘non-credible’ ones were correct.\nOur dataset of claims is here, and illustrated in Figures 2 – 5. Note that dates are those when a claim was made, not necessarily dates of the invention of the type of cotton gin in question. This is because invention dates are hard to find, and also because it seems likely that much improvement happened incrementally between distinct ‘inventions’ of new types. 
Nonetheless, this means that a report dated to a time could be from a gin that was built earlier. \nFigure 2: Claimed cotton gin productivity, 1720 to modern day, coded by credibility and being records, and dated by when the claim was made (not necessarily when the gin was made). Claims that are both relatively credible and higher than previous relatively credible claims are few. The last credible best point before the modern day is an improved version of Whitney’s gin, two years after the original (the original features in the two high non-credible claims slightly earlier).\nFigure 3: Historic claimed cotton gin productivity, all time (zoomed out version of Figure 2)\nFigure 4: Zoom-in on credible best cotton gins (excluding modern era)\nDiscontinuity measurement\nFor measuring discontinuities, we treat past progress as exponential at each point, but entering a new exponential regime at the fourth point. We confine our investigation to credible records. Given these things, we find the improved Whitney gin to be a 23-year discontinuity over the previous record in this dataset. However the foot gin and Eve’s mill gin appear to be at least one-thousand year discontinuities each.17. \nHowever our data has at least one key gap. Whitney’s original 1793 gin design was almost immediately copied and improved by many people, most notably Hodgen Holmes and Daniel Clark.18 The plausible productivity data we have appears to all be for these later variants, which we understand were non-negligibly better than Whitney’s original gin in some way.19 So we know that Whitney’s gin should be somewhat lower and two years earlier than our first data for Whitney-style gins. This means at most, Whitney’s original gin would be a 25 year discontinuity. If it accounted for even half of the progress since Eve’s mill gin, and we are not missing further innovations between the two, Whitney’s gin would still represent a 13 year discontinuity, and the later improved version would no longer account for a discontinuity of more than ten years.20 It seems likely to us that Whitney’s gin was at least this revolutionary so we think the Whitney gin probably represented a moderate (10-25 year) discontinuity in pounds of cotton ginned per day.\nWe are fairly uncertain about whether the two larger discontinuities earlier are real, or due to gaps in our data. We did attempt to collect data for these earlier times (rather than just prior to the Whitney gin), but seem very likely to be missing a lot.\nIn addition to the size of these discontinuities in years, we have tabulated a number of other potentially relevant metrics here.21\nChanges in the rate of progress\nOver the history of gin productivity, the average rate of progress became higher. It is unclear whether this happened at a particular point. In our data, it looks as though it happened with the foot gin, in 1747, and that progress went from around .05% per year to around 4% per year (see our spreadsheet, tab ‘credible record gin calculations’). However our data is too sparse and uncertain to draw firm conclusions from.\nCost per value of cotton ginned\nAs discussed above, pounds of cotton ginned per person-day is not a perfect proxy of the value of a cotton gin, and therefore presumably not exactly what cotton-gin users were aiming for. ‘Cost per value of cotton ginned’ seems closer, if we measure costs inclusively and average across various cotton ginning situations. 
We did not collect data on this, but can make some inferences about the shape of this trend—and in particular whether Whitney’s gin represented a discontinuity—from what we know about the pounds/person-day figures and other aspects of the situation.\nEvidence from the trend in pounds of cotton ginned per person-day\nWe expect that the pounds of cotton ginned per person-day roughly approximates cost per value of cotton ginned, with the following adjustments that we know of:\nEve’s gin is worse on cost/value than on cotton/person-day because the latter metric doesn’t reflect its large upfront costs.Whitney’s gin may be worse on cost per value than it appears, because of its lower-quality cotton output.Whitney’s gin may be better than it appears, because it could handle short-staple cotton. However this value seems unlikely to have manifest immediately, since it presumably takes time for cotton users to adjust to a new material.Foot gins and barrel gins (e.g. Eve’s) were dangerous to operate, so are worse on cost/value than they appear.Foot gins apparently required especially skilled labor, so are worse on cost/value than they appear.Barrel gins and Eve gins often ran on non-human power-sources, so are worse on cost/value than they appear.22\nThese are several considerations in favor of Whitney’s gin representing more progress on cost/value than on cotton/person-day, and one against. However it is unclear to us whether the downside of lower quality cotton was larger than the other considerations combined, so the overall effect on the expected size of discontinuity from Whitney’s gin seems ambiguous, but probably in favor of larger.\nEvidence from takeup of gins\nThe foot gin persisted for at least sixty years after Whitney’s invention.23 This suggests that Whitney’s gin wasn’t radically better on cost per value of cotton ginned than its predecessors, at least for some cotton producers.\nOn the other hand, apparently there was a rush to manufacture copies of Whitney’s gin, so much so that many mechanics became professional gin-makers, and most plantations had one of the new gins within five years. 24\nThis suggests that there were situations for which Whitney’s gin was substantially better than alternatives, and situations for which it was worse. This seems like weak evidence that on average across cotton ginning needs it was not radically better than precursors, though there might be narrower metrics we could define on which it was radically better.\nEvidence from cotton production trends\nIf the Whitney gin made cotton much cheaper to process, we might expect cotton production at the time to sharply increase. Our impression is that this is a common story about what happened.25 However the data we could find on this, seemingly from a 1958 history of early US agriculture, suggests that cotton production was already growing rapidly, and continued on a similar trajectory after Whitney’s invention.26 See Figure 5.\nThis dataset begins in 1790, only a few years before Whitney’s invention. 
This is enough to see that the trend just before 1793 is much like the trend just after, however we can further verify this by looking at earlier cotton export figures (Figure 7).27 Cotton exports appear to closely match overall productivity where the trends overlap, and the pre-1790 export trend appears to be roughly continuous with the rest of the curve, at least if we ignore the aberrantly low 1790 figure.\nFigure 5: historic cotton production (bales), probably from Gray et al 195828, see above for elaboration on source, data here.\nFigure 6: Rough collected figures for US cotton exports and production over period from 1780 – 1830, data and sources here.\nFigure 7: close up on relevant years from Figure 6\nThis does not preclude a large change in gin efficacy—perhaps there were other bottlenecks to cotton productivity, or it took time for the gains from Whitney’s gin to manifest in national productivity data. However it does cause us to doubt the story of Whitney’s gin being evidently responsible for massive growth in the cotton industry, which was a reason for suspecting the gin may have represented discontinuous progress. So this is some evidence against Whitney’s gin representing a large discontinuity in cost per value of cotton ginned. \nEvidence from the 1879 evaluation of ginning technology\nThis 1879 evaluation of ginning technology reports on extensive measurement trials of different cotton gins. It was seemingly conducted to understand why Indian cotton production lagged behind American. The author says all methods for ginning cotton in India were primitive until recently; he hears that in some places it was done by hand as late as 1859.29 This is confusing because if Whitney’s gin really was much better in terms of cost-per-value than the alternatives, it would be surprising if sixty years later the alternatives were still in use. However many alternatives seem clearly more cost-effective than ginning by hand, so this seems like little evidence about Whitney’s gin in particular.\nThe outputs of the gins in the experiment seem different from (and usually higher than) the outputs for similarly named gins in our dataset. Which might be confusing, but we expect is because there are modest improvements to gin technology over time within particular classes of gin. \nIn sum, this evidence looks as though it might be informative, but we do not see it as such on consideration.\nEvidence from historians\nWe have not thoroughly reviewed popular or academic opinions on the discontinuousness of the cotton gin, but our impression is that a common popular view is that Eli Whitney’s cotton gin was a discontinuous improvement over the state-of-the-art. On the other hand Dr Lakwete, author of Inventing the Cotton Gin—a book we found most helpful in this project, and that also won an award for being the best scholarly book published about the history of technology30—disagrees, actually explicitly saying it was continuous (though she may mean something different by this than we do): \nCollapsing two hundred years of cotton production and roller gin use in North America in to the moment when Eli Whitney invented the toothed gin, Phineas Miller and Judge Johnson marked 1794 as a turning point in southern development. Before, southeners languished without an effective gin for short-staple cotton; afterwards, the cotton economy blossomed. Arguing for discontinuity, the idea allowed the visualization of a moment and a machine that separated the colonial past from the new republic. 
Continuity, however, marked the history of cotton and the gin in America. Continuity would characterize the first two decades of the nineteenth century, as saw-gin and roller gin makers competed for dominance in the expanding short-staple cotton market. (Lakwete, Angela. Inventing the Cotton Gin: Machine and Myth in Antebellum America. Baltimore, MD: Johns Hopkins University Press, 2003, 71.)\nConclusions on cost per value of cotton ginned\nOn the earlier ‘pounds of cotton ginned per person per day’ metric, we estimated that Whitney’s gin was worth around 10-25 years of past progress. Various considerations suggested Whitney’s gin might have been a bigger deal for overall cost-effectiveness of ginning cotton than that calculation suggested, but the quality of cotton was lower. We took this in total as neutral to weakly favoring Whitney’s gin being better than it seemed. We then saw that the Whitney gin was taken up with enthusiasm by a subset of people needing to gin cotton, that it didn’t seem to recognizably affect the growth of US cotton production, and that at least one historian with particular expertise in this topic thinks that progress was relatively continuous. This does not particularly suggest to us that Whitney’s gin represented a large discontinuity in cost per value of cotton ginned, and seems like some evidence against. \nNotes", "url": "https://aiimpacts.org/effect-of-eli-whitneys-cotton-gin-on-historic-trends-in-cotton-ginning/", "title": "Effect of Eli Whitney’s cotton gin on historic trends in cotton ginning", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-02-07T22:58:55+00:00", "paged_url": "https://aiimpacts.org/feed?paged=9", "authors": ["Katja Grace"], "id": "ccc326ab0866efde447b22d3bc93f763", "summary": []} {"text": "Historic trends in flight airspeed records\n\nFlight airspeed records between 1903 and 1976 contained one greater than 10-year discontinuity: a 19-year discontinuity corresponding to the Fairey Delta 2 flight in 1956.\nThe average annual growth in flight airspeed markedly increased with the Fairey Delta 2, from 16mph/year to 129mph/year. \nDetails\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nBackground\nFlight airspeed records are measured relative to particular classes of aircraft, with official rules defined by the Fédération Aéronautique Internationale (FAI). A flight airspeed record is “the highest airspeed attained by any aircraft of a particular class”.1 \nTrends\nFlight airspeed records\nData\nWe took data from Wikipedia’s list of flight airspeed records2 (which we have not verified) and added it to this spreadsheet. We understand it to be fastest records across all classes of manned aircraft that are able to take off under their own power, but it is not well explained on the page. We included only official airspeed records. See Figure 1 below. \nFigure 1: Flight airspeed records over time\nDiscontinuity measurement\nWe treat the data as linear, and deem it to have begun a new trend once, for the purpose of determining the past rate of progress.3 We calculate the size of discontinuities in this spreadsheet.4 In 1956, there was a 19-year discontinuity in flight airspeed records with the Fairey Delta 2 flight. \nWe tabulated a number of other related metrics here.5\nFigure 2: Fairey Delta 26, whose 1956 record represented a 19 year discontinuity.\nChange in the growth rate\nThe average annual growth in flight airspeed markedly increased at around the time of the Fairey Delta 2. 
Airspeed records grew by an average of 16mph/year up until the one before Fairey Delta 2, whereas from that point until 1965 they grew by an average of 129mph/year.7 \nNotes", "url": "https://aiimpacts.org/historic-trends-in-flight-airspeed-records/", "title": "Historic trends in flight airspeed records", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2020-02-07T22:47:35+00:00", "paged_url": "https://aiimpacts.org/feed?paged=9", "authors": ["Asya Bergal"], "id": "56e15a83368e4339be836c7e3fcaef1e", "summary": []} {"text": "Comparison of naturally evolved and engineered solutions\n\nThis page describes a project that is in progress, and does not yet have results.\nWe are comparing naturally evolved and engineered solutions to problems, to learn about regularities that might let us make inferences about artificial intelligence from what we know about naturally evolved intelligence. \nDetails\nMotivation\nEngineers and evolution have faced many similar design problems. For instance, the problem of designing an efficient flying machine. Another instance of a design problem that engineers and evolution have both worked on is designing intelligent machines. We hope that by looking at other instances of engineers and evolution working on similar problems, we will be able to learn more about how future AI systems will compare to evolved intelligences.\nMethods\nWe will collect examples of optimization problems that engineers and evolution would perform better on if they could. Here are some candidate examples of such problems: \nFlying\nHovering\nSwimming\nRunning\nTraveling long distances\nTraveling quickly\nJumping\nBalancing\nHeight of structure\nPiercing\nApplying compressive force\nStriking\nTensile strength\nPumping blood\nBreathing\nLiver function\nDetecting light\nRecording light\nProducing light\nDetecting sound\nRecording sound\nProducing sound\nHeat insulation\nDetermining chemical composition of a substance\nDetecting chemical composition in the air\nAdhesiveness\nPicking heavy things up\nJoint activation\nElasticity\nToxicity\nExtracting energy from sunlight\nStoring energy\nWe will then collect the best solutions we can readily find to these design problems, made by human engineers and by evolution respectively, and quantitative data on their performances. We will try to collect this over time, for engineered solutions. \nAnalysis\nWe will use the data to answer the following questions for different design problems:\nHow long does it take engineers to reach half of, match, double, triple, etc. the performance of evolution’s current best designs? (A rough sketch of this kind of calculation is given at the end of this section.)\nWhat does the shape of engineers’ performance curve look like around the point where engineers’ solutions first match evolution’s?\nHow efficient (in terms of performance per energy or mass used) are the first solutions that match evolution’s performance compared to evolution’s best solutions?\nHow long does it take engineers to find a more efficient solution after finding an equally good solution in terms of absolute performance?\nFrom a design perspective, how similar are engineers’ first equally good solutions to evolution’s best solutions? \nWe will use patterns in the answers to these questions across technologies to make inferences about the answers for natural and artificial intelligence.\nIn general, the more similar the answers to these questions turn out to be across design problems, the more strongly we will expect the answers for problems addressed by future AI developments to fit the same patterns. 
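As a concrete illustration of the first analysis question above, here is a minimal sketch of how "years until engineers reach a given fraction of evolution's best performance" could be computed from such a dataset. The function name and all numbers below are placeholders for illustration, not real measurements:

```python
def year_target_reached(years, engineered_perf, evolved_best, fraction):
    """First year engineered performance reaches `fraction` of evolution's best,
    linearly interpolating between recorded data points."""
    target = fraction * evolved_best
    for i in range(len(years) - 1):
        p0, p1 = engineered_perf[i], engineered_perf[i + 1]
        if p0 < target <= p1:
            y0, y1 = years[i], years[i + 1]
            return y0 + (target - p0) / (p1 - p0) * (y1 - y0)
    return None  # target not reached within the recorded data

# Placeholder numbers only -- e.g. some efficiency metric for 'flying'.
years = [1900, 1920, 1940, 1960, 1980]
engineered = [1.0, 3.0, 8.0, 20.0, 60.0]
evolved_best = 30.0
for frac in (0.5, 1.0, 2.0):
    print(frac, year_target_reached(years, engineered, evolved_best, frac))
```

Differences between, say, the matching year and the doubling year then give the kind of parity-to-superiority timings asked about above.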
\nWe expect to make the data publicly available, so that others can check our conclusions, investigate related questions, or use it in other investigations of technology and evolution.", "url": "https://aiimpacts.org/comparison-of-naturally-evolved-and-engineered-solutions/", "title": "Comparison of naturally evolved and engineered solutions", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-12-25T02:32:01+00:00", "paged_url": "https://aiimpacts.org/feed?paged=9", "authors": ["Katja Grace"], "id": "84c718f168be3a5389a4b5c05e797818", "summary": []} {"text": "Walsh 2017 survey\n\nToby Walsh surveyed hundreds of experts and non-experts in early 2017 and found their median estimates for ‘when a computer might be able to carry out most human professions at least as well as a typical human’ were as follows:\nProbability of HLMI | AI experts | Robotics experts | Non-experts\n10% | 2035 | 2033 | 2026\n50% | 2061 | 2065 | 2039\n90% | 2109 | 2118 | 2060\nDetails\nToby Walsh, professor of AI at the University of New South Wales and Technical University of Berlin, conducted a poll of AI experts, robotics experts, and non-experts from late January to early February 2017. The survey focused on the potential automation of various occupations and the arrival of high-level machine intelligence (HLMI). \nSurvey respondents\nThere were 849 total survey respondents comprising three separate groups: AI experts, robotics experts, and non-experts.\nThe AI experts consisted of 200 authors from two AI conferences: the 2015 meeting of the Association for the Advancement of AI (AAAI) and the 2011 International Joint Conference on AI (IJCAI).\nThe robotics experts consisted of 101 individuals who were either Fellows of the Institute for Electrical and Electronics Engineers (IEEE) Robotics & Automation Society or authors from the 2016 meeting of the IEEE Conference on Robotics & Automation (ICRA).\nThe non-experts consisted of 548 readers of an article about AI on the website The Conversation. While it seems data on their possible expertise in AI or robotics was not collected, Walsh writes that “it is reasonable to suppose that most are not experts in AI & robotics, and that they are unlikely to be publishing in the top venues in AI and robotics like IJCAI, AAAI or ICRA” (p. 635). Some additional demographic data was collected and reported (for this survey group only):\nGeographic distribution: 36% Australia, 29% United States, 7% United Kingdom, 4% Canada, and 24% rest of the world\nEducation: 85% have an undergraduate degree or higher\nAge: >33% are 34 or under, 59% are under 44, and 11% are 65 or older\nEmployment status: >66% are employed and 25% are in or about to enter higher education\nIncome: 40% reported an annual income of >$100,000\nClassifying occupations at risk of automation\nThe first seven survey questions (out of eight total) asked respondents to classify occupations as either at risk of automation in the next two decades or not (binary response). For each occupation, respondents were provided with information about the work involved and skills required. There were 70 total occupations, which came from a previous study that had used a machine learning (ML) classifier to rank them in terms of their risk for automation. These rankings were then used in the present survey: Each question had respondents classify 10 occupations, starting with the five most likely and five least likely at risk of automation according to the ML classifier. 
This continued through subsequent questions until respondents classified all 70 occupations.\nArrival of high-level machine intelligence (HLMI)\nThe last survey question asked by what year there would be a 10%, 50%, and 90% chance of HLMI, which was defined as “when a computer might be able to carry out most human professions at least as well as a typical human” (p. 634). For each probability respondents chose from among eight options: 2025, 2030, 2040, 2050, 2075, 2100, After 2100, and Never. Median responses were calculated by interpolating the cumulative distribution function between the two nearest dates.\nResults\nProbability of when (in years) HLMI will arrive\nTable 1 below summarizes the median responses and is reproduced here for convenience.\nTable 1\nProbability of HLMI | AI experts | Robotics experts | Non-experts\n10% | 2035 | 2033 | 2026\n50% | 2061 | 2065 | 2039\n90% | 2109 | 2118 | 2060\nFigures 1-3 below show the cumulative distribution functions (CDFs) for 10%, 50%, and 90% probability of HLMI (respectively) at different years.\n\nFigure 1\n\nFigure 2\n\nFigure 3\nOccupations at risk of automation\nTable 2 below contains descriptive statistics about the number of occupations (out of 70 total) classified as being at risk of automation in the next two decades. Confidence intervals (last column) are at the 95% level. It is unclear why the sample size for Non-experts is listed as 473 when earlier in the article the number reported is 548.\nTable 2\n\nThe difference in means between the Robotics (29.0) and AI experts (31.1) was not statistically significant (two-sided t-test, p = 0.096), while the differences in means between both expert groups and the non-expert group (36.5) separately were significant (two-sided t-test, both p’s < 0.0001).\nTable 3 below lists some of the largest differences in the proportion of experts (AI and robotics combined) compared to non-experts who classified occupations as at risk for automation.\nTable 3: Proportion of respondents predicting risk for automation\nOccupation | Experts | Non-experts\nEconomist | 12% | 39%\nElectrical engineer | 6% | 33%\nTechnical writer | 31% | 54%\nCivil engineer | 6% | 30%\nFigure 4 below shows that respondents who predicted that HLMI would arrive earlier also classified more occupations as being at risk of automation (and vice versa).\n\nFigure 4", "url": "https://aiimpacts.org/walsh-2017-survey/", "title": "Walsh 2017 survey", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-12-25T02:17:21+00:00", "paged_url": "https://aiimpacts.org/feed?paged=10", "authors": ["Asya Bergal"], "id": "03832f6838ba44e95f35962aa6d72033", "summary": []} {"text": "Conversation with Adam Gleave\n\nAI Impacts talked to AI safety researcher Adam Gleave about his views on AI risk. With his permission, we have transcribed this interview.\nParticipants\nAdam Gleave — PhD student at the Center for Human-Compatible AI, UC Berkeley\nAsya Bergal – AI Impacts\nRobert Long – AI Impacts\nSummary\nWe spoke with Adam Gleave on August 27, 2019. 
Here is a brief summary of that conversation:\n- Gleave gives a number of reasons why it’s worth working on AI safety:\n  - It seems like the AI research community currently isn’t paying enough attention to building safe, reliable systems.\n  - There are several unsolved technical problems that could plausibly occur in AI systems without much advance notice.\n  - A few additional people working on safety may be extremely high leverage, especially if they can push the rest of the AI research community to pay more attention to important problems.\n- Gleave thinks there’s a ~10% chance that AI safety is very hard in the way that MIRI would argue, a ~20-30% chance that AI safety will almost certainly be solved by default, and a remaining ~60-70% chance that what we’re working on actually has some impact.\n- Here are the reasons for Gleave’s beliefs, weighted by how much they factor into his holistic viewpoint:\n  - 40%: The traditional arguments for risks from AI are unconvincing:\n    - Traditional arguments often make an unexplained leap from having superintelligent AIs to superintelligent AIs being catastrophically bad.\n    - It’s unlikely that AI systems not designed from mathematical principles are going to inherently be unsafe.\n    - They’re long chains of heuristic reasoning, with little empirical validation.\n    - Outside view: most fears about technology have been misplaced.\n  - 20%: The AI research community will solve the AI safety problem naturally.\n  - 20%: AI researchers will be more interested in AI safety when the problems are nearer.\n  - 10%: The hard, MIRI version of the AI safety problem is not very compelling.\n  - 10%: AI safety problems that seem hard now will be easier to solve once we have more sophisticated ML.\n- Fast takeoff defined as “GDP will double in 6 months before it doubles in 24 months” is plausible, though Gleave still leans towards slow takeoff.\n- Gleave thinks discontinuous progress in AI is extremely unlikely:\n  - There is unlikely to be a sudden important insight dropped into place, since AI has empirically progressed more by accumulation of lots of bags of tricks and compute.\n  - There isn’t going to be a sudden influx of compute in the near future, since well-funded organizations are currently already spending billions of dollars to optimize it.\n  - If we train impressive systems, we will likely train other systems beforehand that are almost as capable.\n  - Given discontinuous progress, the most likely story is that we combine many narrow AI systems in a way where the integrated whole is much better than half of them.\n- Gleave guesses a ~10-20% chance that AGI technology will only be a small difference away from current techniques, and a ~50% chance that AGI technology will be easily comprehensible to current AI researchers:\n  - There are fairly serious roadblocks in current techniques right now, e.g. memory, transfer learning, Sim2Real, sample inefficiency.\n  - Deep learning is slowing down compared to 2012 – 2013:\n    - Much of the new progress is going to different domains, e.g. deep RL instead of supervised deep learning.\n    - Computationally expensive algorithms will likely hit limits without new insights.\n    - Though it seems possible that in fact progress will come from more computationally efficient algorithms.\n  - Outside view, we’ve had lots of different techniques for AI over time, so it would be surprising if the current one is the right one for AGI.\n  - Pushing more towards current techniques getting to AGI, from an economic point of view, there is a lot of money going into companies whose current mission is to build AGI.\n- Conditional on advanced AI technology being created, Gleave gives a 60-70% chance that it will pose a significant risk of harm without additional safety efforts.\n  - Gleave thinks that best case, we drive it down to 20 – 10%, median case, we drive it down to 40 – 30%. A lot of his uncertainty comes from how difficult the problem is.\n- Gleave thinks he could see evidence that could push him in either direction in terms of how likely AI is to be safe:\n  - Evidence that would cause Gleave to think AI is less likely to be safe:\n    - Evidence that thorny but speculative technical problems, like inner optimizers, exist.\n    - Seeing more arms race dynamics, e.g. between U.S. and China.\n    - Seeing major catastrophes involving AI, though they would also cause people to pay more attention to risks from AI.\n    - Hearing more solid arguments for AI risk.\n  - Evidence that would cause Gleave to think AI is more likely to be safe:\n    - Seeing AI researchers spontaneously focus on relevant problems would make Gleave think that AI is less risky.\n    - Getting evidence that AGI was going to take longer to develop.\n- Gleave is concerned that he doesn’t understand why members of the safety community come to widely different conclusions when it comes to AI safety.\n- Gleave thinks a potentially important question is the extent to which we can successfully influence field building within AI safety.\nThis transcript has been lightly edited for concision and clarity.\nTranscript\nAsya Bergal: We have a bunch of questions, sort of around the issue of– basically, we’ve been talking to people who are more optimistic than a lot of people in the community about AI. The proposition we’ve been asking people to explain their reasoning about is, ‘Is it valuable for people to be expending significant effort doing work that purports to reduce the risk from advanced artificial intelligence?’ To start with, I’d be curious for you to give a brief summary of what your take on that question is, and what your reasoning is.\nAdam Gleave: Yeah, sure. The short answer is, yes, I think it’s worth people spending a lot of effort on this, at the margins, it’s still in absolute terms quite a small number. Obviously it depends a bit whether you’re talking about diverting resources of people who are already really dedicated to having a high impact, versus having your median AI researchers work more on safety related things. Maybe you think the median AI researcher isn’t trying to optimize for impact anyway, so the opportunity cost might be lower. The case I see from reducing the risk of AI is maybe weaker than some people in the community, but I think it’s still overall very strong.\nThe goal of AI as a field is still to build artificial general intelligence, or human-level AI. If we’re successful in that, it does seem like it’s going to be an extremely transformative technology. There doesn’t seem to be any roadblock that would prevent us from eventually reaching that goal. 
The path to that, the timeline is quite murky, but that alone seems like a pretty strong signal for ‘oh, there should be some people looking at this and being aware of what’s going on.’\nAnd then, if I look at the state of the art in AI, there’s a number of somewhat worrying trends. We seem to be quite good at getting very powerful superhuman systems in narrow domains when we can specify the objective that we want quite precisely. So AlphaStar, AlphaGo, OpenAI Five, these systems are very much lacking in robustness, so you have some quite surprising failure modes. Mostly we see adversarial examples in image classifiers, but some of these RL systems also have somewhat surprising failure modes. This seems to me like an area the AI research community isn’t paying much attention to, and I feel like it’s almost gotten obsessed with producing flashy results rather than necessarily doing good rigorous science and engineering. That seems like quite a worrying trend if you extrapolate it out, because some other engineering disciplines are much more focused on building reliable systems, so I more trust them to get that right by default.\nEven in something like aeronautical engineering where safety standards are very high, there are still accidents in initial systems. But because we don’t even have that focus, it doesn’t seem like the AI research community is going to put that much focus on building safe, reliable systems until they’re facing really strong external or commercial pressures to do so. Autonomous vehicles do have a reasonably good safety track record, but that’s somewhere where it’s very obvious what the risks are. So that’s kinda the sociological argument, I guess, for why I don’t think that the AI research community is going to solve all of the safety problems as far ahead of time as I would like.\nAnd then, there’s also a lot of very thorny technical problems that do seem like they’re going to need to be solved at some point before AGI. How do we get some information about what humans actually want? I’m a bit hesitant to use this phrase ‘value learning’ because you could plausibly do this just by imitation learning as well. But there needs to be some way of getting information from humans into the system, you can’t just derive it from first principles, we still don’t have a good way of doing that.\nThere’s lots of more speculative problems, e.g. inner optimizers. I’m not sure if these problems are necessarily going to be real or cause issues, but it’s not something that we– we’ve not ruled it in or out. So there’s enough plausible technical problems that could occur and we’re not necessarily going to get that much advance notice of, that it seems worrying to just charge ahead without looking into this.\nAnd then to caveat all this, I do think the AI community does care about producing useful technology. We’ve already seen some backlashes against autonomous weapons. People do want to do good science. And when the issues are obvious, there’s going to be a huge amount of focus on them. And it also seems like some of the problems might not actually be that hard to solve. 
So I am reasonably optimistic that in the default case of there’s no safety community really, things will still work out okay, but it also seems like the risk is large enough that just having a few people working on it can be extremely high leverage, especially if you can push the rest of the AI research community to pay a bit more attention to these problems.\nDoes that answer that question?\nAsya Bergal: Yeah, it totally does.\nRobert Long: Could you say a little bit more about why you think you might be more optimistic than other people in the safety community?\nAdam Gleave: Yeah, I guess one big reason is that I’m still not fully convinced by a lot of the arguments for risks from AI. I think they are compelling heuristic arguments, meaning it’s worth me working on this, but it’s not compelling enough for me to think ‘oh, this is definitely a watertight case’.\nI think the common area where I just don’t really follow the arguments is when you say, ‘oh, you have this superintelligent AI’. Let’s suppose we get to that, that’s already kind of a big leap of faith. And then if it’s not aligned, humans will die. It seems like there’s just a bit of a jump here that no one’s really filled in.\nIn particular it seems like sure, if you have something sufficiently capable, both in terms of intelligence and also access to other resources, it could destroy humanity. But it doesn’t just have to be smarter than an individual human, it has to be smarter than all of humanity potentially trying to work to combat this. And humanity will have a lot of inside knowledge about how this AI system works. And it’s also starting from a potentially weakened position in that it doesn’t already have legal protection, property ownership, all these other things.\nI can certainly imagine there being scenarios unfolding where this is a problem, so maybe you actually give an AI system a lot of power, or it just becomes so, so much more capable than humans that it really is able to outsmart all of us, or it might just be quite easy to kill everyone. Maybe civilization is just much more fragile than we think. Maybe there are some quite easy bio ex-risks or nanotech that you could reason about from first principles. If it turned out that a malevolent but very smart human could kill all of humanity, then I would be more worried about the AI problem, but then maybe we should also be working on the human x-risk problem. So that’s one area that I’m a bit skeptical about, though maybe flushing that argument out more is bad for info-hazard reasons. \nThen the other thing is I guess I feel like there’s a distribution of how difficult the AI safety problem is going to be. So there’s one world where anything that is not designed from mathematical principles is just going to be unsafe– there are going to be failure modes we haven’t considered, these failure modes are only going to arise when the system is smart enough to hurt you, and the system is going to be actively trying to deceive you. So this is I think, maybe a bit of a caricature, but I think this is roughly MIRI’s viewpoint. I think this is a productive viewpoint to inhabit when you’re trying to identify problems, but I think it’s probably not the world we actually live in. If you can solve that version, great, but it seems like a lot of the failure modes that are going to occur with advanced AI systems you’re going to see signs of earlier, especially if you’re actually looking out for them.\nI don’t see much reason for AI progress to be discontinuous in particular. 
So there’s a lot of empirical records you could bring to bear on this, and it also seems like a lot of commercially valuable interesting research applications are going to require solving some of these problems. You’ve already seen this with value learning, that people are beginning to realize that there’s a limitation to what we can just write a reward function down for, and there’s been a lot more focus on imitation learning recently. Obviously people are solving much narrower versions of what the safety community cares about, but as AI progresses, they’re going to work on broader and broader versions of these problems.\nI guess the general skepticism I have with the arguments, is, a lot of them take the form of ‘oh, there’s this problem that we need to solve and we have no idea how to solve it,’ but forget that we only need to solve that problem once we have all this other treasure trove of AI techniques that we can bring to bear on the problem. It seems plausible that this very strong unsupervised learning is going to do a lot of heavy lifting for us, maybe it’s going to give us a human ontology, it’s going to give us quite a good inductive bias for learning values, and so on. So there’s just a lot of things that might seem a lot stickier than they actually are in practice.\nAnd then, I also have optimism that yes, the AI research community is going to try to solve these problems. It’s not like people are just completely disinterested in whether their systems cause harm, it’s just that right now, it seems to a lot of people very premature to work on this. There’s a sense of ‘how much good can we do now, where nearer to the time there’s going to just be naturally 100s of times more people working on the problem?’. I think there is still value you can do now, in laying the foundations of the field, but that maybe gives me a bit of a different perspective in terms of thinking, ‘What can we do that’s going to be useful to people in the future, who are going to be aware of this problem?’ versus ‘How can I solve all the problems now, and build a separate AI safety community?’.\nI guess there’s also the outside view of just, people have been worried about a lot of new technology in the past, and most of the time it’s worked out fine. I’m not that compelled by this. I think there are real reasons to think that AI is going to be quite different. I guess there’s also just the outside view of, if you don’t know how hard a problem is, you should put a probability distribution over it and have quite a lot of uncertainty, and right now we don’t have that much information about how hard the AI safety problem is. Some problems seem to be pretty tractable, some problems seem to be intractable, but we don’t know if they actually need to be solved or not. \nSo, decent chance– I think I put a reasonable probability, like 10% probability, on the hard-mode MIRI version of the world being true. In which case, I think there’s probably nothing we can do. And I also put a significant probability, 20-30%, on AI safety basically not needing to be solved, we’ll just solve it by default unless we’re completely completely careless. And then there’s this big chunk of probability mass in the middle where maybe what we’re working on will actually have an impact, and obviously it’s hard to know whether at the margin, you’re going to be changing the outcome.\nAsya Bergal: I’m curious– I think a lot of people we’ve talked to, some people have said somewhat similar things to what you said. 
And I think there’s two classic axes on which peoples’ opinions differ. One is this slow takeoff, fast takeoff proposition. The other is whether they think something that looks like current methods is likely to lead to AGI. I’m curious on your take on both those questions.\nAdam Gleave: Yeah, sure. So, for slow vs. fast takeoff, I feel like I need to define the terms for people who use them in slightly different ways. I don’t expect there to be a discontinuity, in the sense of, we just see this sudden jump. But I wouldn’t be that surprised if there was exponential growth and quite a high growth rate. I think Paul defines fast takeoff as, GDP will double in six months before it doubles in 24 months. I’m probably mangling that but it was something like that. I think that scenario of fast takeoff seems plausible to me. I probably am still leaning slightly more towards the slow takeoff scenario, but it seems like fast takeoff will be plausible in terms of very fast exponential growth.\nI think a lot of the case for the discontinuous progress argument falls on there being sudden insight that dropped into place, and it doesn’t seem to me like that’s what’s happening in AI, it’s more just a cumulation of lots of bags of tricks and a lot of compute. I also don’t see there being bags of compute falling out of the sky. Maybe if there was another AI winter, leading to a hardware overhang, then you might see sudden progress when AI gets funding again. But right now a lot of very well-funded organizations are spending billions of dollars on compute, including developing new application-specific integrated circuits for AI, so we’re going to be very close to the physical limits there anyway. \nProbably the strongest case I see for discontinuities are the discontinuities you see when you’re training systems. But I just don’t think that’s going to be strong enough, because you’ll train other systems before that’ll be almost as capable. I guess we do see sometimes cases where one technique lets you solve a new class of problems.\nMaybe you could see something where you get increasingly capable narrow systems, and there’s not a discontinuity overall, you already had very strong narrow AI. But eventually you just have so many narrow AI systems that they can basically do everything, and maybe you get to a stage where the integrated whole of those is much stronger than if you just had half of them, let’s say. I guess this is sort of the comprehensive AI services model. But again that seems a bit unlikely to me, because most of the time you can probably outsource some other chunks to humans if you really needed to. But yeah, I think it’s a bit more plausible than some of the other stories.\nAnd then, in terms of whether I think current techniques are likely to get us to human-level AI– I guess I put significant probability mass on that depending on how narrowly you define it. One fuzzy definition is that a PhD thesis describing AGI being something that a typical AI researcher today could read and understand without too much work. Under this definition I’d assign 40 – 50%. And that could still include introducing quite a lot of new techniques, right, but just– I mean plausibly I think something based on deep learning, deep RL, you could describe to someone in the 1970s in a PhD thesis and they’d still understand it. But it’s just showing you, it’s not that much real theory that was developed, it was applying some pretty simple algorithms and a lot of compute in the right way. 
Which implies no huge new theoretical insights.\nBut if we’re defining it more narrowly, only allowing small variants of current techniques, I think that’s much less likely to lead to AGI: around 10-20%. I think that case is almost synonymous with the argument that you just need more compute, because it seems like there are so many things right now that we really cannot do: we still don’t have great solutions to memory, we still can’t really do transfer learning, Sim2Real just barely works sometimes. We’re still extremely sample inefficient. It just feels like all of those problems are going to require quite a lot of research in themselves. I can’t see there being one simple trick that would solve all of them. But maybe, current algorithms if you gave them 10000x compute would do a lot better on these, that is somewhat plausible.\nAnd yeah, I do put fairly significant probability, 50%, on it being something that is kind of radically different. And I guess there’s a couple of reasons for that. One is, just trying to extrapolate progress forward, it does seem like there are some fairly serious roadblocks. Deep learning is slowing down in terms of, it’s not hitting as many big achievements as it was in the past. And also just AI has had many kinds of fads over time, right. We’ve had good old-fashioned AI, symbolic AI, we had expert systems, we had Bayesianism. It would be sort of surprising that the current method is the right one.\nI don’t find that people are focusing on these techniques is necessarily particularly strong evidence that these systems are going to lead us to AGI. First, many researchers are not focused on AGI, and you can probably get useful applications out of current techniques. Second, AI research seems like it can be quite fashion driven. Obviously, there are organizations whose mission is to build AGI who are working within the current paradigm. And I think it is probably still the best bet, of the things that we know, but I still think it’s a bet that’s reasonably unlikely to pay off.\nDoes that answer your question?\nAsya Bergal: Yeah.\nRobert Long: Just on that last bit, you said– I might just be mixing up the different definitions you had and your different credences in those– but in the end there you said that’s a bet that you think is reasonably unlikely to pay off, but you’d also said 50% that it’s something radically different, so how– I think I was just confusing which ones you were on.\nAdam Gleave: Right. So, I guess these definitions are all quite fuzzy, but I was saying 10-20% that something that is only a small difference away from current techniques would build AGI, and 50% that AGI was going to be comprehensible to us. I guess the distinction I’m trying to draw is the narrow one, which I give 10-20% credence, is we basically already have the right algorithms and we just need a few tricks and more compute. And the other more expansive definition, which I give 40-50% credence to, is allows for completely different algorithms, but excludes any deep theoretical insight akin to a whole new field of mathematics. So we might not be using back propagation any longer, we might not be using gradient descent, but it’ll be something similar — like the difference between gradient descent and evolutionary algorithms.\nThere’s a separate question of, if you’re trying to build AGI right now, where should you be investing your resources? Should you be trying to come up with a completely new novel theory, or should you be trying to scale up current techniques? 
And I think it’s plausible that you should just be trying to scale up techniques and figure out if we can push them forward, because trying to come up with a completely new way of doing AI is also very challenging, right. It’s not really a sort of insight you can force.\nAsya Bergal: You kind of covered this earlier– and maybe you even said the exact number, so I’m sorry if this is a repeat. But one thing we’ve been asking people is the credence that without additional intervention– so imagining a world where EA wasn’t pushing for AI safety, and there wasn’t this separate AI safety movement outside of the AI research community, imagining that world. In that world, what is the chance that advanced artificial intelligence poses a significant risk of harm?\nAdam Gleave: The chance it does cause a significant risk of harm?\nAsya Bergal: Yeah, that’s right.\nAdam Gleave: Conditional on advanced artificial intelligence being created, I think 60, 70%. I have a much harder time giving an unconditional probability, because there are other things that could cause humanity to stop developing AI. Is a conditional probability good enough, or do you want me to give an unconditional one?\nAsya Bergal: No, I think the conditional one is what we’re looking for.\nRobert Long: Do you have a hunch about how much we can expect dedicated efforts to drive down that probability? That is, the EA-focused AI safety efforts.\nAdam Gleave: I think the best case is, you drive it down to 20 – 10%. I’m kind of picturing a lot of this uncertainty coming from just, how hard is the problem technically? And if we do inhabit this really hard version where you have to solve all of the problems perfectly and you have to have a formally verified AI system, I just don’t think we’re going to do that in time. You’d have to solve a very hard coordination problem to stop people developing AI without those safety checks. It seems like a very expensive process, developing safe AI.\nI guess the median case, where the AI safety community just sort of grows at its current pace, I think maybe that gets it down to 40 – 30%? But I have a lot of uncertainty in these numbers.\nAsya Bergal: Another question, going back to original statements for why you believe this– do you think there’s plausible concrete evidence that we could get or are likely to get that would change your views on this one direction or the other?\nAdam Gleave: Yeah, so, seeing evidence of some of the more thorny but currently quite speculative technical problems, like inner optimizers, would make me update towards, ‘oh, this is just a really hard technical problem, and unless we really work hard on this, the default outcome is definitely going to be bad’. Right now, no one’s demonstrated an inner optimizer existing, it’s just a sort of theoretical problem. This is a bit of an unfair thing to ask in some sense, in that the whole reason that people are worried about this is that it’s only a problem with very advanced AI systems. Maybe I’m asking for evidence that can’t be provided. But relative to many other people, I am unconvinced by heuristic arguments appealing just to mathematical intuitions. I’m much more convinced either by very solid theoretical arguments that are proof-based, or by empirical evidence.\nAnother thing that would update me in a positive direction, as in AI seems less risky, would be seeing more AI researchers spontaneously focus on some relevant problems. 
There’s already, I guess this is a bit of a tangent, but I think maybe– people tend to conceive as the AI safety community as people who would identify as AI safety researchers. But I think the vast majority of AI safety research work is happening by people who have never heard of AI safety, but they have been working on related problems. This is useful to me all of the time. I think where we could plausibly end up having a lot more of this work happening without AI safety ever really becoming a thing is people realizing ‘oh, I want my robot to do this thing and I have a really hard time making it do that, let’s come up with a new imitation learning technique’.\nBut yeah, other things that could update me positively… I guess, AI seeming like a harder problem, as in, it seems like AI, general artificial intelligence is further away, that would probably update me in a positive direction. It’s not obvious but I think generally all else being equal, longer timelines is going to generally have more time to diagnose problems. And also it seems like the current set of AI techniques — deep learning and very data-driven approaches — are particularly difficult to analyze or prove anything about, so some other paradigm is probably going to be better, if possible.\nOther things that would make me scared would be more arms race dynamics. It’s been very sad to me what we’re seeing with China – U.S. arms race dynamics around AI, especially since it doesn’t even seem like there is much direct competition, but that meme is still being pushed for political reasons. \nAny actual major catastrophes involving AI would make me think it’s more risky, although it would also make people pay more attention to AI risk, so I guess it’s not obvious what direction it would push overall. But it certainly would make me think that there’s a bit more technical risk.\nI’m trying to think if there’s anything else that would make more pessimistic. I guess just more solid arguments for AI safety, because a lot of my skepticism is coming from there’s just this very unlikely sounding set of ideas, and there are just heuristic arguments that I’m convinced enough by to work on the problem, but not convinced by enough to say, this is definitely going to happen. And if there was a way to patch some of the holes in those arguments, then I probably would be more convinced as well.\nRobert Long: Can I ask you a little bit more about evidence for or against AGI being a certain distance away? You mentioned that as evidence that would change your mind. What sort of evidence do you have in mind?\nAdam Gleave: Sure, so I guess a lot of the short timelines scenarios basically are coming from current ML techniques scaling to AGI, with just a bit more compute. So, watching for if those milestones are being achieved at the rate I was expecting, or slower.\nThis is a little bit hard to crystallize, but I would say right now it seems like the rate of progress is slowing down compared to something like 2012, 2013. And interestingly, I think a lot of the more interesting progress has come from, I guess, from going to different domains. So we’ve seen maybe a little bit more progress happening in deep RL compared to supervised deep learning. And the optimistic thing is to say, well, that’s because we’ve solved supervised learning, but we haven’t really. We’ve got superhuman performance on ImageNet, but not on real images that you just take on your mobile phone. And it’s still very sample inefficient, we can’t do few-shot learning well. 
Sometimes it seems like there’s a lack of interest on the part of the research community in solving some of these problems. I think it’s partly because no one has a solid angle of attack on solving these problems.\nSimilarly, while some of the recent progress in deep RL has been very exciting, it seems to have some limits. For example, AlphaStar and OpenAI Five both involved scaling up self-play and population based training. These were hugely computationally expensive, and that was where a lot of the scaling was coming from. So while there have been algorithmic improvements, I don’t see how you get this working in much more complicated environments without either huge additional compute or some major insights. These are things that are pushing me towards thinking deep learning will not continue to scale, and therefore very short timelines are unlikely.\nSomething that would update me towards shorter timelines would be if  something that I thought was impossible turns out to be very easy. So OpenAI Five did update me positively, because I just didn’t think PPO was going to work well in Dota and it turns out that it does if you have enough compute. I don’t think it updated me that strongly towards short timelines, because it did need a lot of compute, and if you scale it to a more complex game you’re going to have exponential scaling. But it did make me think, well, maybe there isn’t a deep insight required, maybe this is going to be much more about finding more computationally efficient algorithms rather than lots of novel insights.\nI guess there’s also sort of economic factors– I mention mostly because I often see people neglecting them. One thing that makes me bullish on short timelines is that, there’s some very well-resourced companies whose mission is to build AGI. OpenAI just raised a billion, DeepMind is spending considerable resources. As long as this continues, it’s going to be a real accelerator. But that could go away: if AI doesn’t start making people money, I expect another AI winter.\nRobert Long: One thing we’re asking people, and again I think you’ve actually already given us a pretty good sense of this, is just a relative weighting of different considerations. And as I say that, you actually have already been tagging this. But just to half review, from what I’ve scrawled down. A lot of different considerations in your relative optimism are: cases for AI as an x-risk being not as watertight as you’d like them, arguments for failure modes being the default and really hard, not being sold on those arguments, ideas that these problems might become easier to solve the closer we get to AGI when we have more powerful techniques, and then the general hope that people will try to solve them as we get closer to AI. Yeah, I think those were at least some of the main considerations I got. How strong relatively are those considerations in your reasoning?\nAdam Gleave: I’m going to quote numbers that may not add up to 100, so we’ll have to normalize it at the end. I think the skepticism surrounding AI x-risk arguments is probably the strongest consideration, so I would put maybe 40% of my weight on that. This is because the outside view is quite strong to me, so if you talk about this very big problem that there’s not much concrete evidence for, then I’m going to be reasonably optimistic that actually we’re wrong and there isn’t a big problem.\nThe second most important thing to me is the AI research community solving this naturally. 
We’re already seeing signs of a set of people beginning to work on related problems, and I see this continuing. So I’m putting 20% of my weight on that.\nAnd then, the hard version of AI safety not seeming very likely to me, I think that’s 10% of the weight. This seems reasonably important if I buy into the AI safety argument in general, because that makes a big difference in terms of how tractable these problems are. What were the other considerations you listed?\nRobert Long: Two of them might be so related that you already covered them, but I had distinguished between the problems getting easier the closer we get, and people working more on them the closer we get.\nAdam Gleave: Yeah, that makes sense. I think I don’t put that much weight on the problems getting easier. Or I don’t directly put weight on it, maybe it’s just rolled into my skepticism surrounding AI safety arguments, because I’m going to naturally find an argument a bit uncompelling if you say ‘we don’t know how to properly model human preferences’. I’m going to say, ‘Well, we don’t know how to properly do lots of things humans can do right now’. So everything needs to be relative to our capabilities. Whereas I find arguments of the form ‘we can solve problems that humans can’t solve, but only when we know how to specify what those problems are’, that seems more compelling, that’s talking about a relative strength between ability to optimize vs. ability to specify objectives. Obviously that’s not the only AI safety problem, but it’s a problem.\nSo yeah, I think I’m putting a lot of the weight on people paying more attention to these problems over time, so that’s probably actually 15 – 20% of my weight. And then I’ll put 5% on the problems getting easier and then some residual probability mass on things I haven’t thought about or haven’t mentioned in this conversation.\nRobert Long: Is there anything you wish we had asked that you would like to talk about?\nAdam Gleave: I guess, I don’t know if this is really useful, but I do wish I had a better sense of what other people in the safety community and outside of it actually thought and why they were working on it, so I really appreciate you guys doing these interviews because it’s useful to me as well. I am generally a bit concerned about lots of people coming to lots of different conclusions regarding how pessimistic we should be, regarding timelines, regarding the right research agenda. \nI think disagreement can be healthy because it’s good to explore different areas. The ideal thing would be for us to all converge to some common probability distribution and we decide we’re going to work on different areas. But it’s very hard psychologically to do this, to say, ‘okay, I’m going to be the person working on this area that I think isn’t very promising because at the margin it’s good’– people don’t work like that. It’s better if people think, ‘oh, I am working on the best thing, under my beliefs’. So having some diversity of beliefs is good. But it bothers me that I don’t know why people have come to different conclusions to me. If I understood why they disagree, I’d be happier at least.\nI’m trying to think if there’s anything else that’s relevant… yeah, so I guess another, this is merely just a question for you guys to maybe think about, is, I’m still unsure about how valuable field-building should be. And in particular, to what extent AI safety researchers should be working on this. 
It seems like a lot of reasons why I was optimistic assume the the AI research community is going to solve some of these problems naturally. A natural follow up to that is to ask whether we should be doing something to encourage this to happen, like writing more position papers, or just training up more grad students? Should we be trying to actively push for this rather than just relying on people to organically develop an interest in this research area? And I don’t know whether you can actually change research directions in this way, because it’s very far outside my area of expertise, but I’d love someone to study it.", "url": "https://aiimpacts.org/conversation-with-adam-gleave/", "title": "Conversation with Adam Gleave", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-12-24T03:08:20+00:00", "paged_url": "https://aiimpacts.org/feed?paged=10", "authors": ["Asya Bergal"], "id": "a1ede5c4ef216a30a8fc4355da4b20a0", "summary": []} {"text": "Historic trends in ship size\n\nThis page may be out-of-date. Visit the updated version of this page on our wiki.Trends for ship tonnage (builder’s old measurement) and ship displacement for Royal Navy first rate line-of-battle ships saw eleven and six discontinuities of between ten and one hundred years respectively during the period 1637-1876, if progress is treated as linear or exponential as usual. There is a hyperbolic extrapolation of progress such that neither measurement sees any discontinuities of more than ten years.\nWe do not have long term data for ship size in general, however the SS Great Eastern seems to have represented around 400 years of discontinuity in both tonnage (BOM) and displacement if we use Royal Navy ship of the line size as a proxy, and exponential progress is expected, or 11 or 13 in the hyperbolic trend. This discontinuity appears to have been the result of some combination of technological innovation and poor financial decisions.\nDetails\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nBackground\nAccording to Wikipedia, naval tactics in the age of sail rewarded larger ships, because larger ships were harder to sink and could carry more guns, and battles were usually lengthy affairs in which two lines of ships fired at each other until one side surrendered.1 2 Our understanding is that when steamships and iron-clad ships appeared, financial constraints sometimes prevented navies from building ships as big as technically possible, but the incentives towards bigger ships remained, since the best way to punch through heavy armor was to carry heavy guns, which required a big ship.3 \nFigure 1: A Royal Navy First-Rate Ship of the Line4\nTrends\nRoyal Navy first-rate line-of-battle ships ‘tonnage’ (BOM)\n‘Tonnage’ (BOM) is a pre-20th century measure of ship cargo capacity.5 It is calculated as:\n\nWe use it because that’s what we have data on; displacement, the modern way of calculating ship weight, was not widely recorded until towards the end of our dataset. Unfortunately, BOM seems to be less accurate for estimating the cargo capacity of ships after 1860, which could affect some of the findings in the report. 
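For reference, the standard Builder’s Old Measurement formula is tonnage = (length - 3/5 * beam) * beam * (beam / 2) / 94, with length and beam in feet. The short Python sketch below illustrates the calculation; the function name and the example dimensions are our own illustration rather than values taken from the spreadsheet.\ndef bom_tonnage(length_ft, beam_ft):\n    # Builder's Old Measurement: (length - 3/5 * beam) * beam * (beam / 2) / 94, dimensions in feet.\n    return (length_ft - 0.6 * beam_ft) * beam_ft * (beam_ft / 2.0) / 94.0\n\n# The Great Eastern's commonly quoted dimensions, roughly 692 ft by 82 ft, give about\n# the 22,990 tons (BOM) quoted later on this page.\nprint(round(bom_tonnage(692, 82)))  # ~22990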
However, our Spot check section goes into more detail about this and offers some evidence that this choice of metric isn’t responsible for producing the largest discontinuity as an uninteresting artifact.\nData\nFigure 2 shows ship ‘tonnage’ (BOM) over time for UK Royal Navy first-rate line-of-battle ships, according to Wikipedia contributors Toddy1 and Morn The Gorn.6 This spreadsheet contains their data. We have not vetted it thoroughly, but have spot-checked it (see Spot check section below). We extract the record-breaking subset of ships (see Figure 3).\nFigure 2: Tonnage (in BOM) of Royal Navy First-Rate Line-of-Battle Ships, from Wikipedia.7\nFigure 3: The subset of tonnages from Figure 2 that are the highest so far.\nDiscontinuity measurement\nExponential prior\nIf we have a strong prior on technological trends being linear or exponential, we might treat this data as a linear trend through 1804 followed by an exponential trend.8 Extrapolated in this way, tonnage saw eleven greater than ten year discontinuities in this data, shown in the table below.9\nYear | Tonnage (BOM) | Discontinuity | Name\n1701 | 1,883 | 11 years | Royal Sovereign\n1756 | 2,047 | 13 years | Royal George\n1762 | 2,116 | 10 years | Britannia\n1795 | 2,351 | 31 years | Ville de Paris\n1804 | 2,530 | 25 years | Hibernia\n1839 | 3,104 | 41 years | Queen\n1852 | 3,759 | 41 years | Duke of Wellington\n1859 | 4,116 | 12 years | Victoria\n1860 | 6,039 | 77 years | Warrior\n1863 | 6,643 | 13 years | Minotaur\n1867 | 8,946 | 33 years | Inflexible\nIn addition to the sizes of these discontinuities in years, we have tabulated a number of other potentially relevant metrics here.10\nOther curves\nWith a weaker prior on linear or exponential trends in technology progress, one might prefer to extrapolate this data as a more exotic curve, such as a hyperbola. For instance, Tonnage = (1/(c*year + d))^(1/3) for some constants c and d appears to be a good model, since 1/(tonnage^3) looks fairly linear (see Figure 4).\nFigure 4: 1/(tonnage^3) is roughly linear\nUsing this to extrapolate past progress, we get no discontinuities (see our spreadsheet, sheet ‘Tonnage calculations’ for this calculation). However this is unsurprising toward the end, since hyperbolas have asymptotes (potentially going to infinity in finite time), and this particular one reaches such a singularity in about 1869. So on that model, any size ship is expected by 1869, and discontinuities cannot be larger than the time remaining until that date. (The largest discontinuity is nine years, from Warrior, which is within a year of the implied ship-tonnage singularity.)\nDiscussion of causes\nGiven that modeling the data as hyperbolic means there are no discontinuities, a plausible cause for apparent discontinuities when modeling it as exponential is that the process of ship size increase is fundamentally closer to being hyperbolic (though it must have departed from this trend before long, since it would have implied arbitrarily large ships from 1869). We do not know why this trend in particular would be hyperbolic, given that we understand exponential curves to be much more common in technological progress.\nOn a brief investigation of possible causes of particular discontinuities in this trend, Ville de Paris, Hibernia, and Queen do not appear to use any dramatically different technology to previous ships.
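To make the ‘years of discontinuity’ figures above concrete, here is a minimal sketch of how a jump can be converted into years of progress at the previous exponential rate. The function and the illustrative numbers are ours, not the calculation in the linked spreadsheet, which also handles linear segments and judgments about when a new trend begins.\nimport numpy as np\n\ndef discontinuity_years(years, values, new_year, new_value):\n    # Fit an exponential trend (a straight line in log space) to the earlier records.\n    slope, intercept = np.polyfit(years, np.log(values), 1)\n    expected_log = slope * new_year + intercept\n    # Express the jump above trend as years of progress at the fitted rate.\n    return (np.log(new_value) - expected_log) / slope\n\n# Hypothetical records growing at roughly 1% per year, then a much larger new record.\nprior_years = [1800, 1810, 1820, 1830, 1840, 1850]\nprior_tons = [2500, 2760, 3050, 3370, 3720, 4110]\nprint(round(discontinuity_years(prior_years, prior_tons, 1860, 6000), 1))  # about 28 years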
\nThe Duke of Wellington was the first Royal Navy ship of the line to be steam powered, and it was apparently lengthened to fit the engines.11\nThe largest discontinuity was from Warrior, which was one of the two first armor-plated, iron-hulled warships.12 It seems likely that iron hulls allowed larger ships. For example, the wooden steamship Mersey, unusually large for a wooden ship yet smaller than the Warrior, is considered to have been beyond the limits of wood as a structural material.13 Moreover, the very large civilian ship Great Eastern made extensive use of iron for structure and appears to have been regarded as innovative for its structure.14 So plausibly this was an important enough innovation to produce an immediate jump in ship size.\nSpot check\nOur metric, BOM, doesn’t measure volume or weight directly; it is derived from a ship’s width and length. Thus it might often be a reasonable proxy for a more normal notion of size, but could change arbitrarily between different ship designs. There is particular reason to suspect this here, since according to Wikipedia, “[s]teamships required a different method of estimating tonnage, because the ratio of length to beam was larger and a significant volume of internal space was used for boilers and machinery.”15\nTo check whether the Warrior discontinuity was an artifact of this measurement scheme, we also searched for displacement figures for some of these ships. (We also made a brief attempt to find ships from other navies, like the French, that might destroy the discontinuity. We didn’t find any.) We did not collect many, but they cover the period of the largest discontinuity, 1850-1860, and confirm that it is probably robust to different ship size metrics, and thus not an artifact. See this spreadsheet.\nRoyal Navy first-rate line-of-battle ships displacement (tons)\nThe displacement of a ship is its weight, measured by looking at the amount of water that a ship displaces when it’s floating.16\nData\nWe took displacement and ‘estimated displacement’ data from the same Wikipedia table17 for Royal Navy first-rate line-of-battle ships and put it in this spreadsheet, sheet ‘Displacement calculations’. Figure 5 below shows this data.\nFigure 5: Ship Weight (displacement) over time\nDiscontinuity Measurement\nIf we model this data as a linear trend through 1795 followed by an exponential trend,18 then compared to previous rates in these trends, displacement contained six greater than ten year discontinuities, shown in the table below.19\nYear | Displacement (tons) | Discontinuity | Name\n1804 | 4,200 | 17 years | Hibernia\n1839 | 5,100 | 35 years | Queen\n1852 | 5,829 | 25 years | Duke of Wellington\n1860 | 7,000 | 30 years | Victoria\n1860 | 9,180 | 59 years | Warrior\n1863 | 10,690 | 23 years | Minotaur\nIn addition to the sizes of these discontinuities in years, we have tabulated a number of other potentially relevant metrics here.20\nOnce again, if we model the data as hyperbolic, it contains no discontinuities of more than ten years, however this is unsurprising after about 1859, given the proximity of the asymptote (see further explanation in previous section, and calculations in the spreadsheet, sheet ‘Displacement calculations’).\nThe SS Great Eastern\nIn the process of another investigation, we noted that a civilian ship, the SS Great Eastern, launched in 1858, was about six times larger by volume than any other ship at the time.
It apparently took more than forty years for its length, gross tonnage, and passenger capacity to be surpassed.21 We calculate its tonnage (BOM) to be 22,990 tons (see spreadsheet, tab Tonnage calculations). Supposing our Royal Navy dataset is a good proxy for overall ship size records during this time, and treating past progress as exponential, the SS Great Eastern represents a 416-year discontinuity in tonnage (BOM) over previous Royal Navy ships. (See spreadsheet, tab Tonnage calculations) If the trend is modeled as a hyperbola instead, the SS Great Eastern still represents an 11 year discontinuity, which is as big as a discontinuity can be in the context of the theoretical expectation of arbitrarily large ships when the hyperbola reaches its asymptote in 11 years.22 \nFigure 6: The SS Great Eastern23\nIt is possible that there were other civilian ships prior to the Great Eastern that were also big, making the Great Eastern not a discontinuity. However we think this is unlikely. The RMS Persia, launched only three years prior to the Great Eastern, was the ‘largest’ ship in the world at that time24, and yet was only slightly bigger than the biggest military ships, in terms of tonnage (BOM). (See spreadsheet). If in the intervening two or three years a larger ship appeared, we do not know of it. \nUsing displacement, recorded on Wikipedia as 32,160 tons, and again assuming that our Royal Navy dataset is a good proxy for all ships, we also get a large discontinuity– 407 years when compared to our previous exponential trend for Royal Navy ships. (See spreadsheet). \n\nFigure 7: Ship weight (displacement) over time, now with the Great Eastern.\nWe do not know why the Great Eastern was so exceptional. It seems that it was innovative in several ways,25 and that it was designed by a pair of exceptional engineer-scientists, one of whom may have been influential to the design of the Warrior.26 However, the ship’s size might have also been the result of poor business sense, as it appears to have been a financial failure. 27\n Figure 8: Cutaway of one of the SS Great Eastern‘s engine rooms.28 \nNuño Sempere has also investigated the Great Eastern as a potential discontinuity to passenger and sailing vessel length trends.29 We learned of this after our own investigation, so have not measured these discontinuities by the same methods as those noted above, nor checked the data. Sempere notes that it took 41 years for the length trend excluding the Great Eastern to surpass it. Figures 9-11 show some of this data.\n \nFigure 9: Nuño Sempere’s data on passenger ship lengths over history. \nFigure 10: Nuño Sempere’s data on passenger ship lengths over recent history (note that points prior to the Great Eastern are not all-time records).\nFigure 11: Nuño Sempere’s data on sailing ship lengths over history, beams included. SS Great Eastern is the highest point. \nAcknowledgements\nThanks to bean for his help with this.  He blogs about naval history at navalgazing.net.\nNotes", "url": "https://aiimpacts.org/historic-trends-in-ship-size/", "title": "Historic trends in ship size", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-12-23T07:57:11+00:00", "paged_url": "https://aiimpacts.org/feed?paged=10", "authors": ["Katja Grace"], "id": "213c6d6d785e4a484ef8a4cf0690614d", "summary": []} {"text": "Effects of breech loading rifles on historic trends in firearm progress\n\nPublished Feb 7 2020\nWe do not know if breech loading rifles represented a discontinuity in military strength. 
They probably did not represent a discontinuity in fire rate.\nDetails\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nWe have not investigated this topic in depth. What follows are our initial impressions.\nBackground\nFrom Wikipedia1:\nA breechloader[1][2] is a firearm in which the cartridge or shell is inserted or loaded into a chamber integral to the rear portion of a barrel.Modern mass production firearms are breech-loading (though mortars are generally muzzle-loaded), except those which are intended specifically by design to be muzzle-loaders, in order to be legal for certain types of hunting. Early firearms, on the other hand, were almost entirely muzzle-loading. The main advantage of breech-loading is a reduction in reloading time – it is much quicker to load the projectile and the charge into the breech of a gun or cannon than to try to force them down a long tube, especially when the bullet fit is tight and the tube has spiral ridges from rifling. In field artillery, the advantages were similar: the crew no longer had to force powder and shot down a long barrel with rammers, and the shot could now tightly fit the bore (increasing accuracy greatly), without being impossible to ram home with a fouled barrel. \nTrends\nBreech loading rifles were suggested to us as a potential discontinuity in some measure of army strength, due to high fire rate and ability to be used while lying down. We did not have time to investigate this extensively, and have not looked for evidence for or against discontinuities in military strength overall. That said, the reading we have done does not suggest any such discontinuities. \nWe briefly looked for evidence of discontinuity in firing rate, since firing rate seemed to be a key factor of any advantage in military strength.\nFiring rate\nUpon brief review it seems unlikely to us that breech loading rifles represented a discontinuity in firing rate alone. Revolvers developed in parallel with breech-loading rifles, and appear to have had similar or higher rates of fire. This includes revolver rifles, which (being rifles) appear to be long-ranged enough to be comparable to muskets and breech-loading rifles.2\nThe best candidate we found for a breech loading rifle constituting a discontinuity in firing rate is the Ferguson Rifle, first used in 1777 in the American Revolutionary War.3 It was expensive and fragile, so it did not see widespread use;4 breech-loading rifles did not become standard in any army until the Prussian “Needle gun” in 1841 and the Norwegian “Kammerlader” in 1842.5 Both the Ferguson and the Dreyse needle gun could fire about six rounds a minute (sources vary),6 but by the time of the Ferguson well-trained British soldiers could fire muskets at about four rounds a minute.7 Moreover, apparently there are some expensive and fragile revolvers that predate the Ferguson, again suggesting that breech-loading rifles did not lead to a discontinuity in rate of fire.8 All in all, while we don’t have enough data to plot a trend, everything we’ve seen is consistent with continuous growth in firing rate.\nFigure 1: Diagram of how to load the Ferguson rifle9\nOther metrics\nIt is still possible that a combination of factors including fire rate contributed to a discontinuity in a military strength metric, or that a narrower metric including fire rate saw some discontinuity. 
\nThanks to Jesko Zimmerman for suggesting breech-loading rifles as a potential area of discontinuity.\nNotes", "url": "https://aiimpacts.org/effects-of-breech-loading-rifles-on-historic-trends-in-firearm-progress/", "title": "Effects of breech loading rifles on historic trends in firearm progress", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-12-23T07:53:06+00:00", "paged_url": "https://aiimpacts.org/feed?paged=10", "authors": ["Katja Grace"], "id": "0d298970be14af9d76491532b3d98e72", "summary": []} {"text": "Historic trends in transatlantic passenger travel\n\nThe speed of human travel across the Atlantic Ocean has seen at least seven discontinuities of more than ten years’ progress at past rates, two of which represented more than one hundred years’ progress at past rates: Columbus’ second journey, and the first non-stop transatlantic flight.\nDetails\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nTrends\nTransatlantic passenger crossing speed\nWe investigated fastest recorded passenger trips across the Atlantic Ocean over time. By ‘passenger’ we mean that any human made the crossing, or could have done.\nWe look for fastest speeds of real historic systems that could have with high probability delivered a live person across the Atlantic Ocean. We do not require that a person was actually sent by the method in question, though in fact all of our records did involve a passenger traveling. \nWe generally use whatever route was actually taken (or supposed in an estimate), and do not attempt to infer faster speeds possible had an optimal route been taken (though note that because we are measuring speed rather than time to cross the Ocean, route length is adjusted for to a first approximation). \nData\nWe collated records of historic speeds to cross the Atlantic Ocean from online sources.1 These are available at the ‘Passenger’ tab of this spreadsheet, and are shown in Figure 1 below. We have not verified this data.\nDetailed overview of data\nWe collected some data on speeds of drifting across the Atlantic Ocean and Viking ship speeds as evidence about the previous trend, but do not look for discontinuities until Columbus’ trips, for which relatively detailed descriptions are available. \nBetween then and 1841 the fastest records of transatlantic crossings we know of come from Slavevoyages.org‘s database of over thirty six thousand voyages made by slave ships. We combined this data with distances between recorded ports for trips that might plausibly be fastest, to find speed records. This produced only three record trips. These were substantial outliers in speed, which suggests to us that those records may have been driven by error, or may have involved different types of ship or circumstances to the others. The latter explanation would suggest that faster trips were likely made for purposes other than slave transport, meaning that these slave trips were unlikely to represent discontinuities in crossing speed across all types of ship. Given this, and that we do not have data for other types of ship at that time, we do not measure discontinuities during this period. We do include these ships to estimate the longer term trend, for measuring later discontinuities. The existence of later discontinuities does not appear to be sensitive to whether we include outlier slave ships in the historic trend, or replace them with more credible slower slave ships.\nFrom 1841 to 1909 all of our records are from Wikipedia’s page, Blue Riband. 
That page describes the Blue Riband as ‘an unofficial accolade given to the passenger liner crossing the Atlantic Ocean in regular service with the record highest speed’. It appears that this title was sought after, and the records during that time are dense, so this part of the dataset is probably relatively accurate and complete for passenger steam ships. The main potential gap in this data is that we cannot be sure there are not other types of boat at the time that traveled faster than passenger steam ships.\nFrom the first flight in 1919, speed records were held by planes. We found these in a variety of places, and we judged the data to be relatively complete when we ceased to find new records with moderate searching.\nWe are particularly interested in avoiding missing data just before apparent discontinuities, since continuous progress may look discontinuous if data is missing. There is a fifteen year gap before the Concorde discontinuity in 1973 where we didn’t find any records. However we note that a record for fastest subsonic Atlantic crossing set in 1979 was substantially slower than the Concorde. This means that if there were no other supersonic transatlantic crossings prior to the Concorde, the Concorde must have been substantially faster than the previous record even if we were missing some data. For instance if we were missing a 1965 record as fast as the 1979 record (which might make sense, since the 1979 record was set by a 1965 aircraft), then the Concorde would still be a discontinuity of around twenty years. We could not find other supersonic transatlantic crossings, but cannot rule them out.\nFigure 1: Historical progress in passenger travel across the Atlantic\nFigure 2: Historical progress in passenger travel across the Atlantic, since 1730\nDiscontinuity measurement\nWe measure discontinuities by comparing progress made at a particular time to the past trend. For this purpose, we treat the past trend at any given point as exponential or linear depending on apparent fit, and judge a new trend to have begun when the recent trend has diverged sufficiently from the longer term trend. See our spreadsheet, tab ‘Passenger’ to view the trends we break this data into, and our methodology page for details on how to interpret our sheets and how we divide data into trends.\nGiven these judgments about past progress, there were seven greater than 10-year discontinuities during the periods that we looked at, summarized in the following table.2 Two of them were large (more than one hundred years of progress at previous rate).\nDate | Mode of transport | Knots | Discontinuity size (years progress at past rate)\n1493 | Columbus’ second voyage | 5.8 | 1465\n1884 | Oregon (steamship) | 18.6 | 10\n1919 | WWI Bomber (first non-stop transatlantic flight) | 106 | 351\n1938 | Focke-Wulf Fw 200 Condor | 174 | 19\n1945 | Lockheed Constellation | 288 | 25\n1973 | Concorde | 1035 | 19\n1974 | SR-71 | 1569 | 21\nIn addition to the sizes of these discontinuities in years, we have tabulated a number of other potentially relevant metrics here.3\nDiscussion of causes\nThe first measured discontinuity comes from Columbus’ second voyage being much quicker than his first.
We expect this is for non-technological reasons, such as noise in crossing times (such that if there had been a longer history of crossing, Columbus’ first voyage would not have been record-setting), Columbus’ crew benefiting from experience, and the second voyage being intended to reach its destination rather than doing so accidentally.\nThe largest discontinuity we noted (352 years at previous rates) came from the first non-stop transatlantic flight, in 1919.4 This represented a relatively fundamental change in the means of crossing the Atlantic, supporting the hypothesis that discontinuities tend to be associated with more fundamental technological progress.\nWe have not investigated the significance of the developments underlying the other smaller discontinuities.\nDuring the Blue Riband period, attention appears to have been given to Atlantic crossing speed in particular, suggesting that more effort may have been directed to this metric then. During the later era of flight, record Atlantic crossing time appears to have been less of a goal.5 This, in combination with the much more incremental progress in the earlier era, weakly supports the hypothesis that discontinuities are associated with metrics that receive less attention.\nNotes", "url": "https://aiimpacts.org/historic-trends-in-transatlantic-passenger-travel/", "title": "Historic trends in transatlantic passenger travel", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-12-05T00:07:19+00:00", "paged_url": "https://aiimpacts.org/feed?paged=10", "authors": ["Katja Grace"], "id": "c8a5869efb04107ffb84043c0e13753a", "summary": []} {"text": "Robin Hanson on the futurist focus on AI\n\nBy Asya Bergal, 13 November 2019\nRobin Hanson\nRobert Long and I recently talked to Robin Hanson—GMU economist, prolific blogger, and longtime thinker on the future of AI—about the amount of futurist effort going into thinking about AI risk.\nIt was noteworthy to me that Robin thinks human-level AI is a century, perhaps multiple centuries away— much longer than the 50-year number given by AI researchers. I think these longer timelines are the source of a lot of his disagreement with the AI risk community about how much of futurist thought should be put into AI.\nRobin is particularly interested in the notion of ‘lumpiness’– how much AI is likely to be furthered by a few big improvements as opposed to a slow and steady trickle of progress. If, as Robin believes, most academic progress and AI in particular are not likely to be ‘lumpy’, he thinks we shouldn’t think things will happen without a lot of warning.\nThe full recording and transcript of our conversation can be found here.", "url": "https://aiimpacts.org/robin-hanson-on-the-futurist-focus-on-ai/", "title": "Robin Hanson on the futurist focus on AI", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-11-13T21:40:42+00:00", "paged_url": "https://aiimpacts.org/feed?paged=10", "authors": ["Asya Bergal"], "id": "2a9b4a9f8be04f73c38a33d4669bace5", "summary": []} {"text": "Conversation with Robin Hanson\n\nAI Impacts talked to economist Robin Hanson about his views on AI risk and timelines. With his permission, we have posted and transcribed this interview.\nParticipants\nRobin Hanson — Associate Professor of Economics, George Mason UniversityAsya Bergal – AI ImpactsRobert Long – AI Impacts\nSummary\nWe spoke with Robin Hanson on September 5, 2019. 
Here is a brief summary of that conversation:
- Hanson thinks that now is the wrong time to put a lot of effort into addressing AI risk:
  - We will know more about the problem later, and there’s an opportunity cost to spending resources now vs later, so there has to be a compelling reason to spend resources now instead.
  - Hanson is not compelled by existing arguments he’s heard that would argue that we need to spend resources now:
    - Hanson famously disagrees with the theory that AI will appear very quickly and in a very concentrated way, which would suggest that we need to spend resources now because we won’t have time to prepare.
    - Hanson views the AI risk problem as essentially continuous with existing principal agent problems, and disagrees that the key difference—the agents being smarter—should clearly worsen such problems.
    - Hanson thinks that we will see concrete signatures of problems before it’s too late; he is skeptical that there are big things that have to be coordinated ahead of time.
    - Relatedly, he thinks useful work anticipating problems in advance usually happens with concrete designs, not with abstract descriptions of systems.
  - Hanson thinks we are still too far away from AI for field-building to be useful.
- Hanson thinks AI is probably at least a century, perhaps multiple centuries away:
  - Hanson thinks the mean estimate for human-level AI arriving is long, and he thinks AI is unlikely to be ‘lumpy’ enough to happen without much warning:
    - Hanson is interested in how ‘lumpy’ progress in AI is likely to be: whether progress is likely to come in large chunks or in a slower and steadier stream.
    - Measured in terms of how much a given paper is cited, academic progress is not lumpy in any field.
    - The literature on innovation suggests that innovation is not lumpy: most innovation is lots of little things, though once in a while there are a few bigger things.
    - From an outside view perspective, the current AI boom does not seem different from previous AI booms.
    - We don’t have a good sense of how much research needs to be done to get to human-level AI.
    - If we don’t expect progress to be particularly lumpy, and we don’t have a good sense of exactly how close we are, we have good reason to think we are not, e.g., five years away rather than halfway.
  - Hanson thinks we shouldn’t believe it when AI researchers give 50-year timescales:
    - Rephrasing the question in different ways, e.g. “When will most people lose their jobs?”, causes people to give different timescales.
    - People consistently give overconfident estimates when they’re estimating things that are abstract and far away.
- Hanson thinks AI risk takes up far too large a fraction of people thinking seriously about the future.
  - Hanson thinks more futurists should be exploring other future scenarios, roughly proportionally to how likely they are, with some kicker for extremity of consequences.
  - Hanson doesn’t think that AI is that much worse than other future scenarios in terms of how much future value is likely to be destroyed.
- Hanson thinks the key to intelligence is having many not-fully-general tools:
  - Most of the value in tools is in more specific tools, and we shouldn’t expect intelligence innovation to be different.
  - Academic fields are often simplified to simple essences, but real-life things like biological organisms and the industrial world progress via lots of little things, and we should expect intelligence to be more similar to the latter examples.
  - Hanson says the literature on human uniqueness suggests cultural evolution and language abilities came from several modest brain improvements, not clear differences in brain architecture.
- Hanson worries that having so many people publicly worrying about AI risk before it is an acute problem will mean it is taken less seriously when it is, because the public will have learned to think of such concerns as erroneous fear mongering.
- Hanson would be interested in seeing more work on the following things:
  - Seeing examples of big, lumpy innovations that made a big difference to the performance of a system. This could change Hanson’s view of intelligence.
    - In particular, he’d be influenced by evidence for important architectural differences in the brains of humans vs. primates.
  - Tracking of the automation of U.S. jobs over time as a potential proxy for AI progress.
- Hanson thinks there’s a lack of engagement with critics from people concerned about AI risk.
  - Hanson is interested in seeing concrete outside-view models people have for why AI might be soon.
  - Hanson is interested in proponents of AI risk responding to the following questions:
    - Setting aside everything you know except what this looks like from the outside, would you predict AGI happening soon?
    - Should reasoning around AI risk arguments be compelling to outsiders outside of AI?
    - What percentage of people who agree with you that AI risk is big agree for the same reasons that you do?
- Hanson thinks even if we tried, we wouldn’t now be able to solve all the small messy problems that insects can solve, indicating that it’s not sufficient to have insect-level amounts of hardware.
  - Hanson thinks that AI researchers might argue that we can solve the core functionalities of insects, but Hanson thinks that their intelligence is largely in being able to do many small things in complicated environments, robustly.
Small sections of the original audio recording have been removed. The corresponding transcript has been lightly edited for concision and clarity.
Audio

Transcript
Asya Bergal:        Great. Yeah. I guess to start with, the proposition we’ve been asking people to weigh in on is whether it’s valuable for people to be expending significant effort doing work that purports to reduce the risk from advanced AI. I’d be curious for your take on that question, and maybe a brief description of your reasoning there.
Robin Hanson:       Well, my highest level reaction is to say whatever effort you’re putting in, probably now isn’t the right time.
When is the right time is a separate question from how much effort, and in what context. AI’s going to be a big fraction of the world when it shows up, so it certainly at some point is worth a fair bit of effort to think about and deal with. It’s not like you should just completely ignore it.\nYou should put a fair bit of effort into any large area of life or large area of the world, anything that’s big and has big impacts. The question is just really, should you be doing it way ahead of time before you know much about it at all, or have much concrete examples, know the– even structure or architecture, how it’s integrated in the economy, what are the terms of purchase, what are the terms of relationships.\nI mean, there’s just a whole bunch of things we don’t know about. That’s one of the reasons to wait–because you’ll know more later. Another reason to wait is because of the opportunity cost of resources. If you save the resources until later, you have more to work with. Those considerations have to be weighed against some expectation of an especially early leverage, or an especially early choice point or things like that.\nFor most things you expect that you should wait until they show themselves in a substantial form before you start to envision problems and deal with them. But there could be exceptions. Mostly it comes down to arguments that this is an exception.\nAsya Bergal:        Yeah. I think we’re definitely interested in the proposition that you should put in work now as opposed to later. If you’re familiar with the arguments that this might be an exceptional case, I’d be curious for your take on those and where you disagree .\nRobin Hanson:       Sure. As you may know, I started involving in this conversation over a decade ago with my co-blogger Eliezer Yudkowsky, and at that point, the major argument that he brought up was something we now call the Foom Argument. \nThat argument was a very particular one, that this would appear under a certain trajectory, under a certain scenario. That was a scenario where it would happen really fast, would happen in a very concentrated place in time, and basically once it starts, it happens so fast, you can’t really do much about it after that point. So the only chance you have is before that point.\nBecause it’s very hard to predict when or where, you’re forced to just do stuff early, because you’re never sure when is how early. That’s a perfectly plausible argument given that scenario, if you believe that it shows up in one time and place all of a sudden, fully formed and no longer influenceable. Then you only have the shot before that moment. If you are very unsure when and where that moment would be, then you basically just have to do it now.\nBut I was doubting that scenario. I was saying that that wasn’t a zero probability scenario, but I was thinking it was overestimated by him and other people in that space. I still think many people overestimate the probability of that scenario. Over time, it seems like more people have distanced themselves from that scenario, yet I haven’t heard as many substitute rationales for why we should do any of this stuff early.\nI did a recent blog post responding to a Paul Christiano post and my title was Agency Failure AI Apocalypse?, and so at least I saw an argument there that was different from the Foom argument. 
It was an argument that you’d see a certain kind of agency failure with AI, and that because of that agency failure, it would just be bad.\nIt wasn’t exactly an argument that we need to do effort early, though. Even that argument wasn’t per se a reason why you need to do stuff way ahead of time. But it was an argument of why the consequences might be especially bad I guess, and therefore deserving of more investment. And then I critiqued that argument in my post saying he was basically saying the agency problem, which is a standard problem in all human relationships and all organizations, is exasperated when the agent is smart.\nAnd because the AI is, by assumption, very smart, then it’s a very exasperated agency problem; therefore, it goes really bad. I said, “Our literature on the agency problem doesn’t say that it’s a worse problem when they’re smart.” I just denied that basic assumption, pointing to what I’ve known about the agency literature over a long time. Basically Paul in his response said, “Oh, I wasn’t saying there was an agency problem,” and then I was kind of baffled because I thought that was the whole point of his post that I was summarizing.\nIn any case, he just said he was worried about wealth redistribution. Of course, any large social change has the potential to produce wealth redistribution, and so I’m still less clear why this change would be a bigger wealth distribution consequence than others, or why it would happen more suddenly, or require a more early effort. But if you guys have other particular arguments to talk about here, I’d love to hear what you think, or what you’ve heard are the best arguments aside from Foom.\nAsya Bergal:        Yeah. I’m at risk of putting words in other people’s mouth here, because we’ve interviewed a bunch of people. I think one thing that’s come up repeatedly is-\nRobin Hanson:       You aren’t going to name them.\nAsya Bergal:        Oh, I definitely won’t give a name, but-\nRobin Hanson:       I’ll just respond to whatever-\nAsya Bergal: Yeah, just prefacing this, this might be a strawman of some argument. One thing people are sort of consistently excited about is– they use the term ‘field building,’ where basically the idea is: AI’s likely to be this pretty difficult problem and if we do think it’s far away, there’s still sort of meaningful work we can do in terms of setting up an AI safety field with an increasing number of people who have an increasing amount of–the assumption is useful knowledge–about the field.\nThen sort of there’s another assumption that goes along with that that if we investigate problems now, even if we don’t know the exact specifics of what AGI might look like, they’re going to share some common sub problems with problems that we may encounter in the future. I don’t know if both of those would sort of count as field building in people’s lexicon.\nRobin Hanson:       The example I would give to make it concrete is to imagine in the year 1,000, tasking people with dealing with various of our major problems in our society today. Social media addiction, nuclear war, concentration of capital and manufacturing, privacy invasions by police, I mean any major problem that you could think of in our world today, imagine tasking people in the year 1,000 with trying to deal with that problem.\nNow the arguments you gave would sound kind of silly. We need to build up a field in the year 1,000 to study nuclear annihilation, or nuclear conflict, or criminal privacy rules? 
I mean, you only want to build up a field just before you want to use a field, right? I mean, building up a field way in advance is crazy. You still need some sort of argument that we are near enough that the timescale on which it takes to build a field will match roughly the timescale until we need the field. If it’s a factor of ten off or a thousand off, then that’s crazy.\nRobert Long:        Yeah. This leads into a specific question I was going to ask about your views. You’ve written based on AI practitioners estimates of how much progress they’ve been making that an outside view calculation suggests we probably have at least a century to go, if maybe a great many centuries at the current rates of progress in AI. That was in 2012. Is that still roughly your timeline? Are there other things that go into your timelines? Basically in general what’s your current AI timeline?\nRobin Hanson:       Obviously there’s a median estimate and a mean estimate, and then there’s a probability per-unit-time estimate, say, and obviously most everyone agrees that the median or mean could be pretty long, and that’s reasonable. So they’re focused on some, “Yes, but what’s the probability of an early surprise.”\nThat isn’t directly addressed by that estimate, of course. I mean, you could turn that into a per-unit time if you just thought it was a constant per-unit time thing. That would, I think, be overly optimistic. That would give you too high an estimate I think. I have a series of blog posts, which you may have seen on lumpiness. A key idea here would be we’re getting AI progress over time, and how lumpy it is, is extremely directly relevant to these estimates.\nFor example, if it was maximally lumpy, if it just shows up at one point, like the Foom scenario, then in that scenario, you kind of have to work ahead of time because you’re not sure when. There’s a substantial… if like, the mean is two centuries, but that means in every year there’s a 1-in-200 chance. There’s a half-a-percent chance next year. Half-a-percent is pretty high, I guess we better do something, because what if it happens next year?\nOkay. I mean, that’s where extreme lumpiness goes. The less lumpy it is, then the more that the variance around that mean is less. It’s just going to take a long time, and it’ll take 10% less or 10% more, but it’s basically going to take that long. The key question is how lumpy is it reasonable to expect these sorts of things. I would say, “Well, let’s look at how lumpy things have been. How lumpy are most things? Even how lumpy has computer science innovation been? Or even AI innovation?”\nI think those are all relevant data sets. There’s general lumpiness in everything, and lumpiness of the kinds of innovation that are closest to the kinds of innovation postulated here. I note that one of our best or most concrete measures we have of lumpiness is citations. That is, we can take for any research idea, how many citations the seminal paper produces, and we say, “How lumpy are citations?”\nInterestingly, citation lumpiness seems to be field independent. Not just time independent, but field independent. Seems to be a general feature of academia, which you might have thought lumpiness would vary by field, and maybe it does in some more fundamental sense, but as it’s translated into citations, it’s field independent. And of course, it’s not that lumpy, i.e. 
most of the distribution of citations is papers with few citations, and the few papers that have the most citations constitute a relatively small fraction of the total citations.\nThat’s what we also know for other kinds of innovation literature. The generic innovation literature says that most innovation is lots of little things, even though once in a while there are a few bigger things. For example, I remember there’s this time series of the best locomotive at any one time. You have that from 1800 or something. You can just see in speed, or energy efficiency, and you see this point—.\nIt’s not an exactly smooth graph. On the other hand, it’s pretty smooth. The biggest jumps are a small fraction of the total jumpiness. A lot of technical, social innovation is, as we well understand, a few big things, matched with lots of small things. Of course, we also understand that big ideas, big fundamental insights, usually require lots of complementary, matching, small insights to make it work.\nThat’s part of why this trajectory happens this way. That smooths out and makes more effectively less lumpy the overall pace of progress in most areas. It seems to me that the most reasonable default assumption is to assume future AI progress looks like past computer science progress and even past technical progress in other areas. I mean, the most concrete example is AI progress.\nI’ve observed that we’ve had these repeated booms of AI concern and interest, and we’re in one boom now, but we saw a boom in the 90s. We saw a boom in the 60s, 70s, we saw a boom in the 30s. In each of these booms, the primary thing people point to is, “Look at these demos. These demos are so cool. Look what they can do that we couldn’t do before.” That’s the primary evidence people tend to point to in all of these areas.\nThey just have concrete examples that they were really impressed by. No doubt we have had these very impressive things. The question really is, for example, well, one question is, do we have any evidence that now is different? As opposed to evidence that there will be a big difference in the future. So if you’re asking, “Is now different,” then you’d want to ask, “Are the signs people point to now, i.e. AlphaGo, say, as a dramatic really impressive thing, how different are they as a degree than the comparable things that have happened in the past?”\nThe more you understand the past and see it, you saw how impressed people were back in the past with the best things that happened then. That suggests to me that, I mean AlphaGo is say a lump, I’m happy to admit it looks out of line with a smooth attribution of equal research progress to all teams at all times. But it also doesn’t look out of line with the lumpiness we’ve seen over the last 70 years, say, in computer innovation.\nIt’s on trajectory. So if you’re going to say, “And we still expect that same overall lumpiness for the next 70 years, or the next 700,” then I’d say then it’s about how close are we now? If you just don’t know how close you are, then you’re still going to end up with a relatively random, “When do we reach this threshold where it’s good enough?” If you just had no idea how close you were, how much is required.\nThe more you think you have an idea of what’s required and where you are, the more you can ask how far you are. 
Then if you say you’re only halfway, then you could say, “Well, if it’s taken us this many years to get halfway,” then the odds that we’re going to get all the rest of the way in the next five years are much less than you’d attribute to just randomly assigning say, “It’s going to happen in 200 years, therefore it’ll be one in two hundred per year.” I do think we’re in more of that sort of situation. We can roughly guess that we’re not almost there.\nRobert Long:        Can you say a little bit more about how we should think about this question of how close we are?\nRobin Hanson:       Sure. The best reliable source on that would be people who have been in this research area for a long time. They’ve just seen lots of problems, they’ve seen lots of techniques, they better understand what it takes to do many hard problems. They have a better sense of, no, they have a good sense of where we are, but ultimately where we have to go.\nI think when you don’t understand these things as well by theory or by experience, et cetera, you’re more tempted to look at something like AlphaGo and say, “Oh my God, we’re almost there.” Because you just say, “Oh, look.” You tend more to think, “Well, if we can do human level anywhere, we can do it everywhere.” That was the initial— what people in the 1960s said, “Let’s solve chess, and if we can solve chess, certainly we can do anything.”\nI mean, something that can do chess, it’s got to be smart. But they just didn’t fully appreciate the range of tasks, and problems, and problem environments, that you need to deal with. Once you understand the range of possible tasks, task environments, obstacles, issues, et cetera, once you’ve been in AI for a long time and have just seen a wide range of those things, then you have a more of a sense for “I see, AlphaGo, that’s a good job, but let’s list all these simplifying assumptions you made here that made this problem easier”, and you know how to make that list.\nThen you’re not so much saying, “If we can do this, we can do anything.” I think pretty uniformly, the experienced AI researchers have said, “We’re not close.” I mean I’d be very surprised if you interviewed any person with a more broad range of AI experience who said, “We’re almost there. If we can do this one more thing we can do everything.”\nAsya Bergal:        Yeah. I might be wrong about this–my impression is that your estimate of at least a century or maybe centuries might still be longer than a lot of researchers–and this might be because there’s this trend where people will just say 50 years about almost any technology or something like that.\nRobin Hanson:       Sure. I’m happy to walk through that. That’s the logic of that post of mine that you mentioned. It was exactly trying to confront that issue. So I would say there is a disconnect to be addressed. The people you ask are not being consistent when you ask similar things in different ways. The challenge is to disentangle that.\nI’m happy to admit when you ask a lot of people how long it will take, they give you 40, 50 year sort of timescales. Absolutely true. Question is, should you believe it? One way to check whether you should believe that is to see how they answer when you ask them different ways. I mean, as you know, I guess one of those surveys interestingly said, “When will most people lose their jobs?”\nThey gave much longer time scales than when will computers be able to do most everything, like a factor of two or something. That’s kind of bothersome. 
That’s a pretty close consistency relation. If computers can do everything cheaper, then they will, right? Apparently not. But I would think that, I mean, I’ve done some writing on this psychology concept called construal-level theory, which just really emphasizes how people have different ways they think about things conceived abstractly and broadly versus narrowly.\nThere’s a consistent pattern there, which is consistent with the pattern we are seeing here, that is in the far mode where you’re thinking abstractly and broadly, we tend to be more confident in simple, abstract theories that have simple predictions and you tend to neglect messy details. When you’re in the near mode and focus on a particular thing, you see all the messy difficulties.\nIt’s kind of the difference between will you have a happy marriage in life? Sure. This person you’re in a relationship with? Will that work in the next week? I don’t know. There’s all the things to work out. Of course, you’ll only have a happy relationship over a lifetime if every week keeps going okay for the rest of your life. I mean, if enough weeks do. That’s a near/far sort of distinction.\nWhen you ask people about AI in general and what time scale, that’s a very far mode sort of version of the question. They are aggregating, and they are going on very aggregate sort of theories in their head. But if you take an AI researcher who has been staring at difficult problems in their area for 20 years, and you ask them, “In the problems you’re looking at, how far have we gotten since 20 years ago?,” they’ll be really aware of all the obstacles they have not solved, succeeded in dealing with that, all the things we have not been able to do for 20 years.\nThat seems to me a more reliable basis for projection. I mean, of course we’re still in a similar regime. If the regime would change, then past experience is not relevant. If we’re in a similar regime of the kind of problems we’re dealing with and the kind of tools and the kind of people and the kind of incentives, all that sort of thing, then that seems to be much more relevant. That’s the point of that survey, and that’s the point of believing that survey somewhat more than the question asked very much more abstractly.\nAsya Bergal:        Two sort of related questions on this. One question is, how many years out do you think it is important to start work on AI? And I guess, a related question is, now even given that it’s super unlikely, what’s the ideal number of people working about or thinking about this?\nRobin Hanson:       Well, I’ve said many times in many of these posts that it’s not zero at any time. That is, whenever there’s a problem that it isn’t the right time to work on, it’s still the right time to have some people asking if it’s the right time to work on it. You can’t have people asking a question unless they’re kind of working on it. They’d have to be thinking about it enough to be able to ask the question if it’s the right time to work on it.\nThat means you always need some core of people thinking about it, at least, in related areas such they are skilled enough to be able to ask the question, “Hey, what do you think? Is this time to turn and work on this area?” It’s a big world, and eventually this is a big thing, so hey, a dozen could be fine. Given how random academia of course and the intellectual world is, the intellectual world is not at all optimized in terms of number of people per topic. 
It’s really not.\nRelative to that standard, you could be not unusually misallocated if you were still pretty random about it. For that it’s more just: for the other purposes that academic fields exist and perpetuate themselves, how well is it doing for those other purposes? I would basically say, “Academia’s mainly about showing people credentialing impressiveness.” There’s all these topics that are neglected because you can’t credential and impress very well via them. If AI risk was a topic that happened to be unusually able to be impressive with, then it would be an unusually suitable topic for academics to work on.\nNot because it’s useful, just because that’s what academics do. That might well be true for ways in which AI problems brings up interesting new conceptual angles that you could explore, or pushes on concepts that you need to push on because they haven’t been generalized in that direction, or just doing formal theorems that are in a new space of theorems.\nLike pushing on decision theory, right? Certainly there’s a point of view from which decision theory was kind of stuck, and people weren’t pushing on it, and then AI risk people pushed on some dimensions of decision theory that people hadn’t… people had just different decision theory, not because it’s good for AI. How many people, again, it’s very sensitive to that, right? You might justify 100 people if it not only was about AI risk, but was really more about just pushing on these other interesting conceptual dimensions.\nThat’s why it would be hard to give a very precise answer there about how many. But I actually am less concerned about the number of academics working on it, and more about sort of the percentage of altruistic mind space it takes. Because it’s a much higher percentage of that than it is of actual serious research. That’s the part I’m a little more worried about. Especially the fraction of people thinking about the future. I think of, just in general, very few people seem to be that willing to think seriously about the future. As a percentage of that space, it’s huge.\nThat’s where I most think, “Now, that’s too high.” If you could say, “100 people will work on this as researchers, but then the rest of the people talk and think about the future.” If they can talk and think about something else, that would be a big win for me because there are tens and hundreds of thousands of people out there on the side just thinking about the future and so, so many of them are focused on this AI risk thing when they really can’t do much about it, but they’ve just told themselves that it’s the thing that they can talk about, and to really shame everybody into saying it’s the priority. Hey, there’s other stuff.\nNow of course, I completely have this whole other book, Age of Em, which is about a different kind of scenario that I think doesn’t get much attention, and I think it should get more attention relative to a range of options that people talk about. Again, the AI risk scenario so overwhelmingly sucks up that small fraction of the world. So a lot of this of course depends on your base. If you’re talking about the percentage of people in the world working on these future things, it’s large of course.\nIf you’re talking percentage of people who are serious researchers in AI risk relative to the world, it’s tiny of course. Obviously. 
If you’re talking about the percentage of people who think about AI risk, or talk about it, or treat it very seriously, relative to people who are willing to think and talk seriously about the future, it’s this huge thing.\nRobert Long:        Yeah. That’s perfect. I was just going to … I was already going to ask a follow-up just about what share of, I don’t know, effective altruists who are focused on affecting the long-term future do you think it should be? Certainly you think it should be far less than this, is what I’m getting there?\nRobin Hanson:       Right. First of all, things should be roughly proportional to probability, except with some kicker for extremity of consequences. But I think you don’t actually know about extremity of consequences until you explore a scenario. Right from the start you should roughly write down scenarios by probability, and then devote effort in proportion to the probability of scenarios.\nThen once you get into a scenario enough to say, “This looks like a less extreme scenario, this looks like a more extreme scenario,” at that point, you might be justified in adjusting some effort, in and out of areas based on that judgment. But that has to be a pretty tentative judgment so you can’t go too far there, because until you explore a scenario a lot, you really don’t know how extreme… basically it’s about extreme outcomes times the extreme leverage of influence at each point along the path multiplied by each other in hopes that you could be doing things thinking about it earlier and producing that outcome. That’s a lot of uncertainty to multiply though to get this estimate of how important a scenario is as a leverage to think about.\nRobert Long:        Right, yeah. Relatedly, I think one thing that people say about why AI should take up a large share is that there’s the sense that maybe we have some reason to think that AI is the only thing we’ve identified so far that could plausibly destroy all value, all life on earth, as opposed to other existential risks that we’ve identified. I mean, I can guess, but you may know that consideration or that argument.\nRobin Hanson:       Well, surely that’s hyperbole. Obviously anything that kills everybody destroys all value that arises from our source. Of course, there could be other alien sources out there, but even AI would only destroy things from our source relative to other alien sources that would potentially beat out our AI if it produces a bad outcome. Destroying all value is a little hyperbolic, even under the bad AI scenario.\nI do think there’s just a wide range of future scenarios, and there’s this very basic question, how different will our descendants be, and how far from our values will they deviate? It’s not clear to me AI is that much worse than other scenarios in terms of that range, or that variance. I mean, yes, AIs could vary a lot in whether they do things that we value or not, but so could a lot of other things. There’s a lot of other ways.\nSome people, I guess some people seem to think, “Well, as long as the future is human-like, then humans wouldn’t betray our values.” No, no, not humans. But machines, machines might do it. I mean, the difference between humans and machines isn’t quite that fundamental from the point of view of values. 
I mean, human values have changed enormously over a long time, we are now quite different in terms of our habits, attitudes, and values, than our distant ancestors.\nWe are quite capable of continuing to make huge value changes in many directions in the future. I can’t offer much assurance that because our descendants descended from humans that they would therefore preserve most of your values. I just don’t see that. To the extent that you think that our specific values are especially valuable and you’re afraid of value drift, you should be worried. I’ve written about this: basically in the Journal of Consciousness Studies I commented on a Chalmers paper, saying that generically through history, each generation has had to deal with the fact that the next and coming generations were out of their control.\nNot just that, they were out of their control and their values were changing. Unless you can find someway to put some bound on that sort of value change, you’ve got to model it as a random walk; you could go off to the edge if you go off arbitrarily far. That means, typically in history, people if they thought about it, they’d realize we got relatively little control about where this is all going. And that’s just been a generic problem we’ve all had to deal with, all through history, AI doesn’t fundamentally change that fact, people focusing on that thing that could happen with AI, too.\nI mean, obviously when we make our first AIs we will make them corresponding to our values in many ways, even if we don’t do it consciously, they will be fitting in our world. They will be agents of us, so they will have structures and arrangements that will achieve our ends. So then the argument is, “Yes, but they could drift from there, because we don’t have a very solid control mechanism to make sure they don’t change a lot, then they could change a lot.”\nThat’s very much true, but that’s still true for human culture and their descendants as well, that they can also change a lot. We don’t have very much assurance. I think it’s just some people say, “Yeah, but there’s just some common human nature that’ll make sure it doesn’t go too far.” I’m not seeing that. Sorry. There isn’t. That’s not much of an assurance. When people can change people, even culturally, and especially later on when we can change minds more directly, start tinkering, start shared minds, meet more directly, or just even today we have better propaganda, better mechanisms of persuasion. We can drift off in many directions a long way.\nRobert Long:        This is sort of switching topics a little bit, but it’s digging into your general disagreement with some key arguments about AI safety. It’s about your views on intelligence. So you’ve written that there may well be no powerful general theories to be discovered revolutionizing AI, and this is related to your view that most everything we’ve learned about intelligence suggests that the key to smarts is having many not-fully-general tools. Human brains are smart mainly by containing many powerful, not-fully-general modules and using many modules to do each task.\nYou’ve written that these considerations are one of the main reasons you’re skeptical about AI. I guess the question is, can you think of evidence that might change your mind? I mean, the general question is just to dig in on this train of thought; so is there evidence that would change your mind about this general view of intelligence? 
And relatedly, why do you think that other people arrive at different views of what intelligence is, and why we could have general laws or general breakthroughs in intelligence?\nRobin Hanson:       This is closely related to the lumpiness question. I mean, basically you can not only talk about the lumpiness of changes in capacities, i.e., lumpiness in innovations. You can also talk about the lumpiness of tools in our toolkit. If we just look in industry, if we look in academia, if we look in education, just look in a lot of different areas, you will find robustly that most tools are more specific tools.\nMost of the value of tools–of the integral–is in more specific tools, and relatively little of it is in the most general tools. Again, that’s true in things you learn in school, it’s true about things you learn on the job, it’s true about things that companies learn that can help them do things. It’s true about nation advantages that nations have over other nations. Again, just robustly, if you just look at what do you know and how valuable is each thing, most of the value is in lots of little things, and relatively few are big things.\nThere’s a power law distribution with most of the small things. It’s a similar sort of lumpiness distribution to the lumpiness of innovation. It’s understandable. If tools have that sort of lumpy innovation, then if each innovation is improving a tool by some percentage, even a distribution percentage, most of the improvements will be in small things, therefore most of the improvements will be small.\nFew of the improvements will be a big thing, even if it’s a big improvement in a big thing, that’ll be still a small part of the overall distribution. So lumpiness in the size of tools or the size of things that we have as tools predicts that, in intelligence as well, most of the things that make you intelligent are lumpy little things. It comes down to, “Is intelligence different?”\nAgain, that’s also the claim about, “Is intelligence innovation different?” If, of course, you thought intelligence was fundamentally different in there being fewer and bigger lumps to find, then that would predict that in the future we would find fewer, bigger lumps, because that’s what there is to find. You could say, “Well, yes. In the past we’ve only ever found small lumps, but that’s because we weren’t looking at the essential parts of intelligence.”\nOf course, I’ll very well believe that related to intelligence, there are lots of small things. You might believe that there are also a few really big things, and the reason that in the past, computer science or education innovation hasn’t found many of them is that we haven’t come to the mother lode yet. The mother lode is still yet to be found. When we find it, boy it’ll be big. The belief, you’ll find that in intelligence innovation, is related to a belief that it exists, that it’s a thing to find, which we can relatedly believe that fundamentally, intelligence is simple.\nFundamentally, there’s some essential simplicity to it that when you find it, the pieces will be … each piece is big, because there aren’t very many pieces, and that’s implied by it being simple. It can’t be simple unless … if there’s 100,000 pieces, it’s not simple. If there’s 10 pieces, it could be simple, but then each piece is big. Then the question is, “What reason do you have to believe that intelligence is fundamentally simple?”\nI think, in academia, we often try to find simple essence in various fields. 
So there’d be the simple theory of utilitarianism, or the simple theory of even physical particles, or simple theory of quantum mechanics, or … so if your world is thinking about abstract academic areas like that, then you might say, “Well, in most areas, the essence is a few really powerful, simple ideas.”\nYou could kind of squint and see academia in that way. You can’t see the industrial world that way. That is, we have much clearer data about the world of biological organisms competing, or firms competing, or even nations competing. We have much more solid data about that to say, “It’s really lots of little things.” Then it becomes, you might say, “Yeah, but intelligence. That’s more academic.” Because your idea of intelligence is sort of intrinsically academic, that you think of intelligence as the sort of thing that best exemplary happens in the best academics.\nIf your model is ordinary stupid people, they have a stupid, poor intelligence, but they just know a lot, or have some charisma, or whatever it is, but Von Neumann, look at that. That’s what real intelligence is. Von Neumann, he must’ve had just five things that were better. He couldn’t have been 100,000 things that were better, had to be five core things that were better, because you see, he’s able to produce these very simple, elegant things, and he was so much better, or something like that.\nI actually do think this account is true, that many people have these sort of core emotional attitudinal relationships to the concept of intelligence. And that colors a lot of what they think about intelligence, including about artificial intelligence. That’s not necessarily tied to sort of the data we have on variations, and productivity, and performance, and all that sort of thing. It’s more sort of essential abstract things. Certainly if you’re really into math, in the world of math there are core axioms or core results that are very lumpy and powerful.\nOf course even there, again, distribution of math citations follows exactly the same distribution as all the other fields. By the citation measure, math is not more lumpy. But still, when you think about math, you like to think about these core, elegant, powerful results. Seeing them as the essence of it all.\nRobert Long:        So you mentioned Von Neumann and people have a tendency to think that there must be some simple difference between Von Neumann and us. Obviously the other comparison people make which you’ve written about is the comparison between us as a species and other species. I guess, can you say a little bit about how you think about human uniqueness and maybe how that influences your viewpoint on intelligence?\nRobin Hanson:       Sure. That, we have literatures that I just defer to. I mean, I’ve read enough to think I know what they say and that they’re relatively in agreement and I just accept what they say. So what the standard story is then, humans’ key difference was an ability to support cultural evolution. That is, human mind capacities aren’t that different from a chimpanzee’s overall, and an individual [human] who hasn’t had the advantage of cultural evolution isn’t really much better.\nThe key difference is that we found a way to accumulate innovations culturally. Now obviously there’s some difference in the sense that it does seem hard, even though we’ve tried today to teach culture to chimps, we’ve also had some remarkable success. 
But still it’s plausible that there’s something they don’t have quite good enough yet that lets them do that, but then the innovations that made a difference have to be centered around that in some sense.
I mean, obviously most likely in a short period of time, a whole bunch of independent unusual things didn’t happen. More likely there was one biggest thing that happened that was the most important. Then the question is what that is. We know lots of differences of course. This is the “what made humans different” game. There’s all these literatures about all these different ways humans were different. They don’t have hair on their skin, they walk upright, they have fire, they have language, blah, blah, blah.
The question is, “Which of these matter?” Because they can’t all be the fundamental thing that matters. Presumably, if they all happen in a short time, something was more fundamental that caused most of them. The question is, “What is that?” But it seems to me that the standard answer is right, it was cultural evolution. And then the question is, “Well, okay. But what enabled cultural evolution?” Language certainly seems to be an important element, although it also seems like humans, even before they had language, could’ve had some faster cultural evolution than a lot of other animals.
Then the question is, “How big a brain difference or structure difference would it take?” Then it seems like well, if you actually look at the mechanisms of cultural evolution, the key thing is sitting next to somebody else watching what they’re doing, trying to do what they’re doing. So that takes certain observation abilities, and it takes certain mirroring abilities, that is, the ability to just map what they’re doing onto what you’re doing. It takes sort of fine-grained motor control abilities to actually do whatever it is they’re doing.
Those seem like just relatively modest incremental improvements on some parameters, like chimps weren’t quite up to that. Humans could be more up to that. Even our language ability seems like, well, we have modestly different structured mouths that can more precisely control sounds and chimps don’t quite do that, so it’s understandable why they can’t make as many sounds as distinctly. The bottom line is that our best answer is it looks like there was a threshold passed, sort of ability supporting cultural evolution, which included the ability to watch people, the ability to mirror it, the ability to do it yourself, the ability to tell people through language or through more things like that.
It looks roughly like there was just a threshold passed, and that threshold allowed cultural evolution, and that’s allowed humans to take off. If you’re looking for some fundamental, architectural thing, it’s probably not there. In fact, of course people have said when you look at chimp brains and human brains in fine detail, you see pretty much the same stuff. It isn’t some big overall architectural change, we can tell that. This is pretty much the same architecture.
Looks like it’s some tools we are somewhat better at and plausibly those are the tools that allow us to do cultural evolution.
Robert Long:        Yeah. I think that might be it for my questions on human uniqueness.
Asya Bergal:        I want to briefly go back to, I think I sort of mentioned this question, but we didn’t quite address it. At what timescale do you think people–how far out do you think people should be starting maybe the field building stuff, or starting actually doing work on AI?
Maybe number of years isn’t a good metric for this, but I’m still curious for your take.\nRobin Hanson:       Well, first of all, let’s make two different categories of effort. One category of effort is actually solving actual problems. Another category of effort might be just sort of generally thinking about the kind of problems that might appear and generally categorizing and talking about them. So most of the effort that will eventually happen will be in the first category. Overwhelmingly, most of the effort, and appropriately so.\nI mean, that’s true today for cars or nuclear weapons or whatever it is. Most of the effort is going to be dealing with the actual concrete problems right in front of you. That effort, it’s really hard to do much before you actually have concrete systems that you’re worried about, and the concrete things that can actually go wrong with them. That seems completely appropriate to me.\nI would say that sort of effort is mostly, well, you see stuff and it goes wrong, deal with it. Ahead of seeing problems, you shouldn’t be doing that. You could today be dealing with computer security, you can be dealing with hackers and automated tools to deal with them, you could be dealing with deep fakes. I mean, it’s fine time now to deal with actual, concrete problems that are in front of people today.\nBut thinking about problems that could occur in the future, that you haven’t really seen the systems that would produce them or even the scenarios that would play out, that’s much more the other category of effort, is just thinking abstractly about the kinds of things that might go wrong, and maybe the kinds of architectures and kinds of approaches, et cetera. That, again, is something that you don’t really need that many people to do. If you have 100 people doing it, probably enough.\nEven 10 people might be enough. It’s more about how many people, again, this mind space in altruistic futurism, you don’t need very much of that mind space to do it at all, really. Then that’s more the thing I complain that there’s too much of. Again, it comes down to how unusual will the scenarios be that are where the problem starts. Today, cars can have car crashes, but each crash is a pretty small crash, and happens relatively locally, and doesn’t kill that many people. You can wait until you see actual car crashes to think about how to deal with car crashes.\nThen the key question is, “How far do the scenarios we worry about deviate from that?” I mean, most problems in our world today are like that. Most things that go wrong in systems, we have our things that go wrong on a small scale pretty frequently, and therefore you can look at actual pieces of things that have gone wrong to inform your efforts. There are some times where we exceptionally anticipate problems that we never see. Then anticipate even institutional problems that we never see or even worry that by the time the problem gets here, it’ll be too late.\nThose are really unusual scenarios in problems. The big question about AI risk is what fraction of the problems that we will face about AI will be of that form. And then, to what extent can we anticipate those now? Because in the year 1,000, it would’ve been still pretty hard to figure out the unusual scenarios that might bedevil military hardware purchasing or something. 
Today we might say, “Okay, there’s some kind of military weapons we can build that yes, we can build them, but it might be better once we realize they can be built and then have a treaty with the other guys to have neither of us build them.”\nSometimes that’s good for weapons. Okay. That was not very common 1,000 years ago. That’s a newer thing today, but 1,000 years ago, could people have anticipated that, and then what usefully could they have done other than say, “Yeah, sometimes it might be worse having a treaty about not building a weapon if you figure out it’d be worse for you if you have both.” I’m mostly skeptical that there are sort of these big things that you have to coordinate ahead of time, that you have to anticipate, that if you wait it’s too late, that you won’t see actual concrete signatures of the problems before you have to invent them.\nEven today, large systems, you often tend to have to walk through a failure analysis. You build a large nuclear plant or something, and then you go through and try to ask everything that could go wrong, or every pair of things that could go wrong, and ask, “What scenarios would those produce?,” and try to find the most problematic scenarios. Then ask, “How can we change the design of it to fix those?”\nThat’s the kind of exercise we do today where we imagine problems that most of which never occur. But for that, you need a pretty concrete design to work with. You can’t do that very abstractly with the abstract idea. For that you need a particular plan in front of you, and now you can walk through concrete failure modes of all the combinations of this strut will break, or this pipe will burst, and all those you walk through. It’s definitely true that we often analyze problems that never appear, but it’s almost never in the context of really abstract sparse descriptions of systems.\nAsya Bergal:        Got you. Yeah. We’ve been asking people a standard question which I think I can maybe guess your answer to. But the question is: what’s your credence that in a world where we didn’t have these additional EA-inspired safety efforts, what’s your credence that in that world AI poses a significant risk of harm? I guess this question doesn’t really get at how much efforts now are useful, it’s just a question about general danger.\nRobin Hanson:       There’s the crying wolf effect, and I’m particularly worried about it. For example, space colonization is a thing that could happen eventually. And for the last 50 years, there have been enthusiasts who have been saying, “It’s now. It’s now. Now is the time for space colonization.” They’ve been consistently wrong. For the next 50 years, they’ll probably continue to be consistently wrong, but everybody knows there’s these people out there who say, “Space colonization. That’s it. That’s it.”\nWhenever they hear somebody say, “Hey, it’s time for space colonization,” they go, “Aren’t you one of those fan people who always says that?” The field of AI risk kind of has that same problem where again today, but for the last 70 years or even longer, there have been a subset of people who say, “The robots are coming, and it’s all going to be a mess, and it’s now. 
It’s about to be now, and we better deal with it now.” That creates sort of a skepticism in the wider world that you must be one of those crazies who keep saying that.\nThat can be worse for when there really is, when we really do have the possibility of space colonization, when it is really the right time, we might well wait too long after that, because people just can’t believe it, because they’ve been hearing this for so long. That makes me worried that this isn’t a positive effect. Calling attention to a problem, like a lot of attention to a problem, and then having people experience it as not a problem, when it looks like you didn’t realize that.\nNow, if you just say, “Hey, this nuclear power plant type could break. I’m not saying it will, but it could, and you ought to fix that,” that’s different than saying, “This pipe will break, and that’ll happen soon, and better do something.” Because then you lose credibility when the pipe doesn’t usually break.\nRobert Long:        Just as a follow-up, I suppose the official line for most people working on AI safety is, as it ought to be, there’s some small chance that this could matter a lot, and so we better work on it. Do you have thoughts on ways of communicating that that’s what you actually think so that you don’t have this crying wolf effect?\nRobin Hanson:       Well, if there are only the 100 experts, and not the 100,000 fans, this would be much easier. That does happen in other areas. There are areas in the world where there are only 100 experts and there aren’t 100,000 fans screaming about it. Then the experts can be reasonable and people can say, “Okay,” and take their word seriously, although they might not feel too much pressure to listen and do anything. If you can say that about computer security today, for example, the public doesn’t scream a bunch about computer security.\nThe experts say, “Hey, this stuff. You’ve got real computer security problems.” They say it cautiously and with the right degree of caveats that they’re roughly right. Computer security experts are roughly right about those computer security concerns that they warn you about. Most firms say, “Yeah, but I’ve got these business concerns immediately, so I’m just going to ignore you.” So we continue to have computer security problems. But at least from a computer security expert’s point of view, they aren’t suffering from the perception of hyperbole or actual hyperbole.\nBut that’s because there aren’t 100,000 fans of computer security out there yelling with them. But AI risk isn’t like that. AI risk, I mean, it’s got the advantage of all these people pushing and talking which has helped produce money and attention and effort, but it also means you can’t control the message.\nRobert Long:        Are you worried that this reputation effect or this impression of hyperbole could bleed over and harm other EA causes or EA’s reputation in general, and if so are there ways of mitigating that effect?\nRobin Hanson:       Well again, the more popular anything is, the harder it is for any center to mitigate whatever effects there are of popular periphery doing whatever they say and do. 
For example, I think there are really quite reasonable conservatives in the world who are at the moment quite tainted with the alt-right label, and there is an eager population of people who are eager to taint them with that, and they’re kind of stuck.\nAll they can do is use different vocabularies, have a different style and tone when they talk to each other, but they are still at risk for that tainting. A lot depends on the degree to which AI risk is seen as central to EA. The more it’s perceived as a core part of EA, then later on when it’s perceived as having been overblown and exaggerated, then that will taint EA. Not much way around that. I’m not sure that matters that much for EA though.\nI mean I don’t see EA as driven by popularity or popular attention. It seems it’s more a group of people who– it’s driven by the internal dynamics of the group and what they think about each other and whether they’re willing to be part of it. Obviously in the last century or so, we just had these cycles of hype about AI, so that’s … I expect that’s how this AI cycle will be framed– in the context of all the other concern about AI. I doubt most people care enough about EA for that to be part of the story.\nI mean, EA has just a little, low presence in people’s minds in general, that unless it got a lot bigger, it just would not be a very attractive element to put in the story to blame those people. They’re nobody. They don’t exist to most people. The computer people exaggerate. That’s a story that sticks better. That has stuck in the past.\nAsya Bergal:        Yeah. This is zooming out again, but I’m curious: kind of around AI optimism, but also just in general around any of the things you’ve talked about in this interview, what sort of evidence you think that either we could get now, or might plausibly see in the future would change your views one way or the other?\nRobin Hanson:       Well, I would like to see much more precise and elaborated data on the lumpiness of algorithm innovations and AI progress. And of course data on whether things are changing different[ly] now. For example, forgetting his name, somebody did a blog post a few years ago right after AlphaGo, saying this Go achievement seemed off trend if you think about it by time, but not if you thought about it by computing resources devoted to the problem. If you looked at past level of Go ability relative to computer resources, then it was on trend, it wasn’t an exception.\nAny case, that’s relevant to the lumpiness issue, right? So the more that we could do a good job of calibrating how unusual things are, the more that we could be able talk about whether we are seeing unusual stuff now. That’s kind of often the way this conversation goes is, “Is this time different? Are we seeing unusual stuff now?” In order to do that, you want us to be able to calibrate these progresses as clearly as possible.\nObviously certainly if you could make some metric for each AI progress being such that you could talk about how important it was by some relative weighting in different fields, and relevant weighting of different kinds of advances, and different kinds of metrics for advances, then you can have some statistics of tracking over time the size of improvements and whether that was changing.\nI mean, I’ll also make a pitch for the data thing that I’ve just been doing for the last few years, which is the data on automation per job in the US, and the determinants of that and how that’s changed over time, and its impact over time. 
Basically there’s a dataset called O*NET and they’re broken into 800 jobs categories and jobs in the US, and for each job in the last 20 years, at some random times, some actual people went and rated each job on a one to five scale of how automated it was.\nNow we have those ratings. We are able to say what predicts which jobs are how automated, and has that changed over time? Then the answer is, we can predict pretty well, just like 25 variables lets us predict half the variance in which jobs are automated, and they’re pretty mundane things, they’re not high-tech, sexy things. It hasn’t changed much in 20 years. In addition, we can ask when jobs get more or less automated, how does that impact the number of employees and their wages. We find almost no impact on those things. \nA data series like that, if you kept tracking it over time, if there were a deviation from trend, you might be able to see it, you might see that the determinants of automation were changing, that the impacts were changing. This is of course just tracking actual AI impacts, not sort of extreme tail possibilities of AI impacts, right?\nOf course, this doesn’t break it down into AI versus other sources of automation. Most automation has nothing to do with AI research. It’s making a machine that whizzes and does something that a person was doing before. But if you could then find a way to break that down by AI versus not, then you could more focus on, “Is AI having much impact on actual business practice?,” and seeing that.\nOf course, that’s not really supporting the early effort scenario. That would be in support of, “Is it time now to actually prepare people for major labor market impacts, or major investment market impacts, or major governance issues that are actually coming up because this is happening now?” But you’ve been asking about, “Well, what about doing stuff early?” Then the question is, “Well, what signs would you have that it’s soon enough?”\nHonestly, again, I think we know enough about how far away we are from where we need to be, and we know we’re not close, and we know that progress is not that lumpy. So we can see, we have a ways to go. It’s just not soon. We’re not close. It’s not time to be doing things you would do when you are close or soon. But the more that you could have these expert judgments of, “for any one problem, how close are we?,” and it could just be a list of problematic aspects of problems and which of them we can handle so far and which we can’t.\nThen you might be able to, again, set up a system that when you are close, you could trigger people and say, “Okay, now it’s time to do field building,” or public motivation, or whatever it is. It’s not time to do it now. Maybe it’s time to set up a tracking system so that you’ll find out when it’s time.\nRobert Long:        On that cluster of issues surrounding human uniqueness, other general laws of intelligence, is there evidence that could change your mind on that? I don’t know. Maybe it could come from psychology, or maybe it could come from anthropology, new theories of human uniqueness, something like that?\nRobin Hanson:       The most obvious thing is to show me actual big lumpy, lumpy innovations that made a big difference to the performance of the system. That would be the thing. Like I said, for many years I was an AI researcher, and I noticed that researchers often created systems, and systems have architectures. 
So their paper would have a box diagram for an architecture, and explain that their system had an architecture and that they were building on that architecture.\nBut it seemed to me that in fact, the architectures didn’t make as much difference as they were pretending. In the performance of the system, most systems that were good, were good because they just did a lot of work to make that whole architecture work. But you could imagine doing counterfactual studies where you vary the effort you go into filling the concept of a system and you vary the architecture. You quantitatively find out how much does architecture matter.\nThere could be even already existing data out there in some form or other that somebody has done the right sort of studies. So it’s obvious that architecture makes some difference. Is it a factor of two? Is it 10%? Is it a factor of 100? Or is it 1%? I mean, that’s really what we’re arguing about. If it’s a factor of 10% then you say, “Okay, it matters. You should do it. You should pay attention to that 10%. It’s well worth putting the effort into getting that 10%.”\nBut it doesn’t make that much of a difference in when this happens and how big it happens. Right? Or if architecture is a factor of 10 or 100, now you can have a scenario where somebody finds a better architecture and suddenly they’re a factor of 100 better than other people. Now that’s a huge thing. That would be a way to ask a question, “How much of an advance can a new system get relative to other systems?,” would be to say, “how much of a difference does a better architecture matter?”\nAnd that’s a thing you can actually study directly by having people make systems with different architectures, put different spots of reference into it, et cetera, and see what difference it makes.\nRobert Long:        Right. And I suspect that some people think that homo sapiens are such a data point, and that it sounds like you disagree with how they’ve construed that. Do you think there’s empirical evidence waiting to change your mind, or do you think people are just sort of misconstruing it, or are ignorant, or just not thinking correctly about what we should make of the fact of our species dominating the planet?\nRobin Hanson:       Well, there’s certainly a lot of things we don’t know as well about primate abilities, so again, I’m reflecting what I’ve read about cultural evolution and the difference between humans and primates. But you could do more of that, and maybe the preliminary indications that I’m hearing about are wrong. Maybe you’ll find out that no, there is this really big architectural difference in the brain that they didn’t notice, or that there’s some more fundamental capability introduction.\nFor example, abstraction is something we humans do, and we don’t see animals doing much of it, but this construal-level theory thing I described and standard brain architecture says actually all brains have been organized by abstraction for a long time. That is, we see a dimension of the brain which is the abstract to the concrete, and we see how it’s organized that way. But we humans seem to be able to talk about abstractions in ways that other animals don’t.\nSo a key question is, “Do we have some extra architectural thing that lets us do more with abstraction?” Because again, most brains are organized by abstraction and concrete. That’s just one of the main dimensions of brains. The forebrain versus antebrain is concrete versus abstraction. 
Then the more we just knew about brain architecture and why it was there, the more we can concretely say whether there was a brain architectural innovation from primates to humans.\nBut everything I’ve heard says it seems to be mostly a matter of relevant emphasis of different parts, rather than some fundamental restructuring. But even small parts can be potent. So one way actually to think about it is that most ordinary programs spend most of the time in just a few lines of code. Then so if you have 100,000 lines of code, it could still only be 100 lines, there’s 100 lines of code where 90% of the time is being spent. That doesn’t mean those 100,000 lines don’t matter. When you think about implementing code on the brain, you realize because the brain is parallel, whatever 90% of the code has been, that’s going to be 90% of the volume of the brain.\nThose other 100,000 lines of code will take up relatively little space, but they’re still really important. A key issue at the brain is you might find out that you understand 90% of the volume as a simple structure following a simple algorithm and you can still hardly understand anything about this total algorithm, because it’s all the other parts that you don’t understand where stuff isn’t executing very often, but it still needs to be there to make the whole thing work. That’s a very problematic thing about understanding brain organization at all.\nYou’re tempted to go by volume and try to understand because volume is visible first, and whatever volume you can opportunistically understand, but you could still be a long way off from understanding. Just like if you had any big piece of code and you understood 100 lines of it, out of 100,000 lines, you might not understand very much at all. Of course, if that was the 100 lines that was being executed most often, you’d understand what it was doing most of the time. You’d definitely have a handle on that, but how much of the system would you really understand?\nAsya Bergal:        We’ve been interviewing a bunch of people. Are there other people who you think have well-articulated views that you think it would be valuable for us to talk to or interview?\nRobin Hanson:       My experience is that I’ve just written on this periodically over the years, but I get very little engagement. Seems to me there’s just a lack of a conversation here. Early on, Eliezer Yudkowsky and I were debating, and then as soon as he and other people just got funding and recognition from other people to pursue, then they just stopped engaging critics and went off on pursuing their stuff.\nWhich makes some sense, but these criticisms have just been sitting and waiting. Of course, what happens periodically is they are most eager to engage the highest status people who criticize them. So periodically over the years, some high-status person will make a quip, not very thought out, at some conference panel or whatever, and they’ll be all over responding to that, and sending this guy messages and recruiting people to talk to him saying, “Hey, you don’t understand. There’s all these complications.”\nWhich is different from engaging the people who are the longest, most thoughtful critics. There’s not so much of that going on. You are perhaps serving as an intermediary here. But ideally, what you do would lead to an actual conversation. And maybe you should apply for funding to have an actual event where people come together and talk to each other. 
Your thing could be a preliminary to get them to explain how they’ve been misunderstood, or why your summary missed something; that’s fine. If it could just be the thing that started that actual conversation it could be well worth the trouble.\nAsya Bergal:        I guess related to that, is there anything you wish we had asked you, or any other things sort of you would like to be included in this interview?\nRobin Hanson:       I mean, you sure are relying on me to know what the main arguments are that I’m responding to, hence you’re sort of shy about saying, “And here are the main arguments, what’s your response?” Because you’re shy about putting words in people’s mouths, but it makes it harder to have this conversation. If you were taking a stance and saying, “Here’s my positive argument,” then I could engage you more.\nI would give you a counterargument, you might counter-counter. If you’re just trying to roughly summarize a broad range of views then I’m limited in how far I can go in responding here.\nAsya Bergal:        Right. Yeah. I mean, I don’t think we were thinking about this as sort of a proxy for a conversation.\nRobin Hanson:       But it is.\nAsya Bergal:        But it is. But it is, right? Yeah. I could maybe try to summarize some of the main arguments. I don’t know if that seems like something that’s interesting to you? Again, I’m at risk of really strawmanning some stuff.\nRobin Hanson:       Well, this is intrinsic to your project. You are talking to people and then attempting to summarize them.\nAsya Bergal:        That’s right, that’s right.\nRobin Hanson:       If you thought it was actually feasible to summarize people, then what you would do is produce tentative summaries, and then ask for feedback and go back and forth in rounds of honing and improving the summaries. But if you don’t do that, it’s probably because you think even the first round of summaries will not be to their satisfaction and you won’t be able to improve it much.\nWhich then says you can’t actually summarize that well. But what you can do is attempt to summarize and then use that as an orienting thing to get a lot of people to talk and then just hand people the transcripts and they can get what they can get out of it. This is the nature of summarizing conversation; this is the nature of human conversation.\nAsya Bergal:        Right. Right. Right. Of course. Yeah. So I’ll go out on a limb. We’ve been talking largely to people who I think are still more pessimistic than you, but not as pessimistic as say, MIRI. I think the main difference between you and the people we’ve been talking to is… I guess two different things.\nThere’s a sort of general issue which is, how much time do we have between now and when AI is coming, and related to that, which I think we also largely discussed, is how useful is it to do work now? So yeah, there’s sort of this field building argument, and then there are arguments that if we think something is 20 years away, maybe we can make more robust claims about what the geopolitical situation is going to look like.\nOr we can pay more attention to the particular organizations that might be making progress on this, and how things are going to be. There’s a lot of work around assuming that maybe AGI’s actually going to look somewhat like current techniques. It’s going to look like deep reinforcement and ML techniques, plus maybe a few new capabilities. 
Maybe from that perspective we can actually put effort into work like interpretability, like adversarial training, et cetera.\nMaybe we can actually do useful work to progress that. A concrete version of this, Paul Christiano has this approach that I think MIRI is very skeptical of, addressing prosaic –AI that looks very similar to the way AI looks now. I don’t know if you’re familiar with iterated distillation and amplification, but it’s sort of treating this AI system as a black box, which is a lot of what it looks like if they’re in a world that’s close to the one now, because neural nets are sort of black box-y.\nTreating it as a black box, there’s some chance that this approach where we basically take a combination of smart AIs and use that to sort of verify the safety of a slightly smarter AI, and sort of do that process, bootstrapping. And maybe we have some hope of doing that, even if we don’t have access to the internals of the AI itself. Does that make sense? The idea is sort of to have an approach that works even with black box sort of AIs that might look similar to the neural nets we have now.\nRobin Hanson:       Right. I would just say the whole issue is how plausible is it that within 20 years we’ll have human level, broad human-level AI on the basis of these techniques that we see now? Obviously the higher probability you think that is, then the more you think it’s worth doing that. I don’t have any objection at all with conditional on that assumption, his strategies. It would just be, how likely is that? And not only–it’s okay for him to work on that–it’s just more, how big a fraction of mind space does that take up among the wider space of people worried about AI risk?\nAsya Bergal:        Yeah. Many of the people that we’ve talked to have actually agreed that it’s taking up too much mind space, or they’ve made arguments of the form, “Well, I am a very technical person, who has a lot of compelling thoughts about AI safety, and for me personally I think it makes sense to work on this. Not as sure that as many resources should be devoted to it.” I think at least a reasonable fraction of people would agree with that. [Note: It’s wrong that many of the people we interviewed said this. This comment was on the basis of non-public conversations that I’ve had.]\nRobin Hanson:       Well, then maybe an interesting follow-up conversation topic would be to say, “what concretely could change the percentage of mind space?” That’s different than … The other policy question is like, “How many research slots should be funded?” You’re asking what are the concrete policy actions that could be relevant to what you’re talking about. The most obvious one I would think is people are thinking in terms of how many research slots should be funded of what sort, when.\nBut with respect to the mind space, that’s not the relevant policy question. The policy question might be some sense of how many scenarios should these people be thinking in terms of. Or what other scenarios should get more attention.\nAsya Bergal:        Yeah, I guess I’m curious on your take on that. If you could just control the mind space in some way, or sort of set what people were thinking about or what directions, what do you think it would look like?\nRobert Long:        Very quickly, I think one concrete operationalization of “mind space resource” is what 80,000 Hours tells people to do, with young, talented people say.\nRobin Hanson:       That’s even more plausible. I mean, I would just say, study the future. 
Study many scenarios in the future other than this scenario. Go actually generate scenarios, explore them, tell us what you found. What are the things that could go wrong there? What are the opportunities? What are the uncertainties? Just explore a bunch of future scenarios and report. That’s just a thing that needs to happen.\nOther than AI risk. I mean, AI risk is focused on one relatively narrow set of scenarios, and there’s a lot of other scenarios to explore, so that would be a sense of mind space and career work is just say, “There’s 10 or 100 people working in this other area, I’m not going to be that …”\nThen you might just say, concretely, the world needs more futurists. If under these … the future is a very important place, but we’re not sure how much leverage we have about it. We just need more scenarios explored, including for each scenario asking what leverage there might be. \nThen I might say we’ve had a half-dozen books in the last few years about AI risks. How about a book that has a whole bunch of other scenarios, one of which is AI risk which takes one chapter out of 20, and 19 other chapters on other scenarios? And then if people talked about that and said it was a cool book and recommended it, and had keynote speakers about that sort of thing, then it would shift the mind space. People would say, “Yeah. AI risk is definitely one thing, people should be looking at it, but here’s a whole bunch of other scenarios.”\nAsya Bergal:        Right. I guess I could also try a little bit to zero in…  I think a lot of the differences in terms of people’s estimates for numbers of years are modeling differences. I think you have this more outside view model of what’s going on, looking at lumpiness.\nI think one other common modeling choice is to say something like, “We think progress in this field is powered by compute; here’s some extrapolation that we’ve made about how compute is going to grow,” and maybe our estimates of how much compute is needed to do some set of powerful things. I feel like with those estimates, then you might think things are going to happen sooner? I don’t know how familiar you are with that space of arguments or what your take is like.\nRobin Hanson:       I have read most all of the AI Impacts blog posts over the years, just to be clear.\nAsya Bergal:        Great. Great.\nRobin Hanson:       You have a set of posts on that. So the most obvious data point is maybe we’re near the human equivalent compute level now, but not quite there. We passed the mice level a while ago, right? Well, we don’t have machines remotely capable of doing what mice do. So it’s clear that merely having the computing-power equivalent is not enough. We have machines that went past the cockroach far long ago. We certainly don’t have machines that can do all the things cockroaches can do.\nIt’s just really obvious I think, looking at examples like that, that computing power is not enough. We might hit a point where we have so much computing power that you can do some sort of fast search. I mean, that’s sort of the difference between machine learning and AI as ways to think about this stuff. When you thought about AI you just thought about, “Well, you have to do a lot of work to make the system,” and it was computing. 
And then it was kind of obvious, well, duh, well you need software, hardware’s not enough.\nWhen you say machine learning people tend to have more hope– Well, we just need some general machine learning algorithm and then you turn that on and then you find the right system and then the right system is much cheaper to execute computationally. The threshold you need is a lot more computing power than the human brain has to execute the search, but it won’t be that long necessarily before we have a lot more.\nThen now it’s an issue of how simple is this thing you’re searching for and how close are current machine learning systems to what you need? The more you think that a machine learning system like we have now could basically do everything, if only it were big enough and had enough data and computing power, it’s a different perspective than if you think we’re not even close to having the right machine learning techniques. There’s just a bunch of machine learning problems that we know we’ve solved that these systems just don’t solve.\nAsya Bergal:        Right.\nRobert Long:        So on that question, I can’t pull up the exact quote quickly enough, but I may insert it in the transcript, with permission. Paul Christiano has said more or less, in an 80,000 Hours interview, that he’s very unsure, but he suspects that we might be at insect-level capabilities if we devoted, if we wanted to, if people took it upon themselves to take the compute we have and the resources that we have, we could do what insects do.1\nHe’s interested in maybe concretely testing this hypothesis that you just mentioned, humans and cockroaches. But it sounds like you’re just very skeptical of it. It sounds like you’re already quite confident that we are not at insect level. Can you just say a little bit more about why you think that?\nRobin Hanson:       Well, there’s doing something a lot like what insects do, and then there’s doing exactly what insects do. And those are really quite different tasks, and the difference is in part how forgiving you are about a bunch of details. I mean, there’s some who may say an image recognition or something, or even Go… Cockroaches are actually managing a particular cockroach body in a particular environment. They’re pretty damn good at that.\nIf you wanted to make an artificial cockroach that was as good as cockroaches at the thing that the cockroach does, I think we’re a long way off from that. But you might think most of those little details aren’t that important. They’re just a lot of work and that maybe you could make a system that did what you think of as the essential core problems similarly.\nNow we’re back to this key issue of the division between a few essential core problems and a lot of small messy problems. I basically think the game is in doing them all. Do it until you do them all. When doing them all, include a lot of the small messy things. So that’s the idea that your brain is 100,000 lines of code, and 90% of the brain volume is 100 of those lines, and then there’s all these little small, swirly structures in your brain that manage the small little swirly tasks that don’t happen very often, but when they do, that part needs to be there.\nWhat percentage of your brain volume would be enough to replicate before you thought you were essentially doing what a human does? I mean, that is sort of an essential issue. 
If you thought there were just 100 key algorithms and once you got 100 of them then you were done, that’s different than thinking, “Sure, there’s 100 main central algorithms, plus there’s another 100,000 lines of code that just is there to deal with very, very specific things that happen sometimes.”\nAnd that evolution has spent a long time searching in the space of writing that code and found these things and there’s no easy learning algorithm that will find it that isn’t in the environment that you were in. This is a key question about the nature of intelligence, really.\nRobert Long:        Right. I’m now hijacking this interview to be about this insect project that AI Impacts is also doing, so apologies for that. We were thinking maybe you can isolate some key cognitive tasks that bees can do, and then in simulation have something roughly analogous to that. But it sounds like you’re not quite satisfied with this as a test of the hypothesis, where you can do all the little bee things and control bee body and wiggle around just like bees do and so forth?\nRobin Hanson: I mean, if you could attach it to an artificial bee body and put it in a hive and see what happens, then I’m much more satisfied. If you say it does the bee dance, it does the bee smell, it does the bee touch, I’ll go, “That’s cute, but it’s not doing the bee.”\nRobert Long:        Then again, it just sounds like how satisfied you are with these abstractions, depends on your views of intelligence and how much can be abstracted away–\nRobin Hanson:       It depends on your view of the nature of the actual problems that most animals and humans face. They’re a mixture of some structures with relative uniformity across a wide range; that’s when abstraction is useful. Plus, a whole bunch of messy details that you just have to get right.\nIn some sense I’d be more impressed if you could just make an artificial insect that in a complex environment can just be an insect, and manage the insect colonies, right? I’m happy to give you a simulated house and some simulated dog food, and simulated predators, who are going to eat the insects, and I’m happy to let you do it all in simulation. But you’ve got to show me a complicated world, with all the main actual obstacles that insects have to surviving and existing, including parasites and all sorts of things, right?\nAnd just show me that you can have something that robustly works in an environment like that. I’m much more impressed by that than I would be by you showing an actual physical device that does a bee dance.\nAsya Bergal:        Yeah. I mean, to be clear, I think the project is more about actually finding a counterexample. If we could find a simple case where we can’t even do this with neural networks then it’s fairly … there’s a persuasive case there.\nRobin Hanson:       But then of course people might a month later say, “Oh, yeah?” And then they work on it and they come up with a way to do that, and there will never be an end to that game. The moment you put up this challenge and they haven’t done it yet–\nAsya Bergal:        Yeah. I mean, that’s certainly a possibility.\nRobert Long:        Cool. I guess I’m done for now hijacking this interview to be about bees, but that’s just been something I’ve been thinking about lately.\nAsya Bergal:        I would love to sort of engage with you on your disagreements, but I think a lot of them are sort of like … I think a lot of it is in this question of how close are we? 
And I think I only know in the vaguest terms people’s models for this.\nI feel like I’m not sure how good in an interview I could be at trying to figure out which of those models is more compelling. Though I do think it’s sort of an interesting project because it seems like lots of people just have vastly different sorts of timelines models, which they use to produce some kind of number.\nRobin Hanson:       Sure. I suppose you might want to ask people you ask after me sort of the relative status of inside and outside arguments. And who sort of has the burden of proof with respect to which audiences.\nAsya Bergal:        Right. Right. I think that’s a great question.\nRobin Hanson:       If we’ve agreed that the outside view doesn’t support short time scales of things happening, and we say, “But yes, some experts think they see something different in their expert views of things with an inside view,” then we can say, “Well, how often does that happen?” We can make the outside view of that. We can say, “Well, how often do inside experts think they see radical potential that they are then inviting other people to fund and support, and how often are they right?”\nAsya Bergal:        Right. I mean, I don’t think it’s just inside/outside view. I think there are just some outside view arguments that make different modeling choices that come to different conclusions.\nRobin Hanson:       I’d be most willing to engage those. I think a lot of people are sort of making an inside/outside argument where they’re saying, “Sure, from the outside this doesn’t look good, but here’s how I see it from the inside.” That’s what I’ve heard from a lot of people.\nAsya Bergal:        Yeah. Honestly my impression is that I think not a lot of people have spent … a lot of people when they give us numbers are like, “this is really a total guess.” So I think a lot of the argument is either from people who have very specific compute-based models for things that are short [timelines], and then there’s also people who I think haven’t spent that much time creating precise models, but sort of have models that are compelling enough. They’re like, “Oh, maybe I should work on this slash the chance of this is scary enough.” I haven’t seen a lot of very concrete models. Partially I think that’s because there’s an opinion in the community that if you have concrete models, especially if they argue for things being very soon, maybe you shouldn’t publish those.\nRobin Hanson: Right, but you could still ask the question, “Set aside everything you know except what this looks like from the outside. Looking at that, would you still predict stuff happening soon?”\nAsya Bergal: Yeah, I think that’s a good question to ask. We can’t really go back and add that to what we’ve asked people, but yeah.\nRobin Hanson: I think more people, even most, would say, “Yeah, from the outside, this doesn’t look so compelling.” That’s my judgement, but again, they might say, “Well, the usual way of looking at it from the outside doesn’t, but then, here’s this other way of looking at it from the outside that other people don’t use.” That would be a compromise sort of view. And again, I guess there’s this larger meta-question really of who should reasonably be moved by these things? 
That is, if there are people out there who specialize in chemistry or business ethics or something else, and they hear these people in AI risk saying there’s these big issues, you know, can the evidence that’s being offered by these insiders– is it the sort of thing that they think should be compelling to these outsiders?\nAsya Bergal: Yeah, I think I have a question about that too. Especially, I think–we’ve been interviewing largely AI safety researchers, but I think the arguments around why they think AI might be soon or far, look much more like economic arguments. They don’t necessarily look like arguments from an inside, very technical perspective on the subject. So it’s very plausible to me that there’s no particular reason to weigh the opinions of people working on this, other than that they’ve thought about it a little bit more than other people have. [Note: I say ‘soon or far’ here, but I mean to say ‘more or less likely to be harmful’.]\nRobin Hanson: Well, as a professional economist, I would say, if you have good economic arguments, shouldn’t you bring them to the attention of economists and have us critique them? Wouldn’t that be the way this should go? I mean, not all economics arguments should start with economists, but wouldn’t it make sense to have them be part of the critique evaluation cycle?\nAsya Bergal: Yeah, I think the real answer is that these all exist vaguely in people’s heads, and they don’t even make claims to having super-articulated and written-down models.\nRobin Hanson: Well, even that is an interesting thing if people agree on it. You could say, “You know a lot of people who agree with you that AI risk is big and that we should deal with something soon. Do you know anybody who agrees with you for the same reasons?”\nIt’s interesting, so I did a poll, I’ve done some Twitter polls lately, and I did one on “Why democracy?” And I gave four different reasons why democracy is good. And I noticed that there was very little agreement, that is, relatively equal spread across these four reasons. And so, I mean that’s an interesting fact to know about any claim that many people agree on, whether they agree on it for the same reasons. And it would be interesting if you just asked people, “Whatever your reason is, what percentage of people interested in AI risk agree with your claim about it for the reason that you do?” Or, “Do you think your reason is unusual?”\nBecause if most everybody thinks their reason is unusual, then basically there isn’t something they can all share with the world to convince the world of it. There’s just the shared belief in this conclusion, based on very different reasons. And then it’s more on their authority of who they are and why they as a collective are people who should be listened to or something.\nAsya Bergal: Yeah, I agree that that is an interesting question. I don’t know if I have other stuff, Rob, do you?\nRobert Long: I don’t think I do at this time.\nRobin Hanson: Well I perhaps, compared to other people, am happy to do a second round should you have questions you generate.\nAsya Bergal: Yeah, I think it’s very possible, thanks so much. Thanks so much for talking to us in general.\nRobin Hanson: You’re welcome. It’s a fun topic, especially talking with reasonable people.\nRobert Long: Oh thank you, I’m glad we were reasonable.\nAsya Bergal: Yeah, I’m flattered.\nRobin Hanson: You might think that’s a low bar, but it’s not.\nRobert Long: Great, we’re going to include that in the transcript. Thank you for talking to us. 
Have a good rest of your afternoon.\nRobin Hanson: Take care, nice talking to you.\n", "url": "https://aiimpacts.org/conversation-with-robin-hanson/", "title": "Conversation with Robin Hanson", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-11-13T21:40:05+00:00", "paged_url": "https://aiimpacts.org/feed?paged=10", "authors": ["Asya Bergal"], "id": "e4bf7a407ed871670ae3342a8bef79f4", "summary": []} {"text": "Etzioni 2016 survey\n\nOren Etzioni surveyed 193 AAAI fellows in 2016 and found that 67% of them expected that ‘we will achieve Superintelligence’ someday, but in more than 25 years.\nDetails\nOren Etzioni, CEO of the Allen Institute for AI, reported on a survey in an MIT Tech Review article published on 20 Sep 2016. The rest of this article summarizes information from that source, except where noted.\nIn March 2016, on behalf of Etzioni, the Association for the Advancement of Artificial Intelligence (AAAI) sent out an anonymous survey to 193 of their Fellows (“individuals who have made significant, sustained contributions — usually over at least a ten-year period — to the field of artificial intelligence.”).\nThe survey contained one question:\n“In his book, Nick Bostrom has defined Superintelligence as ‘an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.’ When do you think we will achieve Superintelligence?”\nIt seems that responses were entered by selecting one of four categories, although it is possible that they were entered as real numbers and then grouped.\nThere were 80 responses, for a response rate of 41%. They were:\n“In the next 10 years”: 0%\n“In the next 10-25 years”: 7.5%\n“In more than 25 years”: 67.5%\n“Never.”: 25%\nFigure 1: graph of responses from Etzioni’s article.", "url": "https://aiimpacts.org/etzioni-2016-survey/", "title": "Etzioni 2016 survey", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-11-06T18:41:19+00:00", "paged_url": "https://aiimpacts.org/feed?paged=10", "authors": ["Katja Grace"], "id": "cce4bed058f9ab010b5a52b47c556a0d", "summary": []} {"text": "Rohin Shah on reasons for AI optimism\n\nBy Asya Bergal, 31 October 2019\nRohin Shah\nI, along with several AI Impacts researchers, recently talked to Rohin Shah about why he is relatively optimistic about AI systems being developed safely. Rohin Shah is a 5th year PhD student at the Center for Human-Compatible AI (CHAI) at Berkeley, and a prominent member of the Effective Altruism community.\nRohin reported an unusually large (90%) chance that AI systems will be safe without additional intervention. His optimism was largely based on his belief that AI development will be relatively gradual and AI researchers will correct safety issues that come up.\nHe reported two other beliefs that I found unusual: He thinks that as AI systems get more powerful, they will actually become more interpretable because they will use features that humans also tend to use.
He also said that intuitions from AI/ML make him skeptical of claims that evolution baked a lot into the human brain, and he thinks there’s a ~50% chance that we will get AGI within two decades via a broad training process that mimics the way human babies learn.\nA full transcript of our conversation, lightly edited for concision and clarity, can be found here.", "url": "https://aiimpacts.org/rohin-shah-on-reasons-for-ai-optimism/", "title": "Rohin Shah on reasons for AI optimism", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-10-31T12:02:46+00:00", "paged_url": "https://aiimpacts.org/feed?paged=10", "authors": ["Asya Bergal"], "id": "971b9afc5d5cf0c9bd268e6259f01647", "summary": []} {"text": "Conversation with Rohin Shah\n\nAI Impacts talked to AI safety researcher Rohin Shah about his views on AI risk. With his permission, we have transcribed this interview.\nParticipants\nRohin Shah — PhD student at the Center for Human-Compatible AI, UC Berkeley\nAsya Bergal – AI Impacts\nRobert Long – AI Impacts\nSara Haxhia — Independent researcher\nSummary\nWe spoke with Rohin Shah on August 6, 2019. Here is a brief summary of that conversation:\nBefore taking into account other researchers’ opinions, Shah guesses an extremely rough ~90% chance that even without any additional intervention from current longtermists, advanced AI systems will not cause human extinction by adversarially optimizing against humans. He gives the following reasons, ordered by how heavily they weigh in his consideration:\nGradual development and take-off of AI systems is likely to allow for correcting the AI system online, and AI researchers will in fact correct safety issues rather than hacking around them and redeploying.\nShah thinks that institutions developing AI are likely to be careful because human extinction would be just as bad for them as for everyone else.\nAs AI systems get more powerful, they will likely become more interpretable and easier to understand because they will use features that humans also tend to use.\nMany arguments for AI risk go through an intuition that AI systems can be decomposed into an objective function and a world model, and Shah thinks this isn’t likely to be a good way to model future AI systems.\nShah believes that conditional on misaligned AI leading to extinction, it almost certainly goes through deception.\nShah very uncertainly guesses that there’s a ~50% chance that we will get AGI within two decades:\nHe gives a ~30% – 40% chance that it will be via essentially current techniques.\nHe gives a ~70% chance that, conditional on the two previous claims, it will be a mesa optimizer.\nShah’s model for how we get to AGI soon has the following features:\nAI will be trained on a huge variety of tasks, addressing the usual difficulty of generalization in ML systems.\nAI will learn the same kinds of useful features that humans have learned.\nThis process of research and training the AI will mimic the ways that evolution produced humans who learn.\nGradient descent is simple and inefficient, so in order to do sophisticated learning, the outer optimization algorithm used in training will have to produce a mesa optimizer.\nShah is skeptical of more ‘nativist’ theories where human babies are born with a lot of inductive biases, rather than learning almost everything from their experiences in the world.\nShah thinks there are several things that could change his beliefs, including:\nIf he learned that evolution actually baked a lot into humans (‘nativism’), he would lengthen the amount of time he thinks there will be
before AGI.\nInformation from historical case studies or analyses of AI researchers could change his mind about how the AI community would by default handle problems that arise.\nHaving a better understanding of the disagreements he has with MIRI:\nShah believes that slow takeoff is much more likely than fast takeoff.\nShah doesn’t believe that any sufficiently powerful AI system will look like an expected utility maximizer.\nShah believes less in crisp formalizations of intelligence than MIRI does.\nShah has more faith in AI researchers fixing problems as they come up.\nShah has less faith than MIRI in our ability to write proofs of the safety of our AI systems.\nThis transcript has been lightly edited for concision and clarity.\nTranscript\nAsya Bergal: We haven’t really planned out how we’re going to talk to people in general, so if any of these questions seem bad or not useful, just give us feedback. I think we’re particularly interested in skepticism arguments, or safe by default style arguments– I wasn’t sure from our conversation whether you partially endorse that, or you just are familiar with the argumentation style and think you could give it well or something like that.\nRohin Shah: I think I partially endorse it.\nAsya Bergal: Okay, great. If you can, it would be useful if you gave us the short version of your take on the AI risk argument and the place where you feel you and people who are more convinced of things disagree. Does that make sense?\nRobert Long: Just to clarify, maybe for my own… What’s ‘convinced of things’? I’m thinking of the target proposition as something like “it’s extremely high value for people to be doing work that aims to make AGI more safe or beneficial”.\nAsya Bergal: Even that statement seems a little imprecise because I think people have differing opinions about what the high value work is. But that seems like approximately the right proposition.\nRohin Shah: Okay. So there are some very obvious ones which are not the ones that I endorse, but things like, do you believe in longtermism? Do you buy into the total view of population ethics? And if your answer is no, and you take a more standard version, you’re going to drastically reduce how much you care about AI safety. But let’s see, the ones that I would endorse-\nRobert Long: Maybe we should work on this set of questions. I think this will only come up with people who are into rationalism. I think we’re primarily focused just on empirical sources of disagreement, whereas these would be ethical.\nRohin Shah: Yup.\nRobert Long: Which again, you’re completely right to mention these things.\nRohin Shah: So, there’s… okay. The first one I had listed is that continual or gradual or slow takeoff, whatever you want to call it, allows you to correct the AI system online. And also it means that AI systems are likely to fail in not extinction-level ways before they fail in extinction-level ways, and presumably we will learn from that and not just hack around it and fix it and redeploy it. I think I feel fairly confident that there are several people who will disagree with exactly the last thing I said, which is that people won’t just hack around it and deploy it– like fix the surface-level problem and then just redeploy it and hope that everything’s fine.\nI am not sure what drives the difference between those intuitions.
I think they would point to neural architecture search and things like that as examples of, “Let’s just throw compute at the problem and let the compute figure out a bunch of heuristics that seem to work.” And I would point at, “Look, we noticed that… or, someone noticed that AI systems are not particularly fair and now there’s just a ton of research into fairness.”\nAnd it’s true that we didn’t stop deploying AI systems because of fairness concerns, but I think that is actually just the correct decision from a societal perspective. The benefits from AI systems are in fact– they do in fact outweigh the cons of them not being fair, and so it doesn’t require you to not deploy the AI system while it’s being fixed.\nAsya Bergal: That makes sense. I feel like another common thing, which is not just “hack around and fix it”, is that people think that it will fail in ways that we don’t recognize and then we’ll redeploy some bigger cooler version of it that will be deceptively aligned (or whatever the problem is). How do you feel about arguments of that form: that we just won’t realize all the ways in which the thing is bad?\nRohin Shah: So I’m thinking: the AI system tries to deceive us, so I guess the argument would be, we don’t realize that the AI system was trying to deceive us and instead we’re like, “Oh, the AI system just failed because it was off distribution or something.”\nIt seems strange that we wouldn’t see an AI system deliberately hide information from us. And then we look at this and we’re like, “Why the hell didn’t this information come up? This seems like a clear problem.” And then do some sort of investigation into this.\nI suppose it’s possible we wouldn’t be able to tell it’s intentionally doing this because it thinks it could get better reward by doing so. But that doesn’t… I mean, I don’t have a particular argument why that couldn’t happen but it doesn’t feel like…\nAsya Bergal: Yeah, to be fair I’m not sure that one is what you should expect… that’s just a thing that I commonly hear.\nRohin Shah: Yes. I also hear that.\nRobert Long: I was surprised at your deception comment… You were talking about, “What about scenarios where nothing seems wrong until you reach a certain level?”\nAsya Bergal: Right. Sorry, that doesn’t have to be deception. I think maybe I mentioned deception because I feel like I often commonly also see it.\nRohin Shah: I guess if I imagine “How did AI lead to extinction?”, I don’t really imagine a scenario that doesn’t involve deception. And then I claim that conditional on that scenario having happened, I am very surprised by the fact that we did not know this deception in any earlier scenario that didn’t lead to extinction. And I don’t really get people’s intuitions for why that would be the case. I haven’t tried to figure that one out though.\nSara Haxhia: So do you have no model of how people’s intuitions differ? You can’t see it going wrong aside from if it was deceptively aligned? Why?\nRohin Shah: Oh, I feel like most people have the intuition that conditional on extinction, it happened by the AI deceiving us. 
[Note: In this interview, Rohin was only considering risks arising because of AI systems that try to optimize for goals that are not our own, not other forms of existential risks from AI.]\nAsya Bergal: I think there’s another class of things which is something not necessarily deceiving us, as in it has a model of our goals and intentionally presents us with deceptive output, and just like… it has some notion of utility function and optimizes for that poorly. It doesn’t necessarily have a model of us, it just optimizes the paperclips or something like that, and we didn’t realize before that it is optimizing. I think when I hear deceptive, I think “it has a model of human behavior that is intentionally trying to do things that subvert our expectations”. And I think there’s also a version where it just has goals unaligned with ours and doesn’t spend any resources in modeling our behavior.\nRohin Shah: I think in that scenario, usually as an instrumental goal, you need to deceive humans, because if you don’t have a model of human behavior– if you don’t model the fact that humans are going to interfere with your plans– humans just turn you off and nothing, there’s no extinction.\nRobert Long: Because we’d notice. You’re thinking in the non-deception cases, as with the deception cases, in this scenario we’d probably notice.\nSara Haxhia: That clarifies my question. Great.\nRohin Shah: As far as I know, this is an accepted thing among people who think about AI x-risk.\nAsya Bergal: The accepted thing is like, “If things go badly, it’s because it’s actually deceiving us on some level”?\nRohin Shah: Yup. There are some other scenarios which could lead to us not being deceived and bad things still happen. These tend to be things like, we build an economy of AI systems and then slowly humans get pushed out of the economy of AI systems and… \nThey’re still modeling us. I just can’t really imagine the scenario in which they’re not modeling us. I guess you could imagine one where we slowly cede power to AI systems that are doing things better than we could. And at no point are they actively trying to deceive us, but at some point they’re just like… they’re running the entire economy and we don’t really have much say in it.\nAnd perhaps this could get to a point where we’re like, “Okay, we have lost control of the future and this is effectively an x-risk, but at no point was there really any deception.”\nAsya Bergal: Right. I’m happy to move on to other stuff.\nRohin Shah: Cool. Let’s see. What’s the next one I have? All right. This one’s a lot sketchier-\nAsya Bergal: So sorry, what is the thing that we’re listing just so-\nRohin Shah: Oh, reasons why AI safety will be fine by default.\nAsya Bergal: Right. Gotcha, great.\nRohin Shah: Okay. These two points were both really one point. So then the next one was… I claimed that as AI systems get more powerful, they will become more interpretable and easier to understand, just because they’re using– they will probably be able to get and learn features that humans also tend to use.\nI don’t think this has really been debated in the community very much and– sorry, I don’t mean that there’s agreement on it. I think it is just not a hypothesis that has been promoted to attention in the community. And it’s not totally clear what the safety implications are. 
It suggests that we could understand AI systems more easily and sort of in combination with the previous point it says, “Oh, we’ll notice things– we’ll be more able to notice things than today where we’re like, ‘Here’s this image classifier. Does it do good things? Who the hell knows? We tried it on a bunch of inputs and it seemed like it was doing the right stuff, but who knows what it’s doing inside.'”\nAsya Bergal: I’m curious why you think it’s likely to use features that humans tend to use. It’s possible the answer is some intuition that’s hard to describe.\nRohin Shah: Intuition that I hope to describe in a year. Partly it’s that in the very toy straw model, there are just a bunch of features in the world that an AI system can pay attention to in order to make good predictions. When you limit the AI system to make predictions on a very small narrow distribution, which is like all AI systems today, there are lots of features that the AI system can use for that task that we humans don’t use because they’re just not very good for the rest of the distribution.\nAsya Bergal: I see. It seems like implicitly in this argument is that when humans are running their own classifiers, they have some like natural optimal set of features that they use for that distribution?\nRohin Shah: I don’t know if I’d say optimal, but yeah. Better than the features that the AI system is using.\nRobert Long: In the space of better features, why aren’t they going past us or into some other optimal space of feature world?\nRohin Shah: I think they would eventually.\nRobert Long: I see, but they might have to go through ours first?\nRohin Shah: So A) I think they would go through ours, B) I think my intuition is something like the features– and this one seems like more just raw intuition and I don’t really have an argument for it– but the features… things like agency, optimization, want, deception, manipulation seem like things that are useful for modeling the world.\nI would be surprised if an AI system went so far beyond that those features didn’t even enter into its calculations. Or, I’d be surprised if that happened very quickly, maybe. I don’t want to make claims about how far past those AI systems could go, but I do think that… I guess I’m also saying that we should be aiming for AI systems that are like… This is a terrible way to operationalize it, but AI systems that are 10X as intelligent as humans, what do we have to do for them? And then once we’ve got AI systems that are 10 x smarter than us, then we’re like, “All right, what more problems could arise in the future?” And ask the AI systems to help us with that as well.\nAsya Bergal: To clarify, the thing you’re saying is… By the time AI systems are good and more powerful, they will have some conception of the kind of features that humans use, and be able to describe their decisions in terms of those features? Or do you think inherently, there’ll be a point where AI systems use the exact same features that humans use?\nRohin Shah: Not the exact same features, but broadly similar features to the ones that humans use.\nRobert Long: Where examples of those features would be like objects, cause, agent, the things that we want interpreted in deep nets but usually can’t.\nRohin Shah: Yes, exactly.\nAsya Bergal: Again, so you think in some sense that that’s a natural way to describe things? Or there’s only one path through getting better at describing things, and that has to go through the way that humans describe things? 
Does that sound right?\nRohin Shah: Yes.\nAsya Bergal: Okay. Does that also feel like an intuition?\nRohin Shah: Yes.\nRobert Long: Sorry, I think I did a bad interviewer thing where I started listing things, I should have just asked you to list some of the features which I think-\nRohin Shah: Well I listed them, like, optimization, want, motivation before, but I agree causality would be another one. But yeah, I was thinking more the things that safety researchers often talk about. I don’t know, what other features do we tend to use a lot? Object’s a good one… the conception of 3D space is one that I don’t think these classifiers have and that we definitely have.\nAnd the concept of 3D space seems like it’s probably going to be useful for an AI system no matter how smart it gets. Currently, they might have a concept of 3D space, but it’s not obvious that they do. And I wouldn’t be surprised if they don’t.\nAt some point, I want to take this intuition and run with it and see where it goes. And try to argue for it more.\nRobert Long: But I think for the purposes of this interview, I think we do understand how this is something that would make things safe by default. At least, in as much as interpretability conduces to safety. Because we could be able to interpret them in and still fuck shit up.\nRohin Shah: Yep. Agreed. Cool.\nSara Haxhia: I guess I’m a little bit confused about how it makes the code more interpretable. I can see how if it uses human brains, we can model it better because we can just say, “These are human things and this means we can make predictions better.” But if you’re looking at a neural net or something, it doesn’t make it more interpretable.\nRohin Shah: If you mean the code, I agree with that.\nSara Haxhia: Okay. So, is this kind of like external, like you being able to model that thing?\nRohin Shah: I think you could look at the… you take a particular input to neural net, you pass it through layers, you see what the activations are. I don’t think if you just look directly at the activations, you’re going to get anything sensible, in the same way that if you look at electrical signals in my brain you’re not going to be able to understand them.\nSara Haxhia: So, is your point that the reason it becomes more interpretable is something more like, you understand its motivations?\nRohin Shah: What I mean is… Are you familiar with Chris Olah’s work?\nSara Haxhia: I’m not.\nRohin Shah: Okay. So Chris Olah does interpretability work with image classifiers. One technique that he uses is: Take a particular neuron in the neural net, say, “I want to maximize the activation of this neuron,” and then do gradient descent on your input image to see what image maximally activates that neuron. And this gives you some insight into what that neuron is detecting. I think things like that will be easier as time goes on.\nRobert Long: Even if it’s not just that particular technique, right? Just the general task?\nRohin Shah: Yes.\nSara Haxhia: How does that relate to the human values thing? It felt like you were saying something like it’s going to model the world in a similar way to the way we do, and that’s going to make it more interpretable. And I just don’t really see the link.\nRohin Shah: A straw version of this, which isn’t exactly what I mean but sort of is the right intuition, would be like maybe if you run the same… What’s the input that maximizes the output of this neuron? You’ll see that this particular neuron is a deception classifier. 
It looks at the input and then based on something, does some computation with the input, maybe the input’s like a dialogue between two people and then this neuron is telling you, “Hey, is person A trying to deceive person B right now?” That’s an example of the sort of thing I am imagining.\nAsya Bergal: I’m going to do the bad interviewer thing where I put words in your mouth. I think one problem right now is you can go a few layers into a neural network and the first few layers correspond to things you can easily tell… Like, the first layer is clearly looking at all the different pixel values, and maybe the second layer is finding lines or something like that. But then there’s this worry that later on, the neurons will correspond to concepts that we have no human interpretation for, so it won’t even make sense to interpret them. Whereas Rohin is saying, “No, actually the neurons will correspond to, or the architecture will correspond to some human understandable concept that it makes sense to interpret.” Does that seem right?\nRohin Shah: Yeah, that seems right. I am maybe not sure that I tie it necessarily to the architecture, but actually probably I’d have to one day.\nAsya Bergal: Definitely, you don’t need to. Yeah.\nRohin Shah: Anyway, I haven’t thought about that enough, but that’s basically that. If you look at current late layers in image classifiers they are often like, “Oh look, this is a detector for lemon tennis balls,” and you’re just like, “That’s a strange concept you’ve got there, neural net, but sure.”\nRobert Long: Alright, cool. Next way of being safe?\nRohin Shah: They’re getting more and more sketchy. I have an intuition that… I should rephrase this. I have an intuition that AI systems are not well-modeled as, “Here’s the objective function and here is the world model.” Most of the classic arguments are: Suppose you’ve got an incorrect objective function, and you’ve got this AI system with this really, really good intelligence, which maybe we’ll call it a world model or just general intelligence. And this intelligence can take in any utility function, and optimize it, and you plug in the incorrect utility function, and catastrophe happens.\nThis does not seem to be the way that current AI systems work. It is the case that you have a reward function, and then you sort of train a policy that optimizes that reward function, but… I explained this the wrong way around. But the policy that’s learned isn’t really… It’s not really performing an optimization that says, “What is going to get me the most reward? Let me do that thing.”\nIt has been given a bunch of heuristics by gradient descent that tend to correlate well with getting high reward and then it just executes those heuristics. It’s kind of similar to… If any of you are fans of the sequences… Eliezer wrote a sequence on evolution and said… What was it? Humans are not fitness maximizers, they are adaptation executors, something like this. And that is how I view neural nets today that are trained by RL. They don’t really seem like expected utility maximizers the way that it’s usually talked about by MIRI or on LessWrong.\nI mostly expect this to continue, I think conditional on AGI being developed soon-ish, like in the next decade or two, with something kind of like current techniques. I think it would be… AGI would be a mesa optimizer or inner optimizer, whichever term you prefer. 
And that that inner optimizer will just sort of have a mishmash of all of these heuristics that point in a particular direction but can’t really be decomposed into ‘here are the objectives, and here is the intelligence’, in the same way that you can’t really decompose humans very well into ‘here are the objectives and here is the intelligence’.\nRobert Long: And why does that lead to better safety?\nRohin Shah: I don’t know that it does, but it leads to not being as confident in the original arguments. It feels like this should be pushing in the direction of ‘it will be easier to correct or modify or change the AI system’. Many of the arguments for risk are ‘if you have a utility maximizer, it has all of these convergent instrumental sub-goals’ and, I don’t know, if I look at humans they kind of sort of pursued convergent instrumental sub-goals, but not really.\nYou can definitely convince them that they should have different goals. They change the thing they are pursuing reasonably often. Mostly this just reduces my confidence in existing arguments rather than gives me an argument for safety.\nRobert Long: It’s like a defeater for AI safety arguments that rely on a clean separation between utility…\nRohin Shah: Yeah, which seems like all of them. All of the most crisp ones. Not all of them. I keep forgetting about the… I keep not taking into account the one where your god-like AI slowly replace humans and humans lose control of the future. That one still seems totally possible in this world.\nRobert Long: If AGI is through current techniques, it’s likely to have systems that don’t have this clean separation.\nRohin Shah: Yep. A separate claim that I would argue for separately– I don’t think they interact very much– is that I would also claim that we will get AGI via essentially current techniques. I don’t know if I should put a timeline on it, but two decades seems plausible. Not saying it’s likely, maybe 50% or something. And that the resulting AGI will look like mesa optimizer.\nAsya Bergal: Yeah. I’d be very curious to delve into why you think that.\nRobert Long: Yeah, me too. Let’s just do that because that’s fast. Also your… What do you mean by current techniques, and what’s your credence in that being what happens?\nSara Haxhia: And like what’s your model for how… where is this coming from?\nRohin Shah: So on the meta questions, first, the current techniques would be like deep learning, gradient descent broadly, maybe RL, maybe meta-learning, maybe things sort of like it, but back propagation or something like that is still involved.\nI don’t think there’s a clean line here. Something like, we don’t look back and say: That. That was where the ML field just totally did a U-turn and did something else entirely.\nRobert Long: Right. Everything that’s involved in the building of the AGI is something you can roughly find in current textbooks or like conference proceedings or something. Maybe combined in new cool ways.\nRohin Shah: Yeah. Maybe, yeah. Yup. And also you throw a bunch of compute at it. That is part of my model. So that was the first one. What is current techniques? Then you asked credence.\nCredence in AGI developed in two decades by current-ish techniques… Depends on the definition of current-ish techniques, but something like 30, 40%. Credence that it will be a mesa optimizer, maybe conditional on this being… The previous thing being true, the credence on it being a mesa optimizer, 60, 70%. 
Yeah, maybe 70%.\nAnd then the actual model for why this is… it’s sort of related to the previous points about features wherein there are lots and lots of features and humans have settled on the ones that are broadly useful across a wide variety of contexts. I think that in that world, what you want to do to get AGI is train an AI system on a very broad… train an AI system maybe by RL or something else, I don’t know. Probably RL.\nOn a very large distribution of tasks or a large distribution of something, maybe they’re tasks, maybe they’re not like, I don’t know… Human babies aren’t really training on some particular task. Maybe it’s just a bunch of unsupervised learning. And in doing so over a lot of time and a lot of compute, it will converge on the same sorts of features that humans use.\nI think the nice part of this story is that it doesn’t require that you explain how the AI system generalizes– generalization in general is just a very difficult property to get out of ML systems if you want to generalize outside of the training distribution. You mostly don’t require that here because, A) it’s being trained on a very wide variety of tasks and B) it’s sort of mimicking the same sort of procedure that was used to create humans. Where, with humans you’ve also got the sort of… evolution did a lot of optimization in order to create creatures that were able to work effectively in the environment, the environment’s super complicated, especially because there are other creatures that are trying to use the same resources.\nAnd so that’s where you get the wide variety or, the very like broad distribution of things. Okay. What have I not said yet?\nRobert Long: That was your model. Are you done with the model of how that sort of thing happens or-\nRohin Shah: I feel like I’ve forgotten aspects, forgotten to say aspects of the model, but maybe I did say all of it.\nRobert Long: Well, just to recap: One thing you really want is a generalization, but this is in some sense taken care of because you’re just training on a huge bunch of tasks. Secondly, you’re likely to get them learning useful features. And one-\nRohin Shah: And thirdly, it’s mimicking what evolution did, which is the one example we have of a process that created general intelligence.\nAsya Bergal: It feels like implicit in this sort of claim for why it’s soon is that compute will grow sufficiently to accommodate this process, which is similar to evolution. It feels like there’s implicit there, a claim that compute will grow and a claim that however compute will grow, that’s going to be enough to do this thing.\nRohin Shah: Yeah, that’s fair. I think actually I don’t have good reasons for believing that, maybe I should reduce my credences on these a bit, but… That’s basically right. So, it feels like for the first time I’m like, “Wow, I can actually use estimates of human brain computation and it actually makes sense with my model.”\nI’m like, “Yeah, existing AI systems seem more expensive to run than the human brain… Sorry, if you compare dollars per hour of human brain equivalent. Hiring a human is what? Maybe we call it $20 an hour or something if we’re talking about relatively simple tasks. And then, I don’t think you could get an equivalent amount of compute for $20 for a while, but maybe I forget what number it came out to, I got to recently. 
Yeah, actually the compute question feels like a thing I don’t actually know the answer to.\nAsya Bergal: A related question– this is just to clarify for me– it feels like maybe the relevant thing to compare to is not the amount of compute it takes to run a human brain, but like-\nRohin Shah: Evolution also matters.\nAsya Bergal: Yeah, the amount of compute to get to the human brain or something like that.\nRohin Shah: Yes, I agree with that, that that is a relevant thing. I do think we can be way more efficient than evolution.\nAsya Bergal: That sounds right. But it does feel like that’s… that does seem like that’s the right sort of quantity to be looking at? Or does it feel like-\nRohin Shah: For training, yes.\nAsya Bergal: I’m curious if it feels like the training is going to be more expensive than the running in your model.\nRohin Shah: I think the… It’s a good question. It feels like we will need a bunch of experimentation, figuring out how to build essentially the equivalent of the human brain. And I don’t know how expensive that process will be, but I don’t think it has to be a single program that you run. I think it can be like… The research process itself is part of that.\nAt some point I think we build a system that is initially trained by gradient descent, and then the training by gradient descent is comparable to humans going out in the world and acting and learning based on that. A pretty big uncertainty here is: How much has evolution put in a bunch of important priors into human brains? Versus how much are human brains actually just learning most things from scratch? Well, scratch or learning from their parents.\nPeople would claim that babies have lots of inductive biases, I don’t know that I buy it. It seems like you can learn a lot with a month of just looking at the world and exploring it, especially when you get way more data than current AI systems get. For one thing, you can just move around in the world and notice that it’s three dimensional.\nAnother thing is you can actually interact with stuff and see what the response is. So you can get causal intervention data, and that’s probably where causality becomes such an ingrained part of us. So I could imagine that these things that we see as core to human reasoning, things like having a notion of causality or having a notion, I think apparently we’re also supposed to have as babies an intuition about statistics and like counterfactuals and pragmatics.\nBut all of these are done with brains that have been in the world for a long time, relatively speaking, relative to AI systems. I’m not actually sure if I buy that this is because we have really good priors.\nAsya Bergal: I recently heard… Someone was talking to me about an argument that went like: Humans, in addition to having priors, built-ins from evolution and learning things in the same way that neural nets do, learn things through… you go to school and you’re taught certain concepts and algorithms and stuff like that. And that seems distinct from learning things in a gradient descenty way. Does that seem right?\nRohin Shah: I definitely agree with that.\nAsya Bergal: I see. And does that seem like a plausible thing that might not be encompassed by some gradient descenty thing?\nRohin Shah: I think the idea there would be, you do the gradient descenty thing for some time. That gets you in the AI system that now has inside of it a way to learn. That’s sort of what it means to be a mesa optimizer. 
And then that mesa optimizer can go and do its own thing to do better learning. And maybe at some point you just say, “To hell with this gradient descent, I’ll turn it off.” Probably humans don’t do that. Maybe humans do that, I don’t know.\nAsya Bergal: Right. So you do gradient descent to get to some place. And then from there you can learn in the same way– where you just read articles on the internet or something?\nRohin Shah: Yeah. Oh, another reason that I think this… Another part of my model for why this is more likely– I knew there was more– is that, exactly that point, which is that learning probably requires some more deliberate active process than gradient descent. Gradient descent feels really relatively dumb, not as dumb as evolution, but close. And the only plausible way I’ve seen so far for how that could happen is by mesa optimization. And it also seems to be how it happened with humans. I guess you could imagine the meta-learning system that’s explicitly trying to develop this learning algorithm.\nAnd then… okay, by the definition of mesa optimizers, that would not be a mesa optimizer, it would be an inner optimizer. So maybe it’s an inner optimizer instead if we use-\nAsya Bergal: I think I don’t quite understand what it means that learning requires, or that the only way to do learning is through mesa optimization.\nRohin Shah: I can give you a brief explanation of what it means to me in a minute or two. I’m going to go and open my summary because that says it better than I can.\nLearned optimization, that’s what it was called. All right. Suppose you’re searching over a space of programs to find one that plays tic-tac-toe well. And initially you find a program that says, “If the board is empty, put something in the center square,” or rather, “If the center square is empty, put something there. If there’s two in a row somewhere of yours, put something to complete it. If your opponent has two in a row somewhere, make sure to block it,” and you learn a bunch of these heuristics. Those are some nice, interpretable heuristics but maybe you’ve got some uninterpretable ones too.\nBut as you search more and more, eventually someday you stumble upon the minimax algorithm, which just says, “Play out the game all the way until the end. See whether in all possible moves that you could make, and all possible moves your opponent could make, and search for the path where you are guaranteed to win.”\nAnd then you’re like, “Wow, this algorithm, it just always wins. No one can ever beat it. It’s amazing.” And so basically you have this outer optimization loop that was searching over a space of programs, and then it found a program, so one element of the space, that was itself performing optimization, because it was searching through possible moves or possible paths in the game tree to find the actual policy it should play.\nAnd so your outer optimization algorithm found an inner optimization algorithm that is good, or it solves the task well. And the main claim I will make, and I’m not sure if… I don’t think the paper makes it, but the claim I will make is that for many tasks if you’re using gradient descent as your optimizer, because gradient descent is so annoyingly slow and simple and inefficient, the best way to actually achieve the task will be to find a mesa optimizer. So gradient descent finds parameters that themselves take an input, do some sort of optimization, and then figure out an output.\nAsya Bergal: Got you.
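[Note: As an illustration of the tic-tac-toe example above, here is a minimal minimax sketch in Python. It is not from the interview or from the learned-optimization paper; the board representation and function names are illustrative assumptions. The point it shows is that the "found" program produces its moves by searching the game tree at run time, rather than by applying the fixed heuristics the outer search began with.]

```python
# Minimal minimax player for tic-tac-toe, illustrating a program that is
# itself an optimizer: it searches the game tree rather than applying
# fixed heuristics. Board is a list of 9 cells: 'X', 'O', or None.
# Unoptimized (no pruning or memoization), which is fine for a 3x3 board.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (value, move) for `player` to move: +1 win, 0 draw, -1 loss."""
    won = winner(board)
    if won is not None:
        return (1 if won == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full, no winner: draw
    best_value, best_move = -2, None
    opponent = 'O' if player == 'X' else 'X'
    for move in moves:
        board[move] = player
        # The opponent's best outcome is the negation of ours (zero-sum game).
        value = -minimax(board, opponent)[0]
        board[move] = None
        if value > best_value:
            best_value, best_move = value, move
    return best_value, best_move

# Example: from an empty board, 'X' searches the full game tree for a move.
empty = [None] * 9
print(minimax(empty, 'X'))  # (0, 0): with perfect play the game is a draw
```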
So I guess part of it is dividing into sub-problems that need to be optimized and then running… Does that seem right?\nRohin Shah: I don’t know that there’s necessarily a division into sub problems, but it’s a specific kind of optimization that’s tailored for the task at hand. Maybe another example would be… I don’t know, that’s a bad example. I think the analogy to humans is one I lean on a lot, where evolution is the outer optimizer and it needs to build things that replicate a bunch.\nIt turns out having things replicate a bunch is not something you can really get by heuristics. What you need to do is to create humans who can themselves optimize and figure out how to… Well, not replicate a bunch, but do things that are very correlated with replicating a bunch. And that’s how you get very good replicators.\nAsya Bergal: So I guess you’re saying… often the gradient descent process will– it turns out that having an optimizer as part of the process is often a good thing. Yeah, that makes sense. I remember them in the mesa optimization stuff.\nRohin Shah: Yeah. So that intuition is one of the reasons I think that… It’s part of my model for why AGI will be a mesa optimizer. Though I do– in the world where we’re not using current ML techniques I’m like, “Oh, anything can happen.”\nAsya Bergal: That makes sense. Yeah, I was going to ask about that. Okay. So conditioned on current ML techniques leading to it, it’ll probably go through mesa optimizers?\nRohin Shah: Yeah. I might endorse the claim with much weaker confidence even without current ML techniques, but I’d have to think a lot more about that. There are arguments for why mesa optimization is the thing you want– is the thing that happens– that are separate from deep learning. In fact, the whole paper doesn’t really talk about deep learning very much.\nRobert Long: Cool. So that was digging into the model of why and how confident we should be on current technique AGI, prosaic AI I guess people call it? And seems like the major sources of uncertainty there are: does compute actually go up, considerations about evolution and its relation to human intelligence and learning and stuff?\nRohin Shah: Yup. So the Median Group, for example, will agree with most of this analysis… Actually no. The Median Group will agree with some of this analysis but then say, and therefore, AGI is extremely far away, because evolution threw in some horrifying amount of computation and there’s no way we can ever match that.\nAsya Bergal: I’m curious if you still have things on your list of like safety by default arguments, I’m curious to go back to that. Maybe you covered them.\nRohin Shah: I think I have covered them.  The way I’ve listed this last one is ‘AI systems will be optimizers in the same way that humans are optimizers, not like Eliezer-style EU maximizers’… which is basically what I’ve just been saying.\nSara Haxhia: But it seems like it still feels dangerous.. if a human had loads of power, it could do things that… even if they aren’t maximizing some utility.\nRohin Shah: Yeah, I agree, this is not an argument for complete safety. I forget where I was initially going with this point. I think my main point here is that mesa optimizers don’t nice… Oh, right, they don’t nicely factor into utility function and intelligence. 
And that reduces my credence in existing arguments, and there are still issues which are like, with a mesa optimizer, your capabilities generalize with distributional shift, but your objective doesn’t.\nHumans are not really optimizing for reproductive success. And arguably, if someone had wanted to create things that were really good at reproducing, they might have used evolution as a way to do it. And then humans showed up and were like, “Oh, whoops, I guess we’re not doing that anymore.”\nI mean, the mesa optimizers paper is a very pessimistic paper. In their view, mesa optimization is a bad thing that leads to danger and that’s… I agree that all of the reasons they point out for mesa optimization being dangerous are in fact reasons that we should be worried about mesa optimization.\nI think mostly I see this as… convergent instrumental sub-goals are less likely to be obviously a thing that this pursues. And that just feels more important to me. I don’t really have a strong argument for why that consideration dominates-\nRobert Long: The convergent instrumental sub-goals consideration?\nRohin Shah: Yeah.\nAsya Bergal: I have a meta credence question, maybe two layers of them. The first being, do you consider yourself optimistic about AI for some random qualitative definition of optimistic? And the follow-up is, what do you think is the credence that by default things go well, without additional intervention by us doing safety research or something like that?\nRohin Shah: I would say relative to AI alignment researchers, I’m optimistic. Relative to the general public or something like that, I might be pessimistic. It’s hard to tell. I don’t know, credence that things go well? That’s a hard one. Intuitively, it feels like 80 to 90%, 90%, maybe. 90 feels like I’m being way too confident and like, “What? You only assign 10%, even though you have literally no… you can’t predict the future and no one can predict the future, why are you trying to do it?” It still does feel more like 90%.\nAsya Bergal: I think that’s fine. I guess the follow-up is sort of like, between the sort of things that you gave, which were like: Slow takeoff allows for correcting things, things that are more powerful will be more interpretable, and I think the third one being, AI systems not actually being… I’m curious how much do you feel like your actual belief in this leans on these arguments? Does that make sense?\nRohin Shah: Yeah. I think the slow takeoff one is the biggest one. If I believe that at some point we would build an AI system that within the span of a week was just way smarter than any human, and before that the most powerful AI system was below human level, I’m just like, “Shit, we’re doomed.”\nRobert Long: Because there it doesn’t matter if it goes through interpretable features particularly.\nRohin Shah: There I’m like, “Okay, once we get to something that’s super intelligent, it feels like the human ant analogy is basically right.” And unless we… Maybe we could still be fine because people thought about it and put in… Maybe I’m still like, “Oh, AI researchers would have been able to predict that this would’ve happened and so were careful.”\nI don’t know, in a world where fast takeoff is true, lots of things are weird about the world, and I don’t really understand the world. So I’m like, “Shit, it’s quite likely something goes wrong.” I think the slow takeoff is definitely a crux. Also, we keep calling it slow takeoff and I want to emphasize that it’s not necessarily slow in calendar time. 
It’s more like gradual.\nAsya Bergal: Right, like ‘enough time for us to correct things’ takeoff.\nRohin Shah: Yeah. And there’s no discontinuity between… you’re not like, “Here’s a 2X human AI,” and a couple of seconds later it’s now… Not a couple of seconds later, but like, “Yeah, we’ve got 2X AI,” for a few months and then suddenly someone deploys a 10,000X human AI. If that happened, I would also be pretty worried.\nIt’s more like there’s a 2X human AI, then there’s like a 3X human AI and then a 4X human AI. Maybe this happens from the same AI getting better and learning more over time. Maybe it happens from it designing a new AI system that learns faster, but starts out lower and so then overtakes it sort of continuously, stuff like that.\nSo that I think, yeah, without… I don’t really know what the alternative to it is, but in the one where it’s not human level, and then 10,000X human in a week and it just sort of happened, that I’m like, I don’t know, 70% of doom or something, maybe more. That feels like I’m… I endorse that credence even less than most just because I feel like I don’t know what that world looks like. Whereas on the other ones I at least have a plausible world in my head.\nAsya Bergal: Yeah, that makes sense. I think you’ve mentioned, in a slow takeoff scenario that… Some people would disagree that in a world where you notice something was wrong, you wouldn’t just hack around it, and keep going.\nAsya Bergal: I have a suggestion which it feels like maybe is a difference and I’m very curious for your take on whether that seems right or seems wrong. It seems like people believe there’s going to be some kind of pressure for performance or competitiveness that pushes people to try to make more powerful AI in spite of safety failures. Does that seem untrue to you or like you’re unsure about it?\nRohin Shah: It seems somewhat untrue to me. I recently made a comment about this on the Alignment Forum. People make this analogy between AI x-risk and risk of nuclear war, on mutually assured destruction. That particular analogy seems off to me because with nuclear war, you need the threat of being able to hurt the other side whereas with AI x-risk, if the destruction happens, that affects you too. So there’s no mutually assured destruction type dynamic.\nYou could imagine a situation where for some reason the US and China are like, “Whoever gets to AGI first just wins the universe.” And I think in that scenario maybe I’m a bit worried, but even then, it seems like extinction is just worse, and as a result, you get significantly less risky behavior? But I don’t think you get to the point where people are just literally racing ahead with no thought to safety for the sake of winning.\nI also don’t think that you would… I don’t think that differences in who gets to AGI first are going to lead to you win the universe or not. I think it leads to pretty continuous changes in power balance between the two.\nI also don’t think there’s a discrete point at which you can say, “I’ve won the race.” I think it’s just like capabilities keep improving and you can have more capabilities than the other guy, but at no point can you say, “Now I have won the race.” I suppose if you could get a decisive strategic advantage, then you could do it. And that has nothing to do with what your AI capability… If you’ve got a decisive strategic advantage that could happen.\nI would be surprised if the first human-level AI allowed you to get anything close to a decisive strategic advantage. 
Maybe when you’re at 1000X human level AI, perhaps. Maybe not a thousand. I don’t know. Given slow takeoff, I’d be surprised if you could knowably be like, “Oh yes, if I develop this piece of technology faster than my opponent, I will get a decisive strategic advantage.”\nAsya Bergal: That makes sense. We discussed a lot of cruxes you have. Do you feel like there’s evidence that you already have pre-computed that you think could move you in one direction or another on this? Obviously, if you’ve got evidence that X was true, that would move you, but are there concrete things where you’re like, “I’m interested to see how this will turn out, and that will affect my views on the thing?”\nRohin Shah: So I think I mentioned the… On the question of timelines, they are like the… How much did evolution actually bake in to humans? It seems like a question that could put… I don’t know if it could be answered, but maybe you could answer that one. That would affect it… I lean on the side of not really, but it’s possible that the answer is yes, actually quite a lot. If that was true, I just lengthen my timelines basically.\nSara Haxhia: Can you also explain how this would change your behavior with respect to what research you’re doing, or would it not change that at all?\nRohin Shah: That’s a good question. I think I would have to think about that one for longer than two minutes.\nAs background on that, a lot of my current research is more trying to get AI researchers to be thinking about what happens when you deploy, when you have AI systems working with humans, as opposed to solving alignment. Mostly because I for a while couldn’t see research that felt useful to me for solving alignment. I think I’m now seeing more things that I can do that seem more relevant and I will probably switch to doing them possibly after graduating because thesis, and needing to graduate, and stuff like that.\nRohin Shah: Yes, but you were asking evidence that would change my mind-\nAsya Bergal: I think it’s also reasonable to be not sure exactly about concrete things. I don’t have a good answer to this question off the top of my head.\nRohin Shah: It’s worth at least thinking about for a couple of minutes. I think I could imagine getting more information from either historical case studies of how people have dealt with new technologies, or analyses of how AI researchers currently think about things or deal with stuff, could change my mind about whether I think the AI community would by default handle problems that arise, which feels like an important crux between me and others.\nI think currently my sense is if the like… You asked me this, I never answered it. If the AI safety field just sort of vanished, but the work we’ve done so far remained and conscientious AI researchers remained, or people who are already AI researchers and already doing this sort of stuff without being influenced by EA or rationality, then I think we’re still fine because people will notice failures and correct them.\nI did answer that question. I said something like 90%. This was a scenario I was saying 90% for. And yeah, that one feels like a thing that I could get evidence on that would change my mind.\nI can’t really imagine what would cause me to believe that AI systems will actually do a treacherous turn without ever trying to deceive us before that. But there might be something there. 
I don’t really know what evidence would move me, any sort of plausible evidence I could see that would move me in that direction.\nSlow takeoff versus fast takeoff…. I feel like MIRI still apparently believes in fast takeoff. I don’t have a clear picture of these reasons, I expect those reasons would move me towards fast takeoff.\nOh, on the expected utility max or the… my perception of MIRI, or of Eliezer and also maybe MIRI, is that they have this position that any AI system, any sufficiently powerful AI system, will look to us like an expected utility maximizer, therefore convergent instrumental sub-goals and so on. I don’t buy this. I wrote a post explaining why I don’t buy this.\nYeah, there’s a lot of just like.. MIRI could say their reasons for believing things and that would probably cause me to update. Actually, I have enough disagreements with MIRI that they may not update me, but it could in theory update me.\nAsya Bergal: Yeah, that’s right. What are some disagreements you have with MIRI?\nRohin Shah: Well, the ones I just mentioned. There is this great post from maybe not a year ago, but in 2018, called ‘Realism about Rationality’, which is basically this perspective that there is the one true learning algorithm or the one correct way of doing exploration, or just, there is a platonic ideal of intelligence. We could in principle find it, code it up, and then we would have this extremely good AI algorithm.\nThen there is like, to the extent that this was a disagreement back in 2008, Robin Hanson would have been on the other side saying, “No, intelligence is just like a broad… just like conglomerate of a bunch of different heuristics that are all task specific, and you can’t just take one and apply it on the other space. It is just messy and complicated and doesn’t have a nice crisp formalization.”\nAnd, I fall not exactly on Robin Hanson’s side, but much more on Robin Hanson’s side than the ‘rationality is a real formalizable natural thing in the world’.\nSara Haxhia: Do you have any idea where the cruxes of disagreement are at all?\nRohin Shah: No, that one has proved very difficult to…\nRobert Long: I think that’s an AI Impacts project, or like a dissertation or something. I feel like there’s just this general domain specificity debate, how general is rationality debate…\nI think there are these very crucial considerations about the nature of intelligence and how domain specific it is and they were an issue between Robin and Eliezer and no one… It’s hard to know what evidence, what the evidence is in this case.\nRohin Shah: Yeah. But I basically agree with this and that it feels like a very deep disagreement that I have never had any success in coming to a resolution to, and I read arguments by people who believe this and I’m like, “No.”\nSara Haxhia: Have you spoken to people?\nRohin Shah: I have spoken to people at CHAI, I don’t know that they would really be on board this train. Hold on, Daniel probably would be. And that hasn’t helped that much. Yeah. This disagreement feels like one where I would predict that conversations are not going to help very much.\nRobert Long: So, the general question here was disagreements with MIRI, and then there’s… And you’ve mentioned fast takeoff and maybe relatedly, the Yudkowsky-Hanson–\nRohin Shah: Realism about Rationality is how I’d phrase it. There’s also the– are AI researchers conscientious? Well, actually I don’t know that they would say they are not conscientious. 
Maybe they’d say they’re not paying attention or they have motivated reasoning for ignoring the issues… lots of things like that.\nRobert Long: And this issue of do advanced intelligences look enough like EU maximizers…\nRohin Shah: Oh, yes. That one too. Yeah, sorry. That’s one of the major ones. Not sure how I forgot that.\nRobert Long: I remember it because I’m writing it all down, so… again, you’ve been talking about very complicated things.\nRohin Shah: Yeah. Related to the Realism about Rationality point is the use of formalism and proof. Nor formalism, but proof at least. I don’t know that MIRI actually believes that what we need to do is write a bunch of proofs about our AI system, but it sure sounds like it, and that seems like a too difficult, and basically impossible task to me, if the proofs that we’re trying to write are about alignment or beneficialness or something like that.\nThey also seem to… No, maybe all the other disagreements can be traced back to these disagreements. I’m not sure.", "url": "https://aiimpacts.org/conversation-with-rohin-shah/", "title": "Conversation with Rohin Shah", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-10-31T12:02:15+00:00", "paged_url": "https://aiimpacts.org/feed?paged=10", "authors": ["Asya Bergal"], "id": "1a6350dcf91b7f0d17efa60dca0f5b6b", "summary": []} {"text": "The unexpected difficulty of comparing AlphaStar to humans\n\nBy Rick Korzekwa, 17 September 2019\n Artificial intelligence defeated a pair of professional Starcraft II players for the first time in December 2018. Although this was generally regarded as an impressive achievement, it quickly became clear that not everybody was satisfied with how the AI agent, called AlphaStar, interacted with the game, or how its creator, DeepMind, presented it. Many observers complained that, in spite of DeepMind’s claims that it performed at similar speeds to humans, AlphaStar was able to control the game with greater speed and accuracy than any human, and that this was the reason why it prevailed.\nAlthough I think this story is mostly correct, I think it is harder than it looks to compare AlphaStar’s interaction with the game to that of humans, and to determine to what extent this mattered for the outcome of the matches. Merely comparing raw numbers for actions taken per minute (the usual metric for a player’s speed) does not tell the whole story, and appropriately taking into account mouse accuracy, the differences between combat actions and non-combat actions, and the control of the game’s “camera” turns out to be quite difficult.\nHere, I begin with an overview of Starcraft II as a platform for AI research, a timeline of events leading up to AlphaStar’s success, and a brief description of how AlphaStar works. Next, I explain why measuring performance in Starcraft II is hard, show some analysis on the speed of both human and AI players, and offer some preliminary conclusions on how AlphaStar’s speed compares to humans. After this, I discuss the differences in how humans and AlphaStar “see” the game and the impact this has on performance. Finally, I give an update on DeepMind’s current experiments with Starcraft II and explain why I expect we will encounter similar difficulties when comparing human and AI performance in the future. \n Why Starcraft is a Target for AI Research \n Starcraft II has been a target for AI for several years, and some readers will recall that Starcraft II appeared on our 2016 expert survey. 
But there are many games and many AIs that play them, so it may not be obvious why Starcraft II is a target for research or why it is of interest to those of us that are trying to understand what is happening with AI. \nFor the most part, Starcraft II was chosen because it is popular, and it is difficult for AI. Starcraft II is a real time strategy game, and like similar games, it involves a variety of tasks: harvesting resources, constructing bases, researching technology, building armies, and attempting to destroy the opponent’s base are all part of the game. Playing it well requires balancing attention between many things at once: planning ahead, ensuring that one’s units1 are good counters for the enemy’s units, predicting opponents’ moves, and changing plans in response to new information. There are other aspects that make it difficult for AI in particular: it has imperfect information2, an extremely large action space, and takes place in real time. When humans play, they engage in long term planning, making the best use of their limited capacity for attention, and crafting ploys to deceive the other players.\nThe game’s popularity is important because it makes it a good source of extremely high human talent and increases the number of people that will intuitively understand how difficult the task is for a computer. Additionally, as a game that is designed to be suitable for high-level competition, the game is carefully balanced so that competition is fair, does not favor just one strategy3, and does not rely too heavily on luck. \n Timeline of Events \n To put AlphaStar’s performance in context, it helps to understand the timeline of events over the past few years:\nNovember 2016: Blizzard and DeepMind announce they are launching a new project in Starcraft II AI\nAugust 2017: DeepMind releases the Starcraft II API, a set of tools for interfacing AI with the game\nMarch 2018: Oriol Vinyals gives an update, saying they’re making progress, but he doesn’t know if their agent will be able to beat the best human players\nNovember 3, 2018: Oriol Vinyals gives another update at a Blizzcon panel, and shares a sequence of videos demonstrating AlphaStar’s progress in learning the game, including learning to win against the hardest built-in AI. When asked if they could play against it that day, he says “For us, it’s still a bit early in the research.”\nDecember 12, 2018: AlphaStar wins five straight matches against TLO, a professional Starcraft II player, who was playing as Protoss4, which is off-race for him. DeepMind keeps the matches secret.\nDecember 19, 2018: AlphaStar, given an additional week of training time5, wins five consecutive Protoss vs Protoss matches vs MaNa, a pro Starcraft II player who is higher ranked than TLO and specializes in Protoss. DeepMind continues to keep the victories a secret.\nJanuary 24, 2019: DeepMind announces the successful test matches vs TLO and MaNa in a live video feed. MaNa plays a live match against a version of AlphaStar which had more constraints on how it “saw” the map, forcing it to interact with the game in a way more similar to humans6. AlphaStar loses when MaNa finds a way to exploit a blatant failure of the AI to manage its units sensibly.
The replays of all the matches are released, and people start arguing7 about how (un)fair the matches were, whether AlphaStar is any good at making decisions, and how honest DeepMind was in presenting the results of the matches.\nJuly 10, 2019: DeepMind and Blizzard announce that they will allow an experimental version of AlphaStar to play on the European ladder8, for players who opt in. The agent will play anonymously, so that most players will not know that they are playing against a computer. Over the following weeks, players attempt to discern whether they played against the agent, and some post replays of matches in which they believe they were matched with the agent. \n How AlphaStar works \n The best place to learn about AlphaStar is from DeepMind’s page about it. There are a few particular aspects of the AI that are worth keeping in mind:\nIt does not interact with the game like a human does: Humans interact with the game by looking at a screen, listening through headphones or speakers, and giving commands through a mouse and keyboard. AlphaStar is given a list of units or buildings and their attributes, which includes things like their location, how much damage they’ve taken, and which actions they’re able to take, and gives commands directly, using coordinates and unit identifiers. For most of the matches, it had access to information about anything that wouldn’t normally be hidden from a human player, without needing to control a “camera” that focuses on only one part of the map at a time. For the final match, it had a camera restriction similar to humans, though it still was not given screen pixels as input. Because it gives commands directly through the game, it does not need to use a mouse accurately or worry about tapping the wrong key by accident.\nIt is trained first by watching human matches, and then through self-play: The neural network is trained first on a large database of matches between humans, and then by playing against versions of itself.\nIt is a set of agents selected from a tournament: Hundreds of versions of the AI play against each other, and the ones that perform best are selected to play against human players. Each one has its own set of units that it is incentivized to use via reinforcement learning, so that they each play with different strategies. TLO and MaNa played against a total of 11 agents, all of which were selected from the same tournament, except the last one, which had been substantially modified. The agents that defeated MaNa had each played for hundreds of years in the virtual tournament9. \n January/February Impressions Survey \n Before deciding to focus my investigation on a comparison between human and AI performance in Starcraft II, I conducted an informal survey with my Facebook friends, my colleagues at AI Impacts, and a few people from an effective altruism Facebook group. I wanted to know what they were thinking about the matches in general, with an emphasis on which factors most contributed to the outcome of the matches. I’ve put details about my analysis and the full results of the survey in the appendix at the end of this article, but I’ll summarize a few major results here. \nForecasts\nThe timing and nature of AlphaStar’s success seems to have been mostly in line with people’s expectations, at least at the time of the announcement. Some respondents did not expect to see it for a year or two, but on average, AlphaStar was less than a year earlier than expected. 
It is probable that some respondents had been expecting it to take longer, but updated their predictions in 2016 after finding out that DeepMind was working on it. For future expectations, a majority of respondents expect to see an agent (not necessarily AlphaStar) that can beat the best humans without any of the current caveats within two years. In general, I do not think that I worded the forecasting questions carefully enough to infer very much from the answers given by survey respondents.\nSome readers may be wondering how these survey results compare to those of our more careful 2016 survey, or how we should view the earlier survey results in light of MaNa and TLOs defeat at the hands of AlphaStar. The 2016 survey specified an agent that only receives a video of the screen, so that prediction has not yet resolved. But the median respondent assigned 50% probability of seeing such an agent that can defeat the top human players at least 50% of the time by 202110. I don’t personally know how hard it is to add in that capability, but my impression from speaking to people with greater machine learning expertise than mine is that this is not out of reach, so these predictions still seem reasonable, and are not generally in disagreement with the results from my informal survey.\nSpeed\nNearly everyone thought that AlphaStar was able to give commands faster and more accurately than humans, and that this advantage was an important factor in the outcome of the matches. I looked into this in more detail, and wrote about it in the next section.\nCamera\nAs I mentioned in the description of AlphaStar, it does not see the game the same way that humans do. Its visual field covered the entire map, though its vision was still affected by the usual fog of war11. Survey respondents ranked this as an important factor in the outcome of the matches.\nGiven these results, I decided to look into the speed and camera issues in more detail. \n The Speed Controversy \n Starcraft is a game that rewards the ability to micromanage many things at once and give many commands in a short period of time. Players must simultaneously build their bases, manage resource collection, scout the map, research better technology, build individual units to create an army, and fight battles against other players. The combat is sufficiently fine grained that a player who is outnumbered or outgunned can often come out ahead by exerting better control over the units that make up their military forces, both on a group level and an individual level. For years, there have been simple Starcraft II bots that, although they cannot win a match against a highly-skilled human player, can do amazing things that humans can’t do, by controlling dozens of units individually during combat. In practice, human players are limited by how many actions they can take in a given amount of time, usually measured in actions per minute (APM).  Although DeepMind imposed restrictions on how quickly AlphaStar could react to the game and how many actions it could take in a given amount of time, many people believe that the agent was sometimes able to act with superhuman speed and precision.  \n Here is a graph12 of the APM for MaNa (red) and AlphaStar (blue), through the second match, with five-second bins: \n Actions per minute for MaNa (red) and AlphaStar (blue) in their second game. The horizontal axis is time, and the vertical axis is 5 second average APM. \nAt first glance, this looks reasonably even. 
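As a rough sketch of the computation behind a plot like this, five-second APM bins can be built from a list of per-player command timestamps. The snippet below assumes the timestamps (in seconds) have already been extracted from a replay; the function name and data format are illustrative, not taken from any particular replay-parsing tool.

```python
# Sketch: actions-per-minute in fixed 5-second bins, given a list of command
# timestamps (in seconds) for one player. Extracting the timestamps from a
# replay file is left aside; this only shows the binning step.

def apm_bins(timestamps, bin_seconds=5.0, game_length=None):
    """Return a list of APM values, one per consecutive bin of `bin_seconds`."""
    if game_length is None:
        game_length = max(timestamps, default=0.0)
    n_bins = int(game_length // bin_seconds) + 1
    counts = [0] * n_bins
    for t in timestamps:
        counts[int(t // bin_seconds)] += 1
    # Convert a raw count per bin into a per-minute rate.
    return [c * (60.0 / bin_seconds) for c in counts]

# Toy usage: 20 commands in the first 5 seconds -> 240 APM in that bin.
example = [i * 0.25 for i in range(20)]
print(apm_bins(example)[0])  # 240.0
```

Average APM over a match and peak five-second APM then fall out directly from the binned values.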
AlphaStar has both a lower average APM (180 vs MaNa’s 270) for the whole match, and a lower peak 5 second APM (495 vs Mana’s 615). This seems consistent with DeepMind’s claim that AlphaStar was restricted to human-level speed. But a more detailed look at which actions are actually taken during these peaks reveals some crucial differences. Here’s a sample of actions taken by each player during their peaks:\nLists of commands for MaNa and AlphaStar during each player’s peak APM for game 2\n MaNa hit his APM peaks early in the game by using hot keys to twitchily switch back and forth between control groups13 for his workers and the main building in his base. I don’t know why he’s doing this: maybe to warm up his fingers (which apparently is a thing), as a way to watch two things at once, to keep himself occupied during the slow parts of the early game, or some other reason understood only by the kinds of people that can produce Starcraft commands faster than I can type. But it drives up his peak APM, and probably is not very important to how the game unfolds14. Here’s what MaNa’s peak APM looked like at the beginning of Game 2 (if you look at the bottom of the screen, you can see that the units he has selected switches back-and-forth between his workers and the building that he uses to make more workers): \nMaNa’s play during his peak APM for match 2. Most of his actions consist of switching between control groups without giving new commands to any units or buildings\n AlphaStar hit peak APM in combat. The agent seems to reserve a substantial portion of its limited actions budget until the critical moment when it can cash them in to eliminate enemy forces and gain an advantage. Here’s what that looked like near the end of game 2, when it won the engagement that probably won it the match (while still taking a few actions back at its base to keep its production going): \nAlphaStar’s play during its peak APM in match 2. Most of its actions are related to combat, and require precise timing.\n It may be hard to see what exactly is happening here for people who have not played the game. AlphaStar (blue) is using extremely fine-grained control of its units to defeat MaNa’s army (red) in an efficient way. This involves several different actions: Commanding units to move to different locations so they can make their way into his base while keeping them bunched up and avoiding spots that make them vulnerable, focusing fire on MaNa’s units to eliminate the most vulnerable ones first, using special abilities to lift MaNa’s units off the ground and disable them, and redirecting units to attack MaNa’s workers once a majority of MaNa’s military units are taken care of. \n Given these differences between how MaNa and AlphaStar play, it seems clear that we can’t just use raw match-wide APM to compare the two, which most people paying attention seem to have noticed fairly quickly after the matches. The more difficult question is whether AlphaStar won primarily by playing with a level of speed and accuracy that humans are incapable of, or by playing better in other ways. Though based on the analysis that I am about to present I think the answer is probably that AlphaStar won through speed, I also think the question is harder to answer definitively than many critics of DeepMind are making it out to be. \n A very fast human can average well over 300 APM for several minutes, with 5 second bursts at over 600 APM. 
Although these bursts are not always throwaway commands like those from the MaNa vs AlphaStar matches, they tend not to be commands that require highly accurate clicking, or rapid movement across the map. Take, for example, this 10-second, 600 APM peak from current top player Serral:\nSerral's play during a 10-second, 600 APM peak\nHere, Serral has just finished focusing on a pair of battles with the other player, and is taking care of business in his base, while still picking up some pieces on the battlefield. It might not be obvious why he is issuing so many commands during this time, so let's look at the list of commands:\n\nThe lines that say "Morph to Hydralisk" and "Morph to Roach" represent a series of repeats of that command. For a human player, this is a matter of pressing the same hotkey many times, or even just holding down the key to give the command very rapidly [15]. You can see this in the gif by looking at the bottom center of the screen, where he selects a bunch of worm-looking things and turns them all into a bunch of egg-looking things (it happens very quickly, so it can be easy to miss).\nWhat Serral is doing here is difficult, and the ability to do it only comes with years of practice. But the raw numbers don't tell the whole story. Taking 100 actions in 10 seconds is much easier when a third of those actions come from holding down a key for a few hundred milliseconds than when they each require a press of a different key or a precise mouse click. And this is without all the extraneous actions that humans often take (as we saw with MaNa).\nBecause it seems to be the case that peak human APM happens outside of combat, while AlphaStar's wins happened during combat APM peaks, we need to do a more detailed analysis to determine the highest APM a human player can achieve during combat. To try to answer this question, I looked at approximately ten APM peaks for each of the 5 games between AlphaStar and MaNa, as well as for each of another 15 replays between professional Starcraft II players. The peaks were chosen so that roughly half were the largest peak at any time during the match and the rest were strictly during combat. My methodology for this is given in the appendix. Here are the results for just the human vs human matches:\nHistogram of 5-second APM peaks from analyzed matches between human professional players in a tournament setting. The blue bars are peaks achieved outside of combat, while the red bars are those achieved during combat.\nProvisionally, it looks like pro players frequently hit approximately 550 to 600 APM outside of combat before the distribution starts to fall off, and they peak at around 200-350 during combat, with a long right tail. As I was doing this, however, I found that all of the highest APM peaks shared one feature that the lower peaks did not: it was difficult to tell when a player's actions were primarily combat-oriented commands, and when they were mixed in with bursts of commands for things like training units. In particular, I found that the combat situations with high APM tended to be similar to the Serral gif above, in that they involved spam clicking and actions related to the player's economy and production, which was probably driving up the numbers. I give more details in the appendix, but I don't think I can say with confidence that any players were achieving greater than 400-450 APM in combat, in the absence of spurious actions or macromanagement commands.
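To make the binning arithmetic concrete, here is a minimal sketch, in Python, of how 5-second APM figures and peaks could be computed from a list of timestamped actions. This is only an illustration of the calculation described above, not the tool used for this analysis (I used Scelight and judged combat by eye, as described in Appendix II), and the action format and "combat" labels are assumptions made up for the example.

def apm_by_bin(action_times, bin_seconds=5.0):
    """Average APM for each consecutive bin of bin_seconds seconds."""
    if not action_times:
        return []
    n_bins = int(max(action_times) // bin_seconds) + 1
    counts = [0] * n_bins
    for t in action_times:
        counts[int(t // bin_seconds)] += 1
    # A count over bin_seconds converts to a per-minute rate:
    # e.g. 50 actions in a 5-second bin -> 50 * (60 / 5) = 600 APM.
    return [c * (60.0 / bin_seconds) for c in counts]

def peak_apm(action_times, bin_seconds=5.0):
    """Highest binned APM anywhere in the game."""
    series = apm_by_bin(action_times, bin_seconds)
    return max(series) if series else 0.0

def peak_combat_apm(actions, bin_seconds=5.0, combat_threshold=0.5):
    """Highest binned APM among bins whose actions are mostly combat commands.
    `actions` is a list of (time_in_seconds, label) pairs; the labels stand in
    for the by-eye judgment used in the analysis above."""
    times = [t for t, _ in actions]
    series = apm_by_bin(times, bin_seconds)
    best = 0.0
    for i, apm in enumerate(series):
        labels = [lab for t, lab in actions
                  if i * bin_seconds <= t < (i + 1) * bin_seconds]
        if labels and sum(lab == "combat" for lab in labels) / len(labels) >= combat_threshold:
            best = max(best, apm)
    return best

Even with a helper like this, the numbers reported above depend entirely on how individual actions get labeled, which is where most of the ambiguity in this comparison comes from.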
\nThe more pertinent question might be what the lowest APM is that a player can have while still succeeding at the highest level. Since we know that humans can succeed without exceeding this APM, it is not an unreasonable limitation to put on AlphaStar. The lowest peak APM in combat I saw for a winning player in my analysis was 215, though it could be that I missed a higher peak during combat in that same match.\nHere is a histogram of AlphaStar's combat APM:\n\nThe smallest 5-second APM that AlphaStar needed to win a match against MaNa was just shy of 500. I found 14 cases in which the agent was able to average over 400 APM for 5 seconds in combat, and six times when the agent averaged over 500 APM for more than 5 seconds. This was done with perfect accuracy and no spam clicking or control group switching, so I think we can safely say that its play was faster than is required for a human to win a match in a professional tournament. Given that I found no cases where a human was clearly achieving this speed in combat, I think I can comfortably say that AlphaStar had a large enough speed advantage over MaNa to have substantially influenced the match.\nIt's easy to get lost in numbers, so it's good to take a step back and remind ourselves of the insane level of skill required to play Starcraft II professionally. The top professional players already play with what looks to me like superhuman speed, precision, and multitasking, so it is not surprising that the agent that can beat them is so fast. Some observers, especially those in the Starcraft community, have indicated that they will not be impressed until AI can beat humans at Starcraft II at sub-human APM. There is some extent to which speed can make up for poor strategy and good strategy can make up for a lack of speed, but it is not clear what the limits are on this trade-off. It may be very difficult to make an agent that can beat professional Starcraft II players while restricting its speed to an undisputedly human or sub-human level, or it may simply be a matter of a couple more weeks of training time.\nThe Camera\nAs I explained earlier, the agent interacts with the game differently than humans do. As with other games, humans look at a screen to know what's happening, use a mouse and keyboard to give commands, and need to move the game's 'camera' to see different parts of the play area. With the exception of the final exhibition match against MaNa, AlphaStar was able to see the entire map at once (though much of it is concealed by the fog of war most of the time), and had no need to select units to get information about them. It's unclear just how much of an advantage this was for the agent, but it seems likely that it was significant, if nothing else because it did not suffer the APM overhead of looking around and getting information from the game. Furthermore, seeing the entire map makes it easier to simultaneously control units across the map, which AlphaStar used to great effect in the first five matches against MaNa.
A further disadvantage was that this version of the agent only had half as much training time as the agents that beat MaNa.\nHere are three factors that may have contributed to AlphaStar's loss:\n\nThe agent was unable to deal effectively with the added complication of controlling the camera\nThe agent had insufficient training time\nThe agent had easily exploitable flaws the whole time, and MaNa figured out how to use them in match 6\n\nFor the third factor, I mean that the agent had sufficiently many exploitable flaws that were obvious enough to human players that any skilled human player could find at least one during a small number of games. The best humans do not have a sufficient number of such flaws to influence the game with any regularity. Matches in professional tournaments are not won by causing the other player to make the same obvious-to-humans mistake over and over again.\nI suspect that AlphaStar's loss in January is mainly due to the first two factors. In support of 1, AlphaStar seemed less able to simultaneously deal with things happening on opposite sides of the map, and less willing to split its forces, which could plausibly be related to an inability to simultaneously look at distant parts of the map. It's not just that the agent had to move the camera to give commands on other parts of the map. The agent had to remember what was going on globally, rather than being able to see it all the time. In support of 2, the agent that MaNa defeated had only as much training time as the agents that went up against TLO, and those agents lost to the agents that defeated MaNa 94% of the time during training [16].\nStill, it is hard to dismiss the third factor. One way in which an agent can improve through training is to encounter tactics that it has not seen before, so that it can react well if it sees them in the future. But the tactics that it encounters are only those that another agent employed, and without seeing the agents during training, it is hard to know whether any of them learned the harassment tactics that MaNa used in game 6, and therefore whether the agents that defeated MaNa were susceptible to the exploit that he used to defeat the last agent. So far, the evidence from DeepMind's more recent experiment pitting AlphaStar against the broader Starcraft community (which I will go into in the next section) suggests that the agents do not tend to learn defenses to these types of exploits, though it is hard to say if this is a general problem or just one associated with low training time or particular kinds of training data.\nAlphaStar on the Ladder\nFor the past couple of months, as of this writing, skilled European players have had the opportunity to play against AlphaStar as part of the usual system for matching players with those of similar skill. For the version of AlphaStar that plays on the European ladder, DeepMind claims to have made changes that address the camera and action speed complaints from the January matches. The agent needs to control the camera, and they say they have placed restrictions on AlphaStar's performance in consultation with pro players, particularly the maximum actions per minute and per second that the agent can take. I will be curious to see what numbers they arrive at for this. If this was done in an iterative way, such that pro players were allowed to see the agent play or to play against it, I expect they were able to arrive at a good constraint.
Given the difficulty that I had with arriving at a good value for a combat APM restriction, I'm less confident that they would get a good value just by thinking about it, though if they were sufficiently conservative, they probably did alright.\nAnother reason to expect a realistic APM constraint is that DeepMind wanted to run the European ladder matches as a blind study, in which the human players did not know they were playing against an AI. If the agent were to play with the superhuman speed and accuracy that AlphaStar did in January, it would likely give it away and spoil the experiment.\nAlthough it is unclear whether any players were able to tell they were playing against an AI during their match, it does seem that some were able to figure it out after the fact. One example comes from Lowko, a Dutch player who streams and does commentary for games. During a stream of a ladder match in Starcraft II, he noticed the player was doing some strange things near the end of the match, like lifting their buildings [17] when the match had clearly been lost, and air-dropping workers into Lowko's base to kill units. Lowko did eventually win the match. Afterward, he was able to view the replay from the match and see that the player he had defeated did some very strange things throughout the entire match, the most notable of which was how the player controlled their units. The player used no control groups at all, which is, as far as I know, not something anybody does at high-level play [18]. There were many other quirks, which he describes in his entertaining video, which I highly recommend to anyone who is interested.\nOther players have released replay files from matches against players they believed were AlphaStar, and they show the same lack of control groups. This is great, because it means we can get a sense of what the new APM restriction is on AlphaStar. There are now dozens of replay files from players who claim to have played against the AI. Although I have not done the level of analysis that I did with the matches in the APM section, it seems clear that they have drastically lowered the APM cap, with the matches I have looked at topping out at 380 APM peaks, which did not even occur in combat.\nIt seems to be the case that DeepMind has brought the agent's interaction with the game more in line with human capability, but we will probably need to wait until they release the details of the experiment before we can say for sure.\nAnother notable aspect of the matches that people are sharing is that their opponent will do strange things that human players, especially skilled human players, almost never do, most of which are detrimental to their success. For example, they will construct buildings that block them into their own base, crowd their units into a dangerous bottleneck to get to a cleverly placed enemy unit, and fail to change tactics when their current strategy is not working. These are all the types of flaws that are well known to exist in game-playing AI going back to much older games, including the original Starcraft, and they are similar to the flaw that MaNa exploited to defeat AlphaStar in game 6.\nAll in all, the agents that humans are uncovering seem to be capable, but not superhuman. Early on, the accounts that were identified as likely candidates for being AlphaStar were winning about 90-95% of their matches on the ladder, achieving Grandmaster rank, which is reserved for only the top 200 players in each region.
I have not been able to conduct a careful investigation to determine the win rate or Elo rating for the agents. However, based on the videos and replays that have been released, plausible claims from Reddit users, and my own recollection of the records for the players that seemed likely to be AlphaStar [19], a good estimate is that they were winning a majority of matches among Grandmaster players, but did not achieve an Elo rating that would suggest a favorable outcome in a rematch vs TLO [20].\nAs with AlphaStar's January loss, it is hard to say if this is the result of insufficient training time, additional restrictions on camera control and APM, or if the flaws are a deeper, harder-to-solve problem for AI. It may seem unreasonable to chalk this up to insufficient training time given that it has been several months since the matches in December and January, but it helps to keep in mind that we do not yet know what DeepMind's research goals are. It is not hard to imagine that their goals are based around sample efficiency or some other aspect of AI research that requires such restrictions. As with the APM restrictions, we should learn more when we get results published by DeepMind.\nDiscussion\nI have been focusing on what many onlookers have been calling a lack of "fairness" of the matches, which seems to come from a sentiment that the AI did not defeat the best humans on human terms. I think this is a reasonable concern; if we're trying to understand how AI is progressing, one of our main interests is when it will catch up with us, so we want to compare its performance to ours. Since we already know that computers can do the things they're able to do faster than we can do them, we should be less interested in artificial intelligence that can do things better than we can by being faster or by keeping track of more things at once. We are more interested in AI that can make better decisions than we can.\nGoing into this project, I thought that the disagreements surrounding the fairness of the matches were due to a lack of careful analysis, and I expected it to be very easy to evaluate AlphaStar's performance in comparison to human-level performance. After all, the replay files are just lists of commands, and when we run them through the game engine, we can easily see the outcome of those commands. But it turned out to be harder than I had expected. Separating careful, necessary combat actions (like targeting a particular enemy unit) from important but less precise actions (like training new units) from extraneous, unnecessary actions (like spam clicks) turned out to be surprisingly difficult. I expect if I were to spend a few months learning a lot more about how the game is played and writing my own software tools to analyze replay files, I could get closer to a definitive answer, but I still expect there would be some uncertainty surrounding what actually constitutes human performance.\nIt is unclear to me where this leaves us. AlphaStar is an impressive achievement, even with the speed and camera advantages. I am excited to see the results of DeepMind's latest experiment on the ladder, and I expect they will have satisfied most critics, at least in terms of the agent's speed. But I do not expect it to become any easier to compare humans to AI in the future.
If this sort of analysis is hard in the context of a game where we have access to all the inputs and outputs, we should expect it to be even harder once we're looking at tasks for which success is less clear-cut or for which the AI's output is harder to compare objectively to humans. This includes some of the major targets for AI research in the near future. Driving a car does not have a simple win-loss condition, and novel writing does not have clear metrics for what good performance looks like.\nThe answer may be that, if we want to learn things from future successes or failures of AI, we need to worry less about making direct comparisons between human performance and AI performance, and keep watching the broad strokes of what's going on. From AlphaStar, we've learned that one of two things is true: either AI can do long-term planning, solve basic game theory problems, balance different priorities against each other, and develop tactics that work, or there are tasks which seem at first to require all of these things but do not, at least not at a high level.\nBy Rick Korzekwa\nThis post was edited to correct errors and add the 2018 Blizzcon Panel to the events timeline on September 18, 2019.\nAcknowledgements\nThanks to Gillian Ring for lending her expertise in e-sports and for helping me understand some of the nuances of the game. Thanks to users of the Starcraft subreddit for helping me track down some of the fastest players in the world. And thanks to Blizzard and DeepMind for making the AlphaStar match replays available to the public. All mistakes are my own, and should be pointed out to me via email at rick@aiimpacts.org.\n\nAppendix I: Survey Results in Detail\nI received a total of 22 submissions, which wasn't bad, given the length of the survey. Two respondents failed to correctly answer the question designed to filter out people who are goofing off or not paying attention, leaving 20 useful responses. Five people who filled out the survey were affiliated in some way with AI Impacts. Here are the responses for respondents' self-reported level of expertise in Starcraft II and artificial intelligence:\n\n\nSurvey respondents' mean expertise rating was 4.6/10 for Starcraft II and 4.9/10 for AI.\nQuestions About AlphaStar's Performance\nHow fair were the AlphaStar matches?\nFor this one, it seems easiest to show a screenshot from the survey:\n\nThe results from this indicated that people thought the match was unfair and favored AlphaStar:\n\nI asked respondents to rate AlphaStar's overall performance, as well as its "micro" and "macro". The term "micro" is used to refer to a player's ability to control units in combat, and is greatly improved by speed. There seems to have been some misunderstanding about how to use the word "macro". Based on comments from respondents and looking around to see how people use the term on the Internet, it seems that there are at least three somewhat distinct ways that people use the phrase, and I did not clarify which I meant, so I've discarded the results from that question.
For the next two questions, the scale ranges from 0 to 10, with 0 labeled "AlphaStar is much worse" and 10 labeled "AlphaStar is much better".\nOverall, how do you think AlphaStar's performance compares to the best humans?\n\nI found these results interesting: even though AlphaStar was able to consistently defeat professional players, some survey respondents felt the outcome alone was not enough to rate it as at least as good as the best humans.\nHow do you think AlphaStar's micro compares to the best humans?\n\nSurvey respondents unanimously reported that they thought AlphaStar's combat micromanagement was an important factor in the outcome of the matches.\nForecasting Questions\nRespondents were split on whether they expected to see AlphaStar's level of Starcraft II performance by this time:\nDid you expect to see AlphaStar's level of performance in a Starcraft II agent:\nBefore now: 1\nAround this time: 8\nLater than now: 7\nI had no expectation either way: 4\nRespondents who indicated that they expected it sooner or later than now were also asked by how many years their expectation differed from reality. If we assign negative numbers to "before now", positive numbers to "later than now", zero to "around this time", ignore those with no expectation, and weight responses by level of expertise, we find respondents' mean expectation was just 9 months later than the announcement, and the median respondent expected to see it around this time. Here is a histogram of these results, without expertise weighting:\n\nThese results do not generally indicate too much surprise about seeing a Starcraft II agent of AlphaStar's ability now.\nHow many years do you think it will be until we see (in public) an agent which only gets screen pixels as input, has human-level APM and reaction speed, and is very clearly better than the best humans?\nThis question was intended to outline an AI that would satisfy almost anybody that Starcraft II is a solved game, such that AI is clearly better than humans, and not for "boring" reasons like superior speed. Most survey respondents expected to see such an agent in two-ish years, with a few expecting it to take a little longer, and two expecting it to take much longer. Respondents had a median prediction of two years and an expertise-weighted mean prediction of a little less than four years.\n\nQuestions About Relevant Considerations\nHow important do you think the following were in determining the outcome of the AlphaStar vs MaNa matches?\nI listed 12 possible considerations to be rated in importance, from 1 to 5, with 1 being "not at all important" and 5 being "extremely important". The expertise-weighted mean for each question is given below:\n\nRespondents rated AlphaStar's peak APM and camera control as the two most important factors in determining the outcome of the matches, and the particular choice of map and professional player as the two least important considerations.\nWhen thinking about AlphaStar as a benchmark for AI progress in general, how important do you think the following considerations are?\nAgain, respondents rated a series of considerations by importance, this time for thinking about AlphaStar in a broader context. This included all of the considerations from the previous question, plus several others.
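Since several of the figures in this appendix are expertise-weighted means, here is a minimal sketch of that calculation. The responses and weights shown are made up for illustration, and the choice of weight (the Starcraft II rating, the AI rating, or some combination of the two) is an assumption; as noted below, the choice of weighting made almost no difference to the results.

def expertise_weighted_mean(responses, weights):
    """Weighted mean of survey responses; None responses (e.g. 'no expectation
    either way') are dropped along with their weights."""
    kept = [(r, w) for r, w in zip(responses, weights) if r is not None]
    total = sum(w for _, w in kept)
    if total == 0:
        return float("nan")
    return sum(r * w for r, w in kept) / total

# Hypothetical answers to the timing question, in years: negative means the
# respondent expected this level of performance earlier, positive means later,
# zero means "around this time", None means no expectation either way.
timing_offsets = [-1.0, 0.0, 0.0, 0.5, 2.0, None, 1.0, 0.0]
# Hypothetical 0-10 self-reported expertise ratings used as weights.
expertise = [6, 3, 7, 5, 4, 8, 5, 6]

print(round(expertise_weighted_mean(timing_offsets, expertise), 2))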
Here are the results, again with expertise weighted averaging.\n\nFor these two sets of questions, there was almost no difference between the mean scores if I used only Starcraft II expertise weighting, only AI expertise weighting, or ignored expertise weighting entirely.\nFurther questions\nThe rest of the questions were free-form to give respondents a chance to tell me anything else that they thought was important. Although these answers were thoughtful and shaped my thinking about AlphaStar, especially early on in the project, I won’t summarize them here.\nAppendix II: APM Measurement Methodology\nI created a list of professional players by asking users of the Starcraft subreddit which players they thought were exceptionally fast. Replays including these players were found by searching Spawning Tool for replays from tournament matches which included at least one player from the list of fast players. This resulted in 51 replay files.\nSeveral of the replay files were too old, so that they could no longer be opened by the current version of Starcraft II, and I ignored them. Others were ignored because they included players, race matchups, or maps that were already represented in other matches. Some were ignored because we did not get to them before we had collected what seemed to be enough data. This left 15 replays that made it into the analysis.\nI opened each file using Scelight, and the time and APM values were recorded for the top three peaks on the graph of that player’s APM, using 5-second bins. Next, I opened the replay file in Starcraft II, and for each peak recorded earlier, we wrote down whether that player was primarily engaging in combat at the time or not. Additionally, I recorded the time and APM for each player for 2-4 5-second durations of the game in which the players were primarily engaged in combat.\nAll of the APM values which came from combat and from outside of combat were aggregated into the histogram shown in the ‘Speed Controversy’ section of this article.\nThere are several potential sources of bias or error in this:\n\nOur method for choosing players and matches may be biased. We were seeking examples of humans playing with speed and precision, but it’s possible that by relying on input from a relatively small number of Reddit users (as well as some personal friends), we missed something.\nThis measurement relies entirely on my subjective evaluation of whether the players are mostly engaged in combat. I am not an expert on the game, and it seems likely that I missed some things, at least some of the time.\nThe tool I used for this seems to mismatch events in the game by a few seconds. Since I was using 5-second bins, and sometimes a player’s APM will change greatly between 5-second bins, it’s possible that this introduced a significant error.\nThe choice of 5 second bins (as opposed to something shorter or longer) is somewhat arbitrary, but it is what some people in the Starcraft community were using, so I’m using it here.\nSome actions are excluded from the analysis automatically. 
These include camera updates, and this is probably a good thing, but I did not look carefully at the source code for the tool, so it may be doing something I don't know about.\n\nFootnotes", "url": "https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/", "title": "The unexpected difficulty of comparing AlphaStar to humans", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-09-18T02:11:55+00:00", "paged_url": "https://aiimpacts.org/feed?paged=11", "authors": ["richardkorzekwa"], "id": "ae79fbc85e35f6e8ed55bedd6c57624c", "summary": []} {"text": "Conversation with Paul Christiano\n\nAI Impacts talked to AI safety researcher Paul Christiano about his views on AI risk. With his permission, we have transcribed this interview.\nParticipants\nPaul Christiano — OpenAI safety team\nAsya Bergal – AI Impacts\nRonny Fernandez – AI Impacts\nRobert Long – AI Impacts\nSummary\nWe spoke with Paul Christiano on August 13, 2019. Here is a brief summary of that conversation:\nAI safety is worth working on because AI poses a large risk and AI safety is neglected and tractable.\nChristiano is more optimistic about the likely social consequences of advanced AI than some others in AI safety, in particular researchers at the Machine Intelligence Research Institute (MIRI), for the following reasons:\nThe prior on any given problem reducing the expected value of the future by 10% should be low.\nThere are several 'saving throws'–ways in which, even if one thing turns out badly, something else can turn out well, such that AI is not catastrophic.\nMany algorithmic problems are either solvable within 100 years, or provably impossible; this inclines Christiano to think that AI safety problems are reasonably likely to be easy.\nMIRI thinks success is guaranteeing that unaligned intelligences are never created, whereas Christiano just wants to leave the next generation of intelligences in at least as good a place as humans were when building them.\n'Prosaic AI' that looks like current AI systems will be less hard to align than MIRI thinks: Christiano thinks there's at least a one-in-three chance that we'll be able to solve AI safety on paper in advance.\nA common view within ML is that we'll successfully solve problems as they come up.\nChristiano has relatively less confidence in several inside view arguments for high levels of risk:\nBuilding safe AI requires hitting a small target in the space of programs, but building any AI also requires hitting a small target.\nBecause Christiano thinks that the state of evidence is less clear-cut than MIRI does, Christiano also has a higher probability that people will become more worried in the future. Just because we haven't solved many problems in AI safety yet doesn't mean they're intractably hard– many technical problems feel this way and then get solved in 10 years of effort.\nEvolution is often used as an analogy to argue that general intelligence (humans with their own goals) becomes dangerously unaligned with the goals of the outer optimizer (evolution selecting for reproductive fitness). But this analogy doesn't make Christiano feel so pessimistic, e.g. 
he thinks that if we tried, we could breed animals that are somewhat smarter than humans and are also friendly and docile.\nChristiano is optimistic about verification, interpretability, and adversarial training for inner alignment, whereas MIRI is pessimistic.\nMIRI thinks the outer alignment approaches Christiano proposes are just obscuring the core difficulties of alignment, while Christiano is not yet convinced there is a deep core difficulty.\nChristiano thinks there are several things that could change his mind and optimism levels, including:\nLearning about institutions and observing how they solve problems analogous to AI safety.\nSeeing whether AIs become deceptive and how they respond to simple oversight.\nSeeing how much progress we make on AI alignment over the coming years.\nChristiano is relatively optimistic about his iterated amplification approach:\nChristiano cares more about making aligned AIs that are competitive with unaligned AIs, whereas MIRI is more willing to settle for an AI with very narrow capabilities.\nIterated amplification is largely based on learning-based AI systems, though it may work in other cases.\nEven if iterated amplification isn't the answer to AI safety, it's likely to have subproblems in common with problems that are important in the future.\nThere are still many disagreements between Christiano and the Machine Intelligence Research Institute (MIRI) that are messy and haven't been made precise.\nThis transcript has been lightly edited for concision and clarity.\nTranscript\nAsya Bergal: Okay. We are recording. I'm going to ask you a bunch of questions related to something like AI optimism.\nI guess the proposition that we're looking at is something like 'is it valuable for people to be spending significant effort doing work that purports to reduce the risk from advanced artificial intelligence'? The first question would be to give a short-ish version of the reasoning around that.\nPaul Christiano: Around why it's overall valuable?\nAsya Bergal: Yeah. Or the extent to which you think it's valuable.\nPaul Christiano: I don't know, this seems complicated. I'm acting from some longtermist perspective, I'm like, what can make the world irreversibly worse? There aren't that many things, we go extinct. It's hard to go extinct, doesn't seem that likely.\nRobert Long: We keep forgetting to say this, but we are focusing less on ethical considerations that might affect that. We'll grant…yeah, with all that in the background….\nPaul Christiano: Granting long-termism, but then it seems like it depends a lot on what's the probability? What fraction of our expected future do we lose by virtue of messing up alignment * what's the elasticity of that to effort / how much effort?\nRobert Long: That's the stuff we're curious to see what people think about.\nAsya Bergal: I also just read your 80K interview, which I think probably covered like a lot of the reasoning about this.\nPaul Christiano: They probably did. I don't remember exactly what's in there, but it was a lot of words.\nI don't know. I'm like, it's a lot of doom probability. Like maybe I think AI alignment per se is like 10% doominess. That's a lot. Then it seems like if we understood everything in advance really well, or just having a bunch of people working on now understanding what's up, could easily reduce that by a big chunk.\nRonny Fernandez: Sorry, what do you mean by 10% doominess?\nPaul Christiano: I don't know, the future is 10% worse than it would otherwise be in expectation by virtue of our failure to align AI.
I made up 10%, it’s kind of a random number. I don’t know, it’s less than 50%. It’s more than 10% conditioned on AI soon I think.\nRonny Fernandez: And that’s change in expected value.\nPaul Christiano: Yeah. Anyway, so 10% is a lot. Then I’m like, maybe if we sorted all our shit out and had a bunch of people who knew what was up, and had a good theoretical picture of what was up, and had more info available about whether it was a real problem. Maybe really nailing all that could cut that risk from 10% to 5% and maybe like, you know, there aren’t that many people who work on it, it seems like a marginal person can easily do a thousandth of that 5% change. Now you’re looking at one in 20,000 or something, which is a good deal.\nAsya Bergal: I think my impression is that that 10% is lower than some large set of people. I don’t know if other people agree with that.\nPaul Christiano: Certainly, 10% is lower than lots of people who care about AI risk. I mean it’s worth saying, that I have this slightly narrow conception of what is the alignment problem. I’m not including all AI risk in the 10%. I’m not including in some sense most of the things people normally worry about and just including the like ‘we tried to build an AI that was doing what we want but then it wasn’t even trying to do what we want’. I think it’s lower now or even after that caveat, than pessimistic people. It’s going to be lower than all the MIRI folks, it’s going to be higher than almost everyone in the world at large, especially after specializing in this problem, which is a problem almost no one cares about, which is precisely how a thousand full time people for 20 years can reduce the whole risk by half or something.\nAsya Bergal: I’m curious for your statement as to why you think your number is slightly lower than other people.\nPaul Christiano: Yeah, I don’t know if I have a particularly crisp answer. Seems like it’s a more reactive thing of like, what are the arguments that it’s very doomy? A priori you might’ve been like, well, if you’re going to build some AI, you’re probably going to build the AI so it’s trying to do what you want it to do. Probably that’s that. Plus, most things can’t destroy the expected value of the future by 10%. You just can’t have that many things, otherwise there’s not going to be any value left in the end. In particular, if you had 100 such things, then you’d be down to like 1/1000th of your values. 1/10 hundred thousandth? I don’t know, I’m not good at arithmetic.\nAnyway, that’s a priori, just aren’t that many things are that bad and it seems like people would try and make AI that’s trying to do what they want. Then you’re like, okay, we get to be pessimistic because of some other argument about like, well, we don’t currently know how to build an AI which will do what we want. We’re like, there’s some extrapolation of current techniques on which we’re concerned that we wouldn’t be able to. Or maybe some more conceptual or intuitive argument about why AI is a scary kind thing, and AIs tend to want to do random shit.\nThen like, I don’t know, now we get into, how strong is that argument for doominess? Then a major thing that drives it is I am like, reasonable chance there is no problem in fact. Reasonable chance, if there is a problem we can cope with it just by trying. Reasonable chance, even if it will be hard to cope with, we can sort shit out well enough on paper that we really nail it and understand how to resolve it. 
Reasonable chance, if we don’t solve it the people will just not build AIs that destroy everything they value.\nIt’s lots of saving throws, you know? And you multiply the saving throws together and things look better. And they interact better than that because– well, in one way worse because it’s correlated: If you’re incompetent, you’re more likely to fail to solve the problem and more likely to fail to coordinate not to destroy the world. In some other sense, it’s better than interacting multiplicatively because weakness in one area compensates for strength in the other. I think there are a bunch of saving throws that could independently make things good, but then in reality you have to have a little bit here and a little bit here and a little bit here, if that makes sense. We have some reasonable understanding on paper that makes the problem easier. The problem wasn’t that bad. We wing it reasonably well and we do a bunch of work and in fact people are just like, ‘Okay, we’re not going to destroy the world given the choice.’ I guess I have this somewhat distinctive last saving throw where I’m like, ‘Even if you have unaligned AI, it’s probably not that bad.’\nThat doesn’t do much of the work, but you know you add a bunch of shit like that together.\nAsya Bergal: That’s a lot of probability mass on a lot of different things. I do feel like my impression is that, on the first step of whether by default things are likely to be okay or things are likely to be good, people make arguments of the form, ‘You have a thing with a goal and it’s so hard to specify. By default, you should assume that the space of possible goals to specify is big, and the one right goal is hard to specify, hard to find.’ Obviously, this is modeling the thing as an agent, which is already an assumption.\nPaul Christiano: Yeah. I mean it’s hard to run or have much confidence in arguments of that form. I think it’s possible to run tight versions of that argument that are suggestive. It’s hard to have much confidence in part because you’re like, look, the space of all programs is very broad, and the space that do your taxes is quite small, and we in fact are doing a lot of selecting from the vast space of programs to find one that does your taxes– so like, you’ve already done a lot of that.\nAnd then you have to be getting into more detailed arguments about exactly how hard is it to select. I think there’s two kinds of arguments you can make that are different, or which I separate. One is the inner alignment treacherous turney argument, where like, we can’t tell the difference between AIs that are doing the right and wrong thing, even if you know what’s right because blah blah blah. The other is well, you don’t have this test for ‘was it right’ and so you can’t be selecting for ‘does the right thing’.\nThis is a place where the concern is disjunctive, you have like two different things, they’re both sitting in your alignment problem. They can again interact badly. But like, I don’t know, I don’t think you’re going to get to high probabilities from this. I think I would kind of be at like, well I don’t know. Maybe I think it’s more likely than not that there’s a real problem but not like 90%, you know? Like maybe I’m like two to one that there exists a non-trivial problem or something like that. All of the numbers I’m going to give are very made up though. 
If you asked me a second time you’ll get all different numbers.\nAsya Bergal: That’s good to know.\nPaul Christiano: Sometimes I anchor on past things I’ve said though, unfortunately.\nAsya Bergal: Okay. Maybe I should give you some fake past Paul numbers.\nPaul Christiano: You could be like, ‘In that interview, you said that it was 85%’. I’d be like, ‘I think it’s really probably 82%’.\nAsya Bergal: I guess a related question is, is there plausible concrete evidence that you think could be gotten that would update you in one direction or the other significantly?\nPaul Christiano: Yeah. I mean certainly, evidence will roll in once we have more powerful AI systems.\nOne can learn… I don’t know very much about any of the relevant institutions, I may know a little bit. So you can imagine easily learning a bunch about them by observing how well they solve analogous problems or learning about their structure, or just learning better about the views of people. That’s the second category.\nWe’re going to learn a bunch of shit as we continue thinking about this problem on paper to see like, does it look like we’re going to solve it or not? That kind of thing. It seems like there’s lots of sorts of evidence on lots of fronts, my views are shifting all over the place. That said, the inconsistency between one day and the next is relatively large compared to the actual changes in views from one day to the next.\nRobert Long: Could you say a little bit more about evidence from once more advanced AI starts coming in? Like what sort things you’re looking for that would change your mind on things?\nPaul Christiano: Well you get to see things like, on inner alignment you get to see to what extent do you have the kind of crazy shit that people are concerned about? The first time you observe some crazy shit where your AI is like, ‘I’m going to be nice in order to assure that you think I’m nice so I can stab you in the back later.’ You’re like, ‘Well, I guess that really does happen despite modest effort to prevent it.’ That’s a thing you get. You get to learn in general about how models generalize, like to what extent they tend to do– this is sort of similar to what I just said, but maybe a little bit broader– to what extent are they doing crazy-ish stuff as they generalize?\nYou get to learn about how reasonable simple oversight is and to what extent do ML systems acquire knowledge that simple overseers don’t have that then get exploited as they optimize in order to produce outcomes that are actually bad. I don’t have a really concise description, but sort of like, to the extent that all these arguments depend on some empirical claims about AI, you get to see those claims tested increasingly.\nRonny Fernandez: So the impression I get from talking to other people who know you, and from reading some of your blog posts, but mostly from others, is that you’re somewhat more optimistic than most people that work in AI alignment. It seems like some people who work on AI alignment think something like, ‘We’ve got to solve some really big problems that we don’t understand at all or there are a bunch of unknown unknowns that we need to figure out.’ Maybe that’s because they have a broader conception of what solving AI alignment is like than you do?\nPaul Christiano: That seems like it’s likely to be part of it. It does seem like I’m more optimistic than people in general, than people who work in alignment in general. 
I don’t really know… I don’t understand others’ views that well and I don’t know if they’re that– like, my views aren’t that internally coherent. My suspicion is others’ views are even less internally coherent. Yeah, a lot of it is going to be done by having a narrower conception of the problem.\nThen a lot of it is going to be done by me just being… in terms of do we need a lot of work to be done, a lot of it is going to be me being like, I don’t know man, maybe. I don’t really understand when people get off the like high probability of like, yeah. I don’t see the arguments that are like, definitely there’s a lot of crazy stuff to go down. It seems like we really just don’t know. I do also think problems tend to be easier. I have more of that prior, especially for problems that make sense on paper. I think they tend to either be kind of easy, or else– if they’re possible, they tend to be kind of easy. There aren’t that many really hard theorems.\nRobert Long: Can you say a little bit more of what you mean by that? That’s not a very good follow-up question, I don’t really know what it would take for me to understand what you mean by that better. \nPaul Christiano: Like most of the time, if I’m like, ‘here’s an algorithms problem’, you can like– if you just generate some random algorithms problems, a lot of them are going to be impossible. Then amongst the ones that are possible, a lot of them are going to be soluble in a year of effort and amongst the rest, a lot of them are going to be soluble in 10 or a hundred years of effort. It’s just kind of rare that you find a problem that’s soluble– by soluble, I don’t just mean soluble by human civilization, I mean like, they are not provably impossible– that takes a huge amount of effort.\nIt normally… it’s less likely to happen the cleaner the problem is. There just aren’t many very clean algorithmic problems where our society worked on it for 10 years and then we’re like, ‘Oh geez, this still seems really hard.’ Examples are kind of like… factoring is an example of a problem we’ve worked a really long time on. It kind of has the shape, and this is the tendency on these sorts of problems, where there’s just a whole bunch of solutions and we hack away and we’re a bit better and a bit better and a bit better. It’s a very messy landscape, rather than jumping from having no solution to having a solution. It’s even rarer to have things where going from no solution to some solution is really possible but incredibly hard. There were some examples.\nRobert Long: And you think that the problems we face are sufficiently similar?\nPaul Christiano: I mean, I think this is going more into the like, ‘I don’t know man’ but my what do I think when I say I don’t know man isn’t like, ‘Therefore, there’s an 80% chance that it’s going to be an incredibly difficult problem’ because that’s not what my prior is like. I’m like, reasonable chance it’s not that hard. Some chance it’s really hard. Probably more chance that– if it’s really hard, I think it’s more likely to be because all the clean statements of the problem are impossible. I think as statements get messier it becomes more plausible that it just takes a lot of effort. The more messy a thing is, the less likely it is to be impossible sometimes, but also the more likely it’s just a bunch of stuff you have to do.\nRonny Fernandez: It seems like one disagreement that you have with MIRI folks is that you think prosaic AGI will be easier to align than they do. 
Does that perception seem right to you?\nPaul Christiano: I think so. I think they’re probably just like, ‘that seems probably impossible’. Was related to the previous point.\nRonny Fernandez: If you had found out that prosaic AGI is nearly impossible to align or is impossible to align, how much would that change your-\nPaul Christiano: It depends exactly what you found out, exactly how you found it out, et cetera. One thing you could be told is that there’s no perfectly scalable mechanism where you can throw in your arbitrarily sophisticated AI and turn the crank and get out an arbitrarily sophisticated aligned AI. That’s a possible outcome. That’s not necessarily that damning because now you’re like okay, fine, you can almost do it basically all the time and whatever.\nThat’s a big class of worlds and that would definitely be a thing I would be interested in understanding– how large is that gap actually, if the nice problem was totally impossible? If at the other extreme you just told me, ‘Actually, nothing like this is at all going to work, and it’s definitely going to kill everyone if you build an AI using anything like an extrapolation of existing techniques’, then I’m like, ‘Sounds pretty bad.’ I’m still not as pessimistic as MIRI people.\nI’m like, maybe people just won’t destroy the world, you know, it’s hard to say. It’s hard to say what they’ll do. It also depends on the nature of how you came to know this thing. If you came to know it in a way that’s convincing to a reasonably broad group of people, that’s better than if you came to know it and your epistemic state was similar to– I think MIRI people feel more like, it’s already known to be hard, and therefore you can tell if you can’t convince people it’s hard. Whereas I’m like, I’m not yet convinced it’s hard, so I’m not so surprised that you can’t convince people it’s hard.\nThen there’s more probability, if it was known to be hard, that we can convince people, and therefore I’m optimistic about outcomes conditioned on knowing it to be hard. I might become almost as pessimistic as MIRI if I thought that the problem was insolubly hard, just going to take forever or whatever, huge gaps aligning prosaic AI, and there would be no better evidence of that than currently exists. Like there’s no way to explain it better to people than MIRI currently can. If you take those two things, I’m maybe getting closer to MIRI’s levels of doom probability. I might still not be quite as doomy as them.\nRonny Fernandez: Why does the ability to explain it matter so much?\nPaul Christiano: Well, a big part of why you don’t expect people to build unaligned AI is they’re like, they don’t want to. The clearer it is and the stronger the case, the more people can potentially do something. In particular, you might get into a regime where you’re doing a bunch of shit by trial and error and trying to wing it. And if you have some really good argument that the winging it is not going to work, then that’s a very different state than if you’re like, ‘Well, winging it doesn’t seem that good. Maybe it’ll fail.’ It’s different to be like, ‘Oh no, here’s an argument. You just can’t… It’s just not going to work.’\nI don’t think we’ll really be in that state, but there’s like a whole spectrum from where we’re at now to that state and I expect to be further along it, if in fact we’re doomed. 
For example, if I personally would be like, ‘Well, I at least tried the thing that seemed obvious to me to try and now we know that doesn’t work.’ I sort of expect very directly from trying that to learn something about why that failed and what parts of the problem seem difficult.\nRonny Fernandez: Do you have a sense of why MIRI thinks aligning prosaic AI is so hard?\nPaul Christiano: We haven’t gotten a huge amount of traction on this when we’ve debated it. I think part of their position, especially on the winging it thing, is they’re like – Man, doing things right generally seems a lot harder than doing them. I guess probably building an AI will be harder in a way that’s good, for some arbitrary notion of good– a lot harder than just building an AI at all.\nThere’s a theme that comes up frequently trying to hash this out, and it’s not so much about a theoretical argument, it’s just like, look, the theoretical argument establishes that there’s something a little bit hard here. And once you have something a little bit hard and now you have some giant organization, people doing the random shit they’re going to do, and all that chaos, and like, getting things to work takes all these steps, and getting this harder thing to work is going to have some extra steps, and everyone’s going to be doing it. They’re more pessimistic based on those kinds of arguments.\nThat’s the thing that comes up a lot. I think probably most of the disagreement is still in the, you know, theoretically, how much– certainly we disagree about like, can this problem just be solved on paper in advance? Where I’m like, reasonable chance, you know? At least a third chance, they’ll just on paper be like, ‘We have nailed it.’ There’s really no tension, no additional engineering effort required. And they’re like, that’s like zero. I don’t know what they think it is. More than zero, but low.\nRonny Fernandez: Do you guys think you’re talking about the same problem exactly?\nPaul Christiano: I think there we are probably. At that step we are. Just like, is your AI trying to destroy everything? Yes. No. The main place there’s some bleed over–  the main thing that MIRI maybe considers in scope and I don’t is like, if you build an AI, it may someday have to build another AI. And what if the AI it builds wants to destroy everything? Is that our fault or is that the AI’s fault? And I’m more on like, that’s the AI’s fault. That’s not my job. MIRI’s maybe more like not distinguishing those super cleanly, but they would say that’s their job. The distinction is a little bit subtle in general, but-\nRonny Fernandez: I guess I’m not sure why you cashed out in terms of fault.\nPaul Christiano: I think for me it’s mostly like: there’s a problem we can hope to resolve. I think there’s two big things. One is like, suppose you don’t resolve that problem. How likely is it that someone else will solve it? Saying it’s someone else’s fault is in part just saying like, ‘Look, there’s this other person who had a reasonable opportunity to solve it and it was a lot smarter than us.’ So the work we do is less likely to make the difference between it being soluble or not. Because there’s this other smarter person.\nAnd then the other thing is like, what should you be aiming for? 
To the extent there’s a clean problem here which one could hope to solve, or one should bite off as a chunk, what fits in conceptually the same problem versus what’s like– you know, an analogy I sometimes make is, if you build an AI that’s doing important stuff, it might mess up in all sorts of ways. But when you’re asking, ‘Is my AI going to mess up when building a nuclear reactor?’ It’s a thing worth reasoning about as an AI person, but also like it’s worth splitting into like– part of that’s an AI problem, and part of that’s a problem about understanding managing nuclear waste. Part of that should be done by people reasoning about nuclear waste and part of it should be done by people reasoning about AI.\nThis is a little subtle because both of the problems have to do with AI. I would say my relationship with that is similar to like, suppose you told me that some future point, some smart people might make an AI. There’s just a meta and object level on which you could hope to help with the problem.\nI’m hoping to help with the problem on the object level in the sense that we are going to do research which helps people align AI, and in particular, will help the future AI align the next AI. Because it’s like people. It’s at that level, rather than being like, ‘We’re going to construct a constitution of that AI such that when it builds future AI it will always definitely work’. This is related to like– there’s this old argument about recursive self-improvement. It’s historically figured a lot in people’s discussion of why the problem is hard, but on a naive perspective it’s not obvious why it should, because you do only a small number of large modifications before your systems are sufficiently intelligent relative to you that it seems like your work should be obsolete. Plus like, them having a bunch of detailed knowledge on the ground about what’s going down.\nIt seems unclear to me how– yeah, this is related to our disagreement– how much you’re happy just deferring to the future people and being like, ‘Hope that they’ll cope’. Maybe they won’t even cope by solving the problem in the same way, they might cope by, the crazy AIs that we built reach the kind of agreement that allows them to not build even crazier AIs in the same way that we might do that. I think there’s some general frame of, I’m just taking responsibility for less, and more saying, can we leave the future people in a situation that is roughly as good as our situation? And by future people, I mean mostly AIs.\nRonny Fernandez: Right. The two things that you think might explain your relative optimism are something like: Maybe we can get the problem to smarter agents that are humans. Maybe we can leave the problem to smarter agents that are not humans.\nPaul Christiano: Also a lot of disagreement about the problem. Those are certainly two drivers. They’re not exhaustive in the sense that there’s also a huge amount of disagreement about like, ‘How hard is this problem?’ Which is some combination of like, ‘How much do we know about it?’ Where they’re more like, ‘Yeah, we’ve thought about it a bunch and have some views.’ And I’m like, ‘I don’t know, I don’t think I really know shit.’ Then part of it is concretely there’s a bunch of– on the object level, there’s a bunch of arguments about why it would be hard or easy so we don’t reach agreement. We consistently disagree on lots of those points.\nRonny Fernandez: Do you think the goal state for you guys is the same though? 
If I gave you guys a bunch of AGIs, would you guys agree about which ones are aligned and which ones are not? If you could know all of their behaviors?\nPaul Christiano: I think at that level we’d probably agree. We don’t agree more broadly about what constitutes a win state or something. They have this more expansive conception– or I guess it’s narrower– that the win state is supposed to do more. They are imagining more that you’ve resolved this whole list of future challenges. I’m more not counting that.\nWe’ve had this… yeah, I guess I now mostly use intent alignment to refer to this problem where there’s risk of ambiguity… the problem that I used to call AI alignment. There was a long obnoxious back and forth about what the alignment problem should be called. MIRI does use aligned AI to be like, ‘an AI that produces good outcomes when you run it’. Which I really object to as a definition of aligned AI a lot. So if they’re using that as their definition of aligned AI, we would probably disagree.\nRonny Fernandez: Shifting terms or whatever… one thing that they’re trying to work on is making an AGI that has a property that is also the property you’re trying to make sure that AGI has.\nPaul Christiano: Yeah, we’re all trying to build an AI that’s trying to do the right thing.\nRonny Fernandez: I guess I’m thinking more specifically, for instance, I’ve heard people at MIRI say something like, they want to build an AGI that I can tell it, ‘Hey, figure out how to copy a strawberry, and don’t mess anything else up too badly.’ Does that seem like the same problem that you’re working on?\nPaul Christiano: I mean it seems like in particular, you should be able to do that. I think it’s not clear whether that captures all the complexity of the problem. That’s just sort of a question about what solutions end up looking like, whether that turns out to have the same difficulty. \nThe other things you might think are involved that are difficult are… well, I guess one problem is just how you capture competitiveness. Competitiveness for me is a key desideratum. And it’s maybe easy to elide in that setting, because it just makes a strawberry. Whereas I am like, if you make a strawberry literally as well as anyone else can make a strawberry, it’s just a little weird to talk about. And it’s a little weird to even formalize what competitiveness means in that setting. I think you probably can, but whether or not you do that’s not the most natural or salient aspect of the situation. \nSo I probably disagree with them about– I’m like, there are probably lots of ways to have agents that make strawberries and are very smart. That’s just another disagreement that’s another function of the same basic, ’How hard is the problem’ disagreement. I would guess relative to me, in part because of being more pessimistic about the problem, MIRI is more willing to settle for an AI that does one thing. And I care more about competitiveness.\nAsya Bergal: Say you just learn that prosaic AI is just not going to be the way we get to AGI. How does that make you feel about the IDA approach versus the MIRI approach?\nPaul Christiano: So my overall stance when I think about alignment is, there’s a bunch of possible algorithms that you could use. And the game is understanding how to align those algorithms. And it’s kind of a different game. There’s a lot of common subproblems in between different algorithms you might want to align, it’s potentially a different game for different algorithms. That’s an important part of the answer. 
I’m mostly focusing on the ‘align this particular’– I’ll call it learning, but it’s a little bit more specific than learning– where you search over policies to find a policy that works well in practice. If we’re not doing that, then maybe that solution is totally useless, maybe it has common subproblems with the solution you actually need. That’s one part of the answer.\nAnother big difference is going to be, timelines views will shift a lot if you’re handed that information. So it will depend exactly on the nature of the update. I don’t have a strong view about whether it makes my timelines shorter or longer overall. Maybe you should bracket that though.\nIn terms of returning to the first one of trying to align particular algorithms, I don’t know. I think I probably share some of the MIRI persp– well, no. It feels to me like there’s a lot of common subproblems. Aligning expert systems seems like it would involve a lot of the same reasoning as aligning learners. To the extent that’s true, probably future stuff also will involve a lot of the same subproblems, but I doubt the algorithm will look the same. I also doubt the actual algorithm will look anything like a particular pseudocode we might write down for iterated amplification now.\nAsya Bergal: Does iterated amplification in your mind rely on this thing that searches through policies for the best policy? The way I understand it, it doesn’t feel like it necessarily does.\nPaul Christiano: So, you use this distillation step. And the reason you want to do amplification, or this short-hop, expensive amplification, is because you interleave it with this distillation step. And I normally imagine the distillation step as being, learn a thing which works well in practice on a reward function defined by the overseer. You could imagine other things that also needed to have this framework, but it’s not obvious whether you need this step if you didn’t somehow get granted something like the–\nAsya Bergal: That you could do the distillation step somehow.\nPaul Christiano: Yeah. It’s unclear what else would– so another example of a thing that could fit in, and this maybe makes it seem more general, is if you had an agent that was just incentivized to make lots of money. Then you could just have your distillation step be like, ‘I randomly check the work of this person, and compensate them based on the work I checked’. That’s a suggestion of how this framework could end up being more general.\nBut I mostly do think about it in the context of learning in particular. I think it’s relatively likely to change if you’re not in that setting. Well, I don’t know. I don’t have a strong view. I’m mostly just working in that setting, mostly because it seems reasonably likely, seems reasonably likely to have a bunch in common, learning is reasonably likely to appear even if other techniques appear. That is, learning is likely to play a part in powerful AI even if other techniques also play a part.\nAsya Bergal: Are there other people or resources that you think would be good for us to look at if we were looking at the optimism view?\nPaul Christiano: Before we get to resources or people, I think one of the basic questions is, there’s this perspective which is fairly common in ML, which is like, ‘We’re kind of just going to do a bunch of stuff, and it’ll probably work out’. That’s probably the basic thing to be getting at. 
How right is that?\nThis is the bad view of safety conditioned on– I feel like prosaic AI is in some sense the worst– seems like about as bad as things would have gotten in terms of alignment. Where, I don’t know, you try a bunch of shit, just a ton of stuff, a ton of trial and error seems pretty bad. Anyway, this is a random aside maybe more related to the previous point. But yeah, this is just with alignment. There’s this view in ML that’s relatively common that’s like, we’ll try a bunch of stuff to get the AI to do what we want, it’ll probably work out. Some problems will come up. We’ll probably solve them. I think that’s probably the most important thing in the optimism vs pessimism side.\nAnd I don’t know, I mean this has been a project that like, it’s a hard project. I think the current state of affairs is like, the MIRI folk have strong intuitions about things being hard. Essentially no one in… very few people in ML agree with those, or even understand where they’re coming from. And even people in the EA community who have tried a bunch to understand where they’re coming from mostly don’t. Mostly people either end up understanding one side or the other and don’t really feel like they’re able to connect everything. So it’s an intimidating project in that sense. I think the MIRI people are the main proponents of the everything is doomed, the people to talk to on that side. And then in some sense there’s a lot of people on the other side who you can talk to, and the question is just, who can articulate the view most clearly? Or who has most engaged with the MIRI view such that they can speak to it?\nRonny Fernandez: Those are people I would be particularly interested in. If there are people that understand all the MIRI arguments but still have broadly the perspective you’re describing, like some problems will come up, probably we’ll fix them.\nPaul Christiano: I don’t know good– I don’t have good examples of people for you. I think most people just find the MIRI view kind of incomprehensible, or like, it’s a really complicated thing, even if the MIRI view makes sense in its face. I don’t think people have gotten enough into the weeds. It really rests a lot right now on this fairly complicated cluster of intuitions. I guess on the object level, I think I’ve just engaged a lot more with the MIRI view than most people who are– who mostly take the ‘everything will be okay’ perspective. So happy to talk on the object level, and speaking more to arguments. I think it’s a hard thing to get into, but it’s going to be even harder to find other people in ML who have engaged with the view that much.\nThey might be able to make other general criticisms of like, here’s why I haven’t really… like it doesn’t seem like a promising kind of view to think about. I think you could find more people who have engaged at that level. I don’t know who I would recommend exactly, but I could think about it. Probably a big question will be who is excited to talk to you about it.\nAsya Bergal: I am curious about your response to MIRI’s object level arguments. Is there a place that exists somewhere?\nPaul Christiano: There’s some back and forth on the internet. I don’t know if it’s great. There’s some LessWrong posts. Eliezer for example wrote this post about why things were doomed, why I in particular was doomed. 
I don’t know if you read that post.\nAsya Bergal: I can also ask you about it now, I just don’t want to take too much of your time if it’s a huge body of things.\nPaul Christiano: The basic argument would be like, 1) On paper I don’t think we yet have a good reason to feel doomy. And I think there’s some basic research intuition about how much a problem– suppose you poke at a problem a few times, and you’re like ‘Agh, seems hard to make progress’. How much do you infer that the problem’s really hard? And I’m like, not much. As a person who’s poked at a bunch of problems, let me tell you, that often doesn’t work and then you solve it in like 10 years of effort.\nSo that’s one thing. That’s a point where I have relatively little sympathy for the MIRI way. That’s one set of arguments: is there a good way to get traction on this problem? Are there clever algorithms? I’m like, I don’t know, I don’t feel like the kind of evidence we’ve seen is the kind of evidence that should be persuasive. As some evidence in that direction, I’d be like, I have not been thinking about this that long. I feel like there have often been things that felt like, or that MIRI would have defended as like, here’s a hard obstruction. Then you think about it and you’re actually like, ‘Here are some things you can do.’ And it may still be an obstruction, but it’s no longer quite so obvious where it is, and there were avenues of attack.\nThat’s one thing. The second thing is like, a metaphor that makes me feel good– MIRI talks a lot about the evolution analogy. If I imagine the evolution problem– so if I’m a person, and I’m breeding some animals, I’m breeding some superintelligence. Suppose I wanted to breed an animal modestly smarter than humans that is really docile and friendly. I’m like, I don’t know man, that seems like it might work. That’s where I’m at. I think they are… it’s been a little bit hard to track down this disagreement, and I think this is maybe in a fresher, rawer state than the other stuff, where we haven’t had enough back and forth.\nBut I’m like, it doesn’t sound necessarily that hard. I just don’t know. I think their position, their position when they’ve written something has been a little bit more like, ‘But you couldn’t breed a thing that, after undergoing radical changes in intelligence or situation, would remain friendly’. But then I’m normally like, but it’s not clear why that’s needed? I would really just like to create something slightly superhuman, and it’s going to work with me to breed something that’s slightly smarter still that is friendly.\nWe haven’t really been able to get traction on that. I think they have an intuition that maybe there’s some kind of invariance and things become gradually more unraveled as you go on. Whereas I have more intuition that it’s plausible. After this generation, there’s just smarter and smarter people thinking about how to keep everything on the rails. It’s very hard to know.\nThat’s the second thing. I have found that really… that feels like it gets to the heart of some intuitions that are very different, and I don’t understand what’s up there. There’s a third category which is like, on the object level, there’s a lot of directions that I’m enthusiastic about where they’re like, ‘That seems obviously doomed’. So you could divide those up into the two problems.
There’s the family of problems that are more like the inner alignment problem, and then outer alignment stuff.\nOn the inner alignment stuff, I haven’t thought that much about it, but examples of things that I’m optimistic about that they’re super pessimistic about are like, stuff that looks more like verification, or maybe stepping back even for that, there’s this basic paradigm of adversarial training, where I’m like, it seems close to working. And you could imagine it being like, it’s just a research problem to fill in the gaps. Whereas they’re like, that’s so not the kind of thing that would work. I don’t really know where we’re at with that. I do see there are formal obstructions to adversarial training in particular working. I’m like, I see why this is not yet a solution. For example, you can have this case where there’s a predicate that the model checks, and it’s easy to check but hard to construct examples. And then in your adversarial training you can’t ever feed an example where it’ll fail. So we get into like, is it plausible that you can handle that problem with either 1) Doing something more like verification, where you say, you ask them not to perform well on real inputs but on pseudo inputs. Or like, you ask the attacker just to show how it’s conceivable that the model could do a bad thing in some sense.\nThat’s one possible approach, where the other would be something more like interpretability, where you say like, ‘Here’s what the model is doing. In addition to it’s behavior we get this other signal that the paper was depending on this fact, its predicate paths, which it shouldn’t have been dependent on.’ The question is, can either of those yield good behavior? I’m like, I don’t know, man. It seems plausible. And they’re like ‘Definitely not.’ And I’m like, ‘Why definitely not?’ And they’re like ‘Well, that’s not getting at the real essence of the problem.’ And I’m like ‘Okay, great, but how did you substantiate this notion of the real essence of the problem? Where is that coming from? Is that coming from a whole bunch of other solutions that look plausible that failed?’ And their take is kind of like, yes, and I’m like, ‘But none of those– there weren’t actually even any candidate solutions there really that failed yet. You’ve got maybe one thing, or like, you showed there exists a problem in some minimal sense.’ This comes back to the first of the three things I listed. But it’s a little bit different in that I think you can just stare at particular things and they’ll be like, ‘Here’s how that particular thing is going to fail.’ And I’m like ‘I don’t know, it seems plausible.’\nThat’s on inner alignment. And there’s maybe some on outer alignment. I feel like they’ve given a lot of ground in the last four years on how doomy things seem on outer alignment. I think they still have some– if we’re talking about amplification, I think the position would still be, ‘Man, why would that agent be aligned? It doesn’t at all seem like it would be aligned.’ That has also been a little bit surprisingly tricky to make progress on. I think it’s similar, where I’m like, yeah, I grant the existence of some problem or some thing which needs to be established, but I don’t grant– I think their position would be like, this hasn’t made progress or just like, pushed around the core difficulty. I’m like, I don’t grant the conception of the core difficulty in which this has just pushed around the core difficulty. 
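[Illustrative aside, not part of the conversation: the “easy to check, hard to instantiate” failure mode described above can be made concrete with a toy sketch. Everything below (the secret trigger, the function names, the random search) is hypothetical, chosen only to show why a purely behavioral adversarial search can miss a bad branch that a cheap check defines; it is not anyone’s actual system or proposal.]

import hashlib
import random

# Toy stand-in for a learned policy: it behaves well unless the input satisfies
# a predicate that is cheap to check but, by construction, hard to instantiate.
SECRET_DIGEST = hashlib.sha256(b"rare-deployment-condition").hexdigest()

def policy(x: str) -> str:
    # Checking the predicate is easy and fast...
    if hashlib.sha256(x.encode()).hexdigest() == SECRET_DIGEST:
        return "defect"
    return "behave"

def naive_adversarial_search(n_attempts: int = 100_000) -> int:
    # ...but a red team that only searches over behavior never finds the bad
    # branch, because constructing a preimage of SECRET_DIGEST is infeasible.
    random.seed(0)
    return sum(policy(f"input-{random.getrandbits(64)}") == "defect"
               for _ in range(n_attempts))

if __name__ == "__main__":
    # Prints 0 failures, even though a failing input exists by construction;
    # this is why one might want an extra signal beyond observed behavior,
    # such as verification over pseudo-inputs or interpretability.
    print("failures found by naive adversarial search:", naive_adversarial_search())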
I think that… substantially in that kind of thing, being like, here’s an approach that seems plausible, we don’t have a clear obstruction but I think that it is doomed for these deep reasons. I have maybe a higher bar for what kind of support the deep reasons need.\nI also just think on the merits, they have not really engaged with– and this is partly my responsibility for not having articulated the arguments in a clear enough way– although I think they have not engaged with even the clearest articulation as of two years ago of what the hope was. But that’s probably on me for not having an even clearer articulation than that, and also definitely not up to them to engage with anything. To the extent it’s a moving target, not up to them to engage with the most recent version. Where, most recent version– the proposal doesn’t really change that much, or like, the case for optimism has changed a little bit. But it’s mostly just like, the state of argument concerning it, rather than the version of the scheme.", "url": "https://aiimpacts.org/conversation-with-paul-christiano/", "title": "Conversation with Paul Christiano", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-09-11T23:05:12+00:00", "paged_url": "https://aiimpacts.org/feed?paged=11", "authors": ["Asya Bergal"], "id": "bd1d91144e616b56048446a0d8291e45", "summary": []} {"text": "Paul Christiano on the safety of future AI systems\n\nBy Asya Bergal, 11 September 2019\nPaul Christiano\nAs part of our AI optimism project, we talked to Paul Christiano about why he is relatively hopeful about the arrival of advanced AI going well. Paul Christiano works on AI alignment on the safety team at OpenAI. He is also a research associate at FHI, a board member at Ought, and a recent graduate of the theory group at UC Berkeley.\n\nPaul gave us a number of key disagreements he has with researchers at the Machine Intelligence Research Institute (MIRI), including:\n\nPaul thinks there isn’t good evidence now to justify a confident pessimistic position on AI, so the fact that people aren’t worried doesn’t mean they won’t be worried when we’re closer to human-level intelligence.\nPaul thinks that many algorithmic or theoretical problems are either solvable within 100 years or provably impossible.\nPaul thinks not having solved many AI safety problems yet shouldn’t give us much evidence about their difficulty.\nPaul’s criterion for alignment success isn’t ensuring that all future intelligences are aligned, it’s leaving the next generation of intelligences in at least as good of a place as humans were when building them.\nPaul doesn’t think that the evolution analogy suggests that we are doomed in our attempts to align smarter AIs; e.g. if we tried, it seems likely that we could breed animals that are slightly smarter than humans and are also friendly and docile.\nPaul cares about trying to build aligned AIs that are competitive with unaligned AIs, whereas MIRI is going for a less ambitious goal of building a narrow aligned AI without destroying the world.\nUnlike MIRI, Paul is relatively optimistic about verification and interpretability for the inner alignment of AI systems.\n\nPaul also talked about evidence that might change his views; in particular, observing whether future AIs become deceptive and how they respond to simple oversight. 
He also described how research into the AI alignment approach he’s working on now, iterated distillation and amplification (IDA), is likely to be useful even if human-level intelligences don’t look like the AI systems we have today.\nA full transcript of our conversation, lightly edited for concision and clarity, can be found here.", "url": "https://aiimpacts.org/paul-christiano-on/", "title": "Paul Christiano on the safety of future AI systems", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-09-11T21:40:03+00:00", "paged_url": "https://aiimpacts.org/feed?paged=11", "authors": ["Asya Bergal"], "id": "5c0e5568c595e1ab270f0053da23c5ae", "summary": []} {"text": "Soft takeoff can still lead to decisive strategic advantage\n\nBy Daniel Kokotajlo, 11 September 2019\nCrossposted from the AI Alignment Forum. May contain more technical jargon than usual.\n[Epistemic status: Argument by analogy to historical cases. Best case scenario it’s just one argument among many. Edit: Also, thanks to feedback from others, especially Paul, I intend to write a significantly improved version of this post in the next two weeks.]\nI have on several occasions heard people say things like this:\n\nThe original Bostrom/Yudkowsky paradigm envisioned a single AI built by a single AI project, undergoing intelligence explosion all by itself and attaining a decisive strategic advantage as a result. However, this is very unrealistic. Discontinuous jumps in technological capability are very rare, and it is very implausible that one project could produce more innovations than the rest of the world combined. Instead we should expect something more like the Industrial Revolution: Continuous growth, spread among many projects and factions, shared via a combination of trade and technology stealing. We should not expect any one project or AI to attain a decisive strategic advantage, because there will always be other projects and other AI that are only slightly less powerful, and coalitions will act to counterbalance the technological advantage of the frontrunner. (paraphrased)\n\nProponents of this view often cite Paul Christiano in support. Last week I heard him say he thinks the future will be “like the Industrial Revolution but 10x-100x faster.”\nIn this post, I assume that Paul’s slogan for the future is correct and then nevertheless push back against the view above. Basically, I will argue that even if the future is like the industrial revolution only 10x-100x faster, there is a 30%+ chance that it will involve a single AI project (or a single AI) with the ability to gain a decisive strategic advantage, if they so choose. (Whether or not they exercise that ability is another matter.)\nWhy am I interested in this? Do I expect some human group to take over the world? No; instead what I think is that (1) an unaligned AI in the leading project might take over the world, and (2) A human project that successfully aligns their AI might refrain from taking over the world even if they have the ability to do so, and instead use their capabilities to e.g. help the United Nations enforce a ban on unauthorized AGI projects.\nNational ELO ratings during the industrial revolution and the modern era\nIn chess (and some other games) ELO rankings are used to compare players. An average club player might be rank 1500; the world chess champion might be 2800; computer chess programs are even better. 
If one player has 400 points more than another, it means the first player would win with ~90% probability.\nWe could apply this system to compare the warmaking abilities of nation-states and coalitions of nation-states. For example, in 1941 perhaps we could say that the ELO rank of the Axis powers was ~300 points lower than the ELO rank of the rest of the world combined (because what in fact happened was the rest of the world combining to defeat them, but it wasn’t a guaranteed victory). We could add that in 1939 the ELO rank of Germany was ~400 points higher than that of Poland, and that the ELO rank of Poland was probably 400+ points higher than that of Luxembourg.\nWe could make cross-temporal fantasy comparisons too. The ELO ranking of Germany in 1939 was probably ~400 points greater than that of the entire world circa 1910, for example. (Visualize the entirety of 1939 Germany teleporting back in time to 1910, and then imagine the havoc it would wreak.)\nClaim 1A: If we were to estimate the ELO rankings of all nation-states and sets of nation-states (potential alliances) over the last 300 years, the rank of the most powerful nation-state in a given year would on several occasions be 400+ points greater than the rank of the entire world combined 30 years prior.\nClaim 1B: Over the last 300 years there have been several occasions in which one nation-state had the capability to take over the entire world of 30 years prior.\nI’m no historian, but I feel fairly confident in these claims.\n\nIn naval history, the best fleets in the world in 1850 were obsolete by 1860 thanks to the introduction of iron-hulled steamships, and said steamships were themselves obsolete a decade or so later, and then those ships were obsoleted by the Dreadnought, and so on… This process continued into the modern era. By “Obsoleted” I mean something like “A single ship of the new type could defeat the entire combined fleet of vessels of the old type.”\nA similar story could be told about air power. In a dogfight between planes of year 19XX and year 19XX+30, the second group of planes will be limited only by how much ammunition they can carry.\nSmall technologically advanced nations have regularly beaten huge sprawling empires and coalitions. (See: Colonialism)\nThe entire world has been basically carved up between the small handful of most-technologically advanced nations for two centuries now. For example, any of the Great Powers of 1910 (plus the USA) could have taken over all of Africa, Asia, South America, etc. if not for the resistance that the other great powers would put up. The same was true 40 years later and 40 years earlier.\n\nI conclude from this that if some great power in the era kicked off by the industrial revolution had managed to “pull ahead” of the rest of the world more effectively than it actually did–30 years more effectively, in particular–it really would have been able to take over the world.\nClaim 2: If the future is like the Industrial Revolution but 10x-100x faster, then correspondingly the technological and economic power granted by being 3 – 0.3 years ahead of the rest of the world should be enough to enable a decisive strategic advantage.\nThe question is, how likely is it that one nation/project/AI could get that far ahead of everyone else? After all, it didn’t happen in the era of the Industrial Revolution.
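[Illustrative aside, not from the original post: the “400 points more means roughly a 90% chance of winning” shorthand above is just the standard Elo expected-score formula. The sketch below evaluates that formula for a few rating gaps; the gaps themselves are arbitrary examples.]

def elo_expected_score(rating_gap: float) -> float:
    # Standard Elo formula; read as a win probability if draws are ignored.
    return 1.0 / (1.0 + 10.0 ** (-rating_gap / 400.0))

if __name__ == "__main__":
    for gap in (100, 300, 400, 800):
        print(f"{gap:>4}-point gap -> ~{elo_expected_score(gap):.0%} expected score")
    # A 400-point gap gives roughly 91%, matching the ~90% figure above.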
While we did see a massive concentration of power into a few nations on the leading edge of technological capability, there were always at least a few such nations and they kept each other in check.\nThe “surely not faster than the rest of the world combined” argument\nSometimes I have exchanges like this:\n\nMe: Decisive strategic advantage is plausible!\nInterlocutor: What? That means one entity must have more innovation power than the rest of the world combined, to be able to take over the rest of the world!\nMe: Yeah, and that’s possible after intelligence explosion. A superintelligence would totally have that property.\nInterlocutor: Well yeah, if we dropped a superintelligence into a world full of humans. But realistically the rest of the world will be undergoing intelligence explosion too. And indeed the world as a whole will undergo a faster intelligence explosion than any particular project could; to think that one project could pull ahead of everyone else is to think that, prior to intelligence explosion, there would be a single project innovating faster than the rest of the world combined!\n\nThis section responds to that by way of sketching how one nation/project/AI might get 3 – 0.3 years ahead of everyone else.\nToy model: There are projects which research technology, each with their own “innovation rate” at which they produce innovations from some latent tech tree. When they produce innovations, they choose whether to make them public or private. They have access to their private innovations + all the public innovations.\nIt follows from the above that the project with access to the most innovations at any given time will be the project that has the most hoarded innovations, even though the set of other projects has a higher combined innovation rate and also a larger combined pool of accessible innovations. Moreover, the gap between the leading project and the second-best project will increase over time, since the leading project has a slightly higher rate of production of hoarded innovations, but both projects have access to the same public innovations\nThis model leaves out several important things. First, it leaves out the whole “intelligence explosion” idea: A project’s innovation rate should increase as some function of how many innovations they have access to. Adding this in will make the situation more extreme and make the gap between the leading project and everyone else grow even bigger very quickly.\nSecond, it leaves out reasons why innovations might be made public. Realistically there are three reasons: Leaks, spies, and selling/using-in-a-way-that-makes-it-easy-to-copy.\nClaim 3: Leaks & Spies: I claim that the 10x-100x speedup Paul prophecies will not come with an associated 10x-100x increase in the rate of leaks and successful spying. Instead the rate of leaks and successful spying will be only a bit higher than it currently is.\nThis is because humans are still humans even in this soft takeoff future, still in human institutions like companies and governments, still using more or less the same internet infrastructure, etc. New AI-related technologies might make leaking and spying easier than it currently is, but they also might make it harder. 
I’d love to see an in-depth exploration of this question because I don’t feel particularly confident.\nBut anyhow, if it doesn’t get much easier than it currently is, then going 0.3 to 3 years without a leak is possible, and more generally it’s possible for the world’s leading project to build up a 0.3-3 year lead over the second-place project. For example, the USSR had spies embedded in the Manhattan Project but it still took them 4 more years to make their first bomb.\nClaim 4: Selling etc. I claim that the 10x-100x speedup Paul prophecies will not come with an associated 10x-100x increase in the budget pressure on projects to make money fast. Again, today AI companies regularly go years without turning a profit — DeepMind, for example, has never turned a profit and is losing something like a billion dollars a year for its parent company — and I don’t see any particularly good reason to expect that to change much.\nSo yeah, it seems to me that it’s totally possible for the leading AI project to survive off investor money and parent company money (or government money, for that matter!) for five years or so, while also keeping the rate of leaks and spies low enough that the distance between them and their nearest competitor increases rather than decreases. (Note how this doesn’t involve them “innovating faster than the rest of the world combined.”)\nSuppose they could get a 3-year lead this way, at the peak of their lead. Is that enough?\nWell, yes. A 3-year lead during a time 10x-100x faster than the Industrial Revolution would be like a 30-300 year lead during the era of the Industrial Revolution. As I argued in the previous section, even the low end of that range is probably enough to get a decisive strategic advantage.\nIf this is so, why didn’t nations during the Industrial Revolution try to hoard their innovations and gain decisive strategic advantage?\nEngland actually did, if I recall correctly. They passed laws and stuff to prevent their early Industrial Revolution technology from spreading outside their borders. They were unsuccessful–spies and entrepreneurs dodged the customs officials and snuck blueprints and expertise out of the country. It’s not surprising that they weren’t able to successfully hoard innovations for 30+ years! Entire economies are a lot more leaky than AI projects.\nWhat a “Paul Slow” soft takeoff might look like according to me\nAt some point early in the transition to much faster innovation rates, the leading AI companies “go quiet.” Several of them either get huge investments or are nationalized and given effectively unlimited funding. The world as a whole continues to innovate, and the leading companies benefit from this public research, but they hoard their own innovations to themselves. Meanwhile the benefits of these AI innovations are starting to be felt; all projects have significantly increased (and constantly increasing) rates of innovation. But the fastest increases go to the leading project, which is one year ahead of the second-best project. (This sort of gap is normal for tech projects today, especially the rare massively-funded ones, I think.) Perhaps via a combination of spying, selling, and leaks, that lead narrows to six months midway through the process. But by that time things are moving so quickly that a six months’ lead is like a 15-150 year lead during the era of the Industrial Revolution.
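[Illustrative aside, not from the original post: a minimal numerical sketch of the toy model described earlier, in which every project can use the shared pool of public innovations while the leading project hoards its own output at a slightly higher rate than the runner-up. The rates and time horizon below are arbitrary assumptions, chosen only to show that under the model the leader’s edge in accessible innovations grows rather than shrinks.]

def innovation_gap(years: int,
                   leader_hoard_rate: float = 1.1,
                   runner_up_hoard_rate: float = 1.0,
                   public_rate: float = 3.0) -> float:
    # Each year the rest of the world publishes `public_rate` innovations,
    # while the two leading projects keep their own output private.
    public = public_rate * years
    leader_access = public + leader_hoard_rate * years
    runner_up_access = public + runner_up_hoard_rate * years
    return leader_access - runner_up_access

if __name__ == "__main__":
    for years in (10, 20, 30):
        print(f"after {years} years the leader's edge is "
              f"{innovation_gap(years):.1f} hoarded innovations")
    # The gap grows over time even though the public pool grows faster; adding
    # the post's feedback (innovation rate rising with access) would widen it further.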
It’s not guaranteed and perhaps still not probable, but at least it’s reasonably likely that the leading project will be able to take over the world if it chooses to.\nObjection: What about coalitions? During the industrial revolution, if one country did successfully avoid all leaks, the other countries could unite against them and make the “public” technology inaccessible to them. (Trade does something like this automatically, since refusing to sell your technology also lowers your income which lowers your innovation rate as a nation.)\nReply: Coalitions to share AI research progress will be harder than free-trade / embargo coalitions. This is because AI research progress is much more the result of rare smart individuals talking face-to-face with each other and much less the result of a zillion different actions of millions of different people, as the economy is. Besides, a successful coalition can be thought of as just another project, and so it’s still true that one project could get a decisive strategic advantage. (Is it fair to call “The entire world economy” a project with a decisive strategic advantage today? Well, maybe… but it feels a lot less accurate since almost everyone is part of the economy but only a few people would have control of even a broad coalition AI project.)\nAnyhow, those are my thoughts. Not super confident in all this, but it does feel right to me. Again, the conclusion is not that one project will take over the world even in Paul’s future, but rather that such a thing might still happen even in Paul’s future.\n\nThanks to Magnus Vinding for helpful conversation.", "url": "https://aiimpacts.org/soft-takeoff-can-still-lead-to-decisive-strategic-advantage/", "title": "Soft takeoff can still lead to decisive strategic advantage", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-09-11T18:39:38+00:00", "paged_url": "https://aiimpacts.org/feed?paged=11", "authors": ["Daniel Kokotajlo"], "id": "6253a9161cd6bc83c99c967b4e659179", "summary": []} {"text": "Ernie Davis on the landscape of AI risks\n\nBy Robert Long, 23 August 2019\nErnie Davis (NYU)\nEarlier this month, I spoke with Ernie Davis about why he is skeptical that risks from superintelligent AI are substantial and tractable enough to merit dedicated work. This was part of a larger project that we’ve been working on at AI Impacts, documenting arguments from people who are relatively optimistic about risks from advanced AI. \nDavis is a professor of computer science at NYU, and works on the representation of commonsense knowledge in computer programs. He wrote Representations of Commonsense Knowledge (1990) and will soon publish a book Rebooting AI (2019) with Gary Marcus. We reached out to him because of his expertise in artificial intelligence and because he wrote a critical review of Nick Bostrom’s Superintelligence. \nDavis told me, “the probability that autonomous AI is going to be one of our major problems within the next two hundred years, I think, is less than one in a hundred.” We spoke about why he thinks that, what problems in AI he thinks are more urgent, and what his key points of disagreement with Nick Bostrom are. 
A full transcript of our conversation, lightly edited for concision and clarity, can be found here.\n\n\n", "url": "https://aiimpacts.org/ernie-davis-on-the-landscape-of-ai-risks/", "title": "Ernie Davis on the landscape of AI risks", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-08-24T00:23:38+00:00", "paged_url": "https://aiimpacts.org/feed?paged=11", "authors": ["Rob Long"], "id": "1b472081d1b6f2f1231eaca3cf12327e", "summary": []} {"text": "Conversation with Ernie Davis\n\nAI Impacts spoke with computer scientist Ernie Davis about his views of AI risk. With his permission, we have transcribed this interview.\nParticipants\n\nErnest Davis – professor of computer science at the Courant Institute of Mathematical Science, New York University\nRobert Long – AI Impacts\n\nSummary\nWe spoke over the phone with Ernie Davis on August 9, 2019. Some of the topics we covered were:\n\nWhat Davis considers to be the most urgent risks from AI\nDavis’s disagreements with Nick Bostrom, Eliezer Yudkowsky, and Stuart Russell\n\nThe relationship between greater intelligence and greater power\nHow difficult it is to design a system that can be turned off\nHow difficult it would be to encode safe ethical principles in an AI system\n\n\nDavis’s evaluation of the likelihood that advanced, autonomous AI will be a major problem within the next two hundred years; and what evidence would change his mind\nChallenges and progress towards human-level AI\n\nThis transcript has been lightly edited for concision and clarity.\nTranscript\nRobert Long: You’re one of the few people, I think, who is an expert in AI, and is not necessarily embedded in the AI Safety community, but you have engaged substantially with arguments from that community. I’m thinking especially of your review of Superintelligence.1 2\nI was hoping we could talk a little bit more about your views on AI safety work. There’s a particular proposition that we’re trying to get people’s opinions on. The question is: Is it valuable for people to be expending significant effort doing work that purports to reduce the risk from advanced artificial intelligence? I’ve read some of your work; I can guess some of your views. But I was wondering: what would you say is your answer to that question, whether this kind of work is valuable to do now?\nErnie Davis: Well, a number of parts to the answer. In terms of short term—and “short” being not very short—short term risks from computer technology generally, this is very low priority. The risks from cyber crime, cyber terrorism, somebody taking hold of the insecurity of the internet of things and so on—that in particular is one of my bugaboos—are, I think, an awful lot more urgent. So there’s urgency; I certainly don’t see that this is especially urgent work. \nNow, some of the approaches are being taken to long term AI safety seem to me extremely far fetched. On the one hand the fears of people like Bostrom and Yudkowsky and to a lesser extent Stuart Russell—seem to me misdirected and the approaches they are proposing are also misdirected. I have a book with Gary Marcus which is coming out in September, and we have a chapter which is called ‘Trust’ which gives our opinions—which are pretty much convergent—at length. I can send you that chapter. 
\nRobert Long: Yes, I’d certainly be interested in that.\nErnie Davis: So, the kinds of things that Russell is proposing—Russell also has a book coming out in October, he is developing ideas that he’s already published about: the way to have safe AI is to have them be unsure about what the human goals are.3 And Yudkowsky develops similar ideas in his work, engages with them, and tries to measure their success. This all seems to me too clever by half. And I don’t think it’s addressing what the real problems are going to be.\nMy feeling is that the problem of AIs doing the wrong thing is a very large one—you know, just by sheer inadvertence and incompetent design. And the solution there, more or less, is to design them well and build in safety features of the kinds that one has in engineering, one has throughout engineering. Whenever one is doing an engineering project, one builds in—one designs for failure. And one has to do that with AI as well. The danger of AI being abused by bad human actors is a very serious danger. And that has to be addressed politically, like all problems involving bad human actors. \nAnd then there are directions in AI where I think it’s foolish to go. For instance it would be very foolish to build—it’s not currently technically feasible, but if it were, and it may at some point become technically feasible—to build robots that can reproduce themselves cheaply. And that’s foolish, but it’s foolish for exactly the same reason that you want to be careful about introducing new species. It’s why Australia got into trouble with the rabbits, namely: if you have a device that can reproduce itself and it has no predators, then it will reproduce itself and it gets to be a nuisance.\nAnd that’s almost separate. A device doesn’t have to be superintelligent to do that, in fact superintelligence probably just makes that harder because a superintelligent device is harder to build; a self replicating device might be quite easy to build on the cheap. It won’t survive as well as a superintelligent one, but if it can reproduce itself fast enough that doesn’t matter. So that kind of thing, you want to avoid.\nThere’s a question which we almost entirely avoided in our book, which people always ask all the time, which is, at what point do machines become conscious. And my answer to that—I’m not necessarily speaking for Gary—my answer to that is that you want to avoid building machines which you have any reason to suspect are conscious. Because once they become conscious, they simply raise a whole collection of ethical issues like—”is it ethical turn them off?”, is the first one, and “what are your responsibilities toward the thing?”. And so you want to continue to have programs which, like current programs, one can think of purely as tools which we can use, which it is ethical to use as we choose.\nSo that’s a thing to be avoided, it seems to me, in AI research. And whether people are wise enough to avoid that, I don’t know. I would hope so. So in some ways I’m more conservative than a lot of people in the AI safety world—in the sense that they assume that self replicating robots will be a thing and that self-aware robots will be a thing and the object is to design them safely. My feeling is that research shouldn’t go there at all.\nRobert Long: I’d just like to dig in on a few more of those claims in particular. I would just like to hear a little bit more about what you think the crux of your disagreement is with people like Yudkowsky and Russell and Bostrom. 
Maybe you can pick one because they all have different views. So, you said that you feel that their fears are far-fetched and that their approaches are far-fetched as well. Can you just say a little bit about more about why you think that? A few parts: what you think is the core fear or prediction that their work is predicated on, and why you don’t share that fear or prediction.\nErnie Davis: Both Bostrom very much, and Yudkowsky very much, and Russell to some extent, have this idea that if you’re smart enough you get to be God. And that just isn’t correct. The idea that a smart enough machine can do whatever it want—there’s a really good essay by Steve Pinker, by the way, have you seen it?4\nRobert Long: I’ve heard of it but have not read it.\nErnie Davis: I’ll send you the link. A couple of good essays by Pinker, I think. So, it’s not the case that once superintelligence is reached, then times become messianic if they’re benevolent and dystopian if they’re not. They’re devices. They are limited in what they can do. And the other thing is that we are here first, and we should be able to design them in such a way that they’re safe. It is not really all that difficult to design an AI or a robot which you can turn off and which cannot block you from turning it off.\nAnd it seems to me a mistake to believe otherwise. With two caveats. One is that, if you embed it in a situation where it’s very costly to turn off—it’s controlling the power grid and the power grid won’t work if you turn it off, then you’re in trouble. And secondly, if you have malicious actors who are deliberately designing, building devices which can’t be turned off. It’s not that it’s impossible to build an intelligent machine that is very dangerous.\nBut that doesn’t require superintelligence. That’s possible with very limited intelligence, and the more intelligent, to some extent, the harder it is. But again that’s a different problem. It doesn’t become a qualitatively different problem once the thing has exceeded some predefined level of intelligence.\nRobert Long: You might be even more familiar with these arguments than I am—in fact I can’t really recite them off the top of my head—but I suppose Bostrom and Yudkowsky, and maybe Russell too, do talk about this at length. And I guess they’re they’re always like, Well, you might think you have thought of a good failsafe for ensuring these things won’t get un-turn-offable. But, so they say, you’re probably underestimating just how weird things can get once you have superintelligence. \nI suppose maybe that’s precisely what you’re disagreeing with: maybe they’re overestimating how weird and difficult things get once things are above human level. Why do you think you and they have such different hunches, or intuitions, about how weird things can get?\nErnie Davis: I don’t know, I think they’re being unrealistic. If you take a 2019 genius and you put him into a Neolithic village, they can kill him no matter how intelligent he is, and how much he knows and so on. \nRobert Long: I’ve been trying to trace the disagreements here and I think a lot of it does just maybe come down to people’s intuitions about what a very smart person can do if put in a situation where they are far smarter than other people. I think this actually comes up in someone who responded to your review. 
They claim, “I think if I went back to the time of the Romans I could probably accrue a lot of power just by knowing things that they did not know.”5\nErnie Davis: I missed that, or I forgot that or something.\nRobert Long: Trying to locate the crux of the disagreement: one key disagreement is what the relationship is between greater intellectual capacity and greater physical power and control over the world. Does that seem safe to say, that that’s one thing you disagree with them about?\nErnie Davis: I think so, yes. That’s one point of disagreement. A second point of disagreement is the difficulty of—the point which we make in the book at some length is that, if you’re going to have an intelligence that’s in any way comparable to human, you’re going to have to build in common sense. It’s going to have to have a large degree of commonsense understanding. And once an AI has common sense it will realize that there’s no point in turning the world into paperclips, and that there’s no point in committing mass murder to go fetch the milk—Russell’s example—and so on. My feeling is that one can largely incorporate a moral sense, when it becomes necessary; you can incorporate moral rules into your robots.\nAnd one of the people who criticized my Bostrom paper said, well, philosophers haven’t solved the problems of ethics in 2,000 years, how do you think we’re going to solve them? And my feeling is we don’t have to come up with the ultimate solution to ethical problems. You just have to make sure that they understand it to a degree that they don’t do spectacularly foolish and evil things. And that seems to me doable.\nAnother point of disagreement with Bostrom in particular, and I think also Yudkowsky, is that they have the idea that ethical senses evolve—which is certainly true—and that a superintelligence, if well-designed, can be designed in such a way that it will itself evolve toward a superior ethical sense. And that this is the thing to do. Bostrom goes into this at considerable length: somehow, give it guidance toward an ethical sense which is beyond anything that we currently understand. That seems to me not very doable, but it would be a really bad thing to do if we could do it, because this super ethics might decide that the best thing to do is to exterminate the human population. And in some super-ethical sense that might be true, but we don’t want it to happen. So the belief in the super ethics—I have no belief, I have no faith in the super ethics, and I have even less faith that there’s some way of designing an AI so that as it grows superintelligent it will achieve super ethics in a comfortable way. So this all seems to me pie in the sky.\nRobert Long: So the key points of disagreement we have so far are the relationship between intelligence and power; and the second thing is, how hard is what we might call the safety problem. And it sounds like even if you became more worried about very powerful AIs, you think it would not require substantial research and effort and money (as some people think) to make them relatively safe?\nErnie Davis: Where I would put the effort in is into thinking about, from a legal regulatory perspective, what we want to do. That’s not an easy question.\nThe problem at the moment, the most urgent question, is the problem of fake news. We object to having bots spreading fake news. It’s not clear what the best way of preventing that is without infringing on free speech. So that’s a hard problem. And that is, I think, very well worth thinking about. 
But that’s of course a very different problem. The problems of security at the practical level—making sure that an adversary can’t take control of all the cars that are connected to the Internet and start using them as weapons—is, I think, a very pressing problem. But again that has nothing much to do with the AI safety projects that are underway.\nRobert Long: Kind of a broad question—I was curious to hear what you make of the mainstream AI safety efforts that are now occurring. My rough sense is since your review and since Superintelligence, AI safety really gained respectability and now there are AI safety teams at places like DeepMind and OpenAI. And not only do they work on the near-term stuff which you talk about, but they are run by people who are very concerned about the long term. What do you make of that trend?\nErnie Davis: The thing is, I haven’t followed their work very closely, to tell you the truth. So I certainly don’t want to criticize it very specifically. There are smart and well-intentioned people on these teams, and I don’t doubt that a lot of what they’re doing is good work. \nThe work I’m most enthusiastic about in that direction is problems that are fairly near term. And also autonomous weapons is a pretty urgent problem, and requires political action. So the more that can be done about keeping those under control the better.\nRobert Long: Do you think your views on what it will take before we ever get to human-level or more advanced AI, do you think that drives a lot of your opinions as well? For example, your own work on common sense and how hard of a problem that can be?6 7\nErnie Davis: Yeah sure, certainly it informs my views. It affects the question of urgency and it affects the question of what the actual problems are likely to be.\nRobert Long: What would you say is your credence, your evaluation of the likelihood, that without significant additional effort, advanced AI poses a significant risk of harm?\nErnie Davis: Well, the problem is that without more work on artificial intelligence, artificial intelligence poses no risk. And the distinction between work on AI, and work on AI safety—work on AI is an aspect of work on AI safety. So I’m not sure it’s a well-defined question.\nBut that’s a bit of a debate. What we mean is, if we get rid of all the AI safety institutes, and don’t worry about the regulation, and just let the powers that be do whatever they want to do, will advanced AI be a significant threat? There is certainly a sufficiently significant probability of that, but almost all of that probability has do with its misuse by bad actors.\nThe problem that AI will autonomously become a major threat, I put it at very small. The probability that people will start deploying AI in a destructive way and causing serious harm, to some extent or other, is fairly large. The probability that autonomous AI is going to be one of our major problems within the next two hundred years I think is less than one in a hundred.\nRobert Long: Ah, good. Thank you for parsing that question. It’s that last bit that I’m curious about. And what do you think are the key things that go into that low probability? It seems like there’s two parts: odds of it being a problem if it arises, and odds of it arising. 
I guess what I’m trying to get at is—again, uncertainty in all of this—but do you have hunches or ‘AI timelines’ as people call them, about how far away we are from human level intelligence being a real possibility?\nErnie Davis: I’d be surprised—well, I will not be surprised, because I will be dead—but I would be surprised if AI reached human levels of capacity across the board within the next 50 years.\nRobert Long: I suspect a lot of this is also found in your written work. But could you say briefly what you think are the things standing in the way, standing in between where we’re at now in our understanding of AI, and getting to that—where the major barriers or confusions or new discoveries to be made are?\nErnie Davis: Major barriers—well, there are many barriers. We don’t know how to give computers basic commonsense understanding of the world. We don’t know how to represent the meaning of either language or what the computer can see through vision. We don’t have a good theory of learning. Those, I think, are the main problems that I see and I don’t see that the current direction of work in AI is particularly aimed at those problems.\nAnd I don’t think it’s likely to solve those problems without a major turnaround. And the problems, I think, are very hard. And even after the field has turned around I think it will take decades before they’re solved.\nRobert Long: I suspect a lot of this might be what the book is about. But can you say what you think that turnaround is, or how you would characterize the current direction? I take it you mean something like deep learning and reinforcement learning?\nErnie Davis: Deep learning, end-to-end learning, is what I mean by the current direction. It is very much the current direction. And the turnaround, in one sentence, is that one has to engage with the problems of meaning, and with the problems of common sense knowledge.\nRobert Long: Can you think of plausible concrete evidence that would change your views one way or the other? Specifically, on these issues of the problem of safety, and what if any work should be done.\nErnie Davis: Well sure, I mean, if, on the one hand, progress toward understanding in a broad sense—if there’s startling progress on the problem of understanding then my timeline changes obviously, and that makes the problem harder.\nAnd if it turned out—this is an empirical question—if it turned out that certain types of AI systems inherently turned toward single minded pursuit of malevolence or toward their own purposes and so on. And it seems to me wildly unlikely, but it’s not unimaginable.\nOr of course, if in a social sense—if people start uncontrollably developing these things. I mean it always amazes me the amount of sheer malice in the cyber world, the number of people who are willing to hack systems and develop bugs for no reason. The people who are doing it to make money is one thing, I can understand them. The people do it simply out of the challenge and out of the spirit of mischief making—I’m surprised that there are so many. \nRobert Long: Can I ask a little bit more about what progress towards understanding looks like? What sort of tasks or behaviors? What does the arxiv paper that demonstrates that look like? 
What’s it called, and what is is the program doing, where you’re like, “Wow, this is this is a huge stride.”\nErnie Davis: I have a paper called “How to write science questions that are easy for people and hard for computers.”8 So once you get a response paper to that: “My system answers all the questions in this dataset which are easy for people and hard for computers.” That would be impressive. If you have a program that can read basic narrative text and answer questions about it or watch a video and answer questions, a film and answer questions about it—that would be impressive.\nNotes", "url": "https://aiimpacts.org/conversation-with-ernie-davis/", "title": "Conversation with Ernie Davis", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-08-23T23:35:20+00:00", "paged_url": "https://aiimpacts.org/feed?paged=11", "authors": ["Rob Long"], "id": "bd66a3ca85b024b1cac4cefa133363ab", "summary": []} {"text": "Evidence against current methods leading to human level artificial intelligence\n\nThis is a list of published arguments that we know of that current methods in artificial intelligence will not lead to human-level AI.\nDetails\nClarifications\nWe take ‘current methods’ to mean techniques for engineering artificial intelligence that are already known, involving no “qualitatively new ideas”.1 We have not precisely defined ‘current methods’. Many of the works we cite refer to currently dominant methods such as machine learning (especially deep learning) and reinforcement learning.\nBy human-level AI, we mean AI with a level of performance comparable to humans. We have in mind the operationalization of ‘high-level machine intelligence’ from our 2016 expert survey on progress in AI: “Say we have ‘high-level machine intelligence’ when unaided machines can accomplish every task better and more cheaply than human workers.”2\nBecause we are considering intelligent performance, we have deliberately excluded arguments that AI might lack certain ‘internal’ features, even if it manifests human-level performance.3 4 We assume, concurring with Chalmers (2010), that “If there are systems that produce apparently [human-level intelligent] outputs, then whether or not these systems are truly conscious or intelligent, they will have a transformative impact.”5\nMethods\nWe read well-known criticisms of current AI approaches of which we were already aware. Using these as a starting point, we searched for further sources and solicited recommendations from colleagues familiar with artificial intelligence.\nWe include arguments that sound plausible to us, or that we believe other researchers take seriously. Beyond that, we take no stance on the relative strengths and weaknesses of these arguments. \nWe cite works that plausibly support pessimism about current methods, regardless of whether the works in question (or their authors) actually claim that current methods will not lead to human-level artificial intelligence. \nWe do not include arguments that serve primarily as undercutting defeaters of positive arguments that current methods will lead to human-level intelligence. For example, we do not include arguments that recent progress in machine learning has been overstated.\nThese arguments might overlap in various ways, depending on how one understands them. For example, some of the challenges for current methods might be special instances of more general challenges. 
\nList of arguments\nInside view arguments\nThese arguments are ‘inside view’ in that they look at the specifics of current methods.\nInnate knowledge: Intelligence relies on prior knowledge which it is currently not feasible to embed via learning techniques, recapitulate via artificial evolution, or hand-specify. — Marcus (2018)6\nData hunger: Training a system to human level using current methods will require more data than we will be able to generate or acquire. — Marcus (2018)7\nCapacities\nSome researchers claim that there are capacities which are required for human-level intelligence, but difficult or impossible to engineer with current methods.8 Some commonly-cited capacities are: \nCausal models: Building causal models of the world that are rich, flexible, and explanatory — Lake et al. (2016)9, Marcus (2018)10, Pearl (2018)11\nCompositionality: Exploiting systematic, compositional relations between entities of meaning, both linguistic and conceptual — Fodor and Pylyshyn (1988)12, Marcus (2001)13, Lake and Baroni (2017)14\nSymbolic rules: Learning abstract rules rather than extracting statistical patterns — Marcus (2018)15\nHierarchical structure: Dealing with hierarchical structure, e.g. that of language — Marcus (2018)16\nTransfer learning: Learning lessons from one task that transfer to other tasks that are similar, or that differ in systematic ways — Marcus (2018)17, Lake et al. (2016)18\nCommon sense understanding: Using common sense to understand language and reason about new situations — Brooks (2019)19, Marcus and Davis (2015)20\nOutside view arguments\nThese arguments are ‘outside view’ in that they look at “a class of cases chosen to be similar in relevant respects”21 to current artificial intelligence research, without looking at the specifics of current methods.\nLack of progress: There are many tasks specified several decades ago that have not been solved, e.g. effectively manipulating a robot arm, open-ended question-answering. — Brooks (2018)22, Jordan (2018)23\nPast predictions: Past researchers have incorrectly predicted that we would get to human-level AI with then-current methods. — Chalmers (2010)24\nOther fields: Several fields have taken centuries or more to crack; AI could well be one of them. — Brooks (2018)25\nContributions\nRobert Long and Asya Bergal contributed research and writing.\nNotes\nFeatured image from www.extremetech.com.", "url": "https://aiimpacts.org/evidence-against-current-methods-leading-to-human-level-artificial-intelligence/", "title": "Evidence against current methods leading to human level artificial intelligence", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-08-13T00:55:13+00:00", "paged_url": "https://aiimpacts.org/feed?paged=11", "authors": ["Asya Bergal"], "id": "2f66a830e988871b55febcced921d81b", "summary": []} {"text": "Historic trends in land speed records\n\nLand speed records did not see any greater-than-10-year discontinuities relative to linear progress across all records.
Considered as several distinct linear trends it saw discontinuities of 12, 13, 25, and 13 years, the first two corresponding to early (but not first) jet-propelled vehicles.\nThe first jet-propelled vehicle just predated a marked change in the rate of progress of land speed records, from a recent 1.8 mph / year to 164 mph / year.\nDetails\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nBackground\nAccording to Wikipedia, the land speed record is “the highest speed achieved by a person using a vehicle on land.”1 Wheel-driven cars, which supply power to their axles, held the records for land speed record through 1963, when the first turbojet powered vehicles arrived on the scene. No wheel-driven car has held the record since 1964.2\nFigure 1: Three record-setting vehicles: Sunbeam, Sunbeam Blue Bird, and Blue Bird3\nTrends\nLand speed records\nData\nWe took data from Wikipedia’s list of land speed records,4 which we have not verified, and added it to this spreadsheet. See Figure 2 below.\nFigure 2: Historic land speed records in mph over time. Speeds on the left are an average of the record set in mph over 1 km and over 1 mile. The red dot represents the first record in a cluster that was from a jet propelled vehicle. The discontinuities of more than ten years are the third and fourth turbojet points, and the last two points.\nDiscontinuity measurement\nIf we treat the data as a linear trend across all time,5 then the land speed record did not contain any greater than 10-year discontinuities. \nHowever we divide the data into several linear trends.6 Extrapolating based on these trends, there were four discontinuities of sizes 12, 13, 25, and 13 years, produced by different turbojet-powered vehicles.7In addition to the size of these discontinuities in years, we have tabulated a number of other potentially relevant metrics here.8\nChanges in the rate of progress\nThere are several marked changes in the rate of progress in this history. The first two discontinuities are near the start of a sharp change, that seemed to come from the introduction of jet-propulsion (though note that the first jet-propelled vehicle in the trend is neither discontinuous with the previous trend, nor seemingly within the period of faster growth).\nIf we look at the rates of progress in the stretches directly before the second jet propelled vehicle in 1964, and the stretch directly after that through 1965, the rate of progress increases from 1.8 mph / year to 164 mph / year.9\nNotes", "url": "https://aiimpacts.org/historic-trends-in-land-speed-records/", "title": "Historic trends in land speed records", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-07-17T23:10:07+00:00", "paged_url": "https://aiimpacts.org/feed?paged=11", "authors": ["Asya Bergal"], "id": "f544212c3563d4377d1c46a97f51b723", "summary": []} {"text": "Methodology for discontinuous progress investigation\n\nAI Impacts’ discontinuous progress investigation was conducted according to methodology outlined on this page.\nDetails\nContributions to the discontinuous progress investigation were made over at least 2015-2019, by a number of different people, and methods have varied somewhat. In 2019 we attempted to make methods across the full collection of case studies more consistent. 
The following is a description of methodology as of December 2019.\nOverview\nTo learn about the prevalence and nature of discontinuities in technological progress1, we:\nSearched for potential examples of discontinuous progress (e.g. ‘Eli Whitney’s cotton gin’). We collected around ninety suggestions of technological change which might have been discontinuous, from various people. Some of these pointed to particular technologies, and others to trends.2 We added further examples as they arose when we were already working on later steps. A list of suggested examples not ultimately included is available here.\nChose specific metrics related to some of these potential examples (e.g. ‘cotton ginned per person per day’, ‘value of cotton ginned per cost’) and found historic data on progress on those metrics (usually in conjunction). We took cases one by one and searched for data on relevant metrics in their vicinity. For instance, if we were told that fishing hooks became radically stronger in 1997, we might look for data on the strength of fishing hooks over time, but also for the cost of fishing hooks per year, or how many fish could be caught by a single fishing hook, because these are measures of natural interest that we might expect to be affected by a change in fishing hook strength. Often we ended up collecting data on several related trends. This was generally fairly dependent on what data could be found. Many suggestions are not included in the investigation so far because we have not found relevant data. Though sometimes we proceeded with quite minimal data, if it was possible to at least assess a single development’s likelihood of having been discontinuous.\nDefined a ‘rate of past progress’ throughout each historic dataset. At each datapoint in a trend after the first one, we defined a ‘previous rate of progress’. This was generally either linear or exponential, and was the average rate of progress between the previous datapoint and some earlier datapoint, though not necessarily the first. For instance, if a trend was basically flat from 1900 until 1967, then became steep, then in defining the previous rate of progress for the 1992 datapoint, we may decide to call this linear progress since 1967, rather than say exponential progress since 1900.\nMeasured the discontinuity at each datapoint. We did this by comparing the progress at the point to the expected progress at that date based on the last datapoint and the rate of past progress. For instance, if the last datapoint five years ago was 600 units, and progress had been going at two units per year, and now a development took it to 800 units, we would calculate 800 units – 600 units = 200 units of progress = 100 years of progress in 5 years, so a 95 year discontinuity.\nNoted any discontinuities of more than ten years (‘moderate discontinuities’), and more than one hundred years (‘large discontinuities’).\nNoted anything interesting about the circumstances of each discontinuity (e.g. the type of metric it was in, the events that appeared to lead to the discontinuity, the patterns of progress around it.)\nChoosing areas\nWe collected around ninety suggestions of technological change which might have been discontinuous. Many of these were offered to us in response to a Facebook question, a Quora question, personal communications, and a bounty posted on this website. We obtained some by searching for abrupt graphs in google images, and noting their subject matter. We found further contenders in the process of investigating others.
Some of these are particular technologies, and others are trends.3\nWe still have around fifty suggestions for trends that may have been discontinuities that we have not looked into, or have not finished looking into. \nChoosing metrics\nFor any area of technological activity, there are many specific metrics one could measure progress on. For instance consider ginning cotton (that is, taking the seeds out of it so that the fibers may be used for fabric). The development of new cotton gins might be expected to produce progress in all of the following metrics:\nCotton ginnable per minute under perfect laboratory conditions\nCotton ginned per day by users\nCotton ginned per worker per day by users\nQuality-adjusted cotton ginned per quality-adjusted worker per day\nCost to produce $1 of ginned cotton\nNumber of worker injuries stemming from cotton ginning\nPrevalence of cotton gins\nValue of cotton\n(These are still not entirely specific—in order to actually measure one, you would need to also for instance specify how the information would reach you. For instance, “cotton ginned per day by users, as claimed in a source findable by us within one day of searching online”.)\nWe chose both general areas to investigate, and particular metrics, according to:\nApparent likelihood of containing discontinuous progress (e.g. because it was suggested to us in a bounty submission4, suggested by readers and friends, or noticed through our own understanding).\nEase of collecting clear data (e.g. because someone pointed us to a dataset, or because we could find one easily). We often began investigating a metric and then set it aside to potentially finish later, or gave up.\nNot seeming trivially likely to contain discontinuities for uninteresting reasons. For instance we expect the following to have a high number of discontinuities, which do not seem profitable to individually investigate:\nobscure metrics constructed to contain a discontinuity (e.g. average weekly rate of seltzer delivery to Katja’s street from a particular grocery store over the period during which Katja’s household discovered that that grocery store had the cheapest seltzer)\nmetrics very far from anyone’s concern (e.g. number of live fish in Times Square)\nmetrics that are very close to metrics we already know contain discontinuities (e.g. if explosive power per gram of material sees a large discontinuity, then probably explosive power per gram of material divided by people needed to detonate the bomb would also see a large discontinuity.)\nOur goal with the project was to understand roughly how easy it is to find large discontinuities, and to learn about the situations in which they tend to arise, rather than to clearly assess the frequency of discontinuities within a well-specified reference class of metrics (which would have been hard, for instance because good data is rarely available). Thus we did not follow a formal procedure for selecting case studies.
One important feature of the set of case studies and metrics we have is that they are likely to be heavily skewed in favor of having more large discontinuities, since we were explicitly trying to select discontinuous technologies and metrics.\nData collection\nMost data was either from a particular dataset that we found in one place, or was gathered by AI Impacts researchers.\nWhen we gathered data ourselves, we generally searched for sources online until we felt that we had found most of what was readily available, or had at least investigated thoroughly the periods relevant to whether there were discontinuities. For instance, it is more important to know about the trend just prior to an apparent discontinuity than it is to know about the trend between two known records, where it is clear that little total progress has taken place.\nIn general, we report the maximal figures that we are confident of; i.e. we report the best known thing at each date, not the best possible thing at that date. So if in 1909 a thing was 10-12, we report 10, though we may note if we think 12 is likely and it makes a difference to the point just after. If all we know is that progress was made between 2010 and 2015, we report it in 2015.\nDiscontinuity calculation\nWe measure discontinuities in terms of how many years it would have taken to see the same amount of progress, if the previous trend had continued. \nTo do this, we:\nDecide which points will be considered as potential discontinuities\nDecide what we think the previous trend was for each of those points\nDetermine the shape of the previous curve\nEstimate the growth rate of that curve\nCalculate how many years the previous trend would need to have continued to see as much progress as the new point represents\nReport as ‘discontinuities’ all points that represented more than ten years of progress at previous rates\n(A short illustrative code sketch of this calculation appears below.)\nRequirements for measuring discontinuities\nSometimes we exclude points from being considered as potential discontinuities, though include them to help establish the trend. This is usually because:\nWe have fewer than two earlier points, so no prior trend to compare them to\nWe expect that we are missing prior data, so even if they were to look discontinuous, this would be uninformative.\nThe value of the metric at the point is too ambiguous\nSometimes when we lack information we still reason about whether a point is a discontinuity. For instance, we think the Great Eastern very likely represents a discontinuity, even though we don’t have an extensive trend for ship size, because we know that a recent Royal Navy ship was the largest ship in the world, and we know the trend for Royal Navy ship size, which the trend for overall ship size cannot ever go below. So we can reason that the recent trend for ship size cannot be any steeper than that of Royal Navy ship size, and we know that at that rate, the Great Eastern represented a discontinuity.\nCalculating previous rates of progress\nTime period selection and trend fitting\nAs history progresses, a best guess about what the trend so far is can change. The best guess trend might change apparent shape (e.g. go from seeming linear to seeming exponential) or change apparent slope (e.g. what seemed like a steeper slope looks after a few slow years like noise in a flatter slope) or change its apparent relevant period (e.g. after multiple years of surprisingly fast progress, you may decide to treat this as a new faster growth mode, and expect future progress accordingly).
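As a concrete illustration of the ‘Discontinuity calculation’ steps above, here is a minimal sketch in Python (our own illustrative code, not the spreadsheet formulas actually used in the case studies):

```python
import math

def discontinuity_size_years(prior_dates, prior_values, new_date, new_value, shape="linear"):
    """Years of progress the new point represents at the prior average rate,
    minus the years that actually passed. Values above ten indicate a
    'moderate' discontinuity; above one hundred, a 'large' one."""
    if shape == "exponential":
        prior_values = [math.log(v) for v in prior_values]
        new_value = math.log(new_value)
    # Average progress per year between the first and last points of the prior trend
    rate = (prior_values[-1] - prior_values[0]) / (prior_dates[-1] - prior_dates[0])
    progress_since_last = new_value - prior_values[-1]
    return progress_since_last / rate - (new_date - prior_dates[-1])

# Worked example from the overview: a trend of 2 units/year, then a jump from
# 600 to 800 units in 5 years -> 200 / 2 - 5 = 95 years of discontinuity.
print(discontinuity_size_years([1900, 1995], [410, 600], 2000, 800))  # 95.0
```

Treating the prior trend as exponential simply applies the same arithmetic to logged values, matching the linear-or-exponential trend shapes discussed in this section.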
\nWe generally reassess the best guess trend so far for each datapoint, though this usually only changes occasionally within a dataset.\nWe have based this on researcher judgments of fit, which have generally had the following characteristics:\nTrends are expected to be linear or exponential unless they are very clearly something else. We don’t tend to search for better-fitting curves.\nIf curves are not upward curving, we tend to treat them as linear.\nIn ambiguous cases, we lean toward treating curves as exponential.\nWhen there appears to be a new, faster growth mode, we generally recognize this and start a new trend at the third point (i.e. if there has been one discontinuity we don’t immediately treat the best guess for future progress as much faster, but after two in a row, we do).\nWe color the growth rate column in the spreadsheets according to periods where the growth rate is calculated as having the same overall shape and same starting year (though within those periods, the calculated growth rate changes as new data points are added to the trend).\nTrend calculation\nWe calculate the rate of past progress as the average progress between the first and last datapoints in a subset of data, rather than taking a line of best fit. (This being a reasonable proxy for expected annual progress is established via trend selection described in the last section.)\nDiscontinuity measurement\nFor each point, we calculate how much progress it represents since the last point, and how many years of progress that is according to the past trend, then subtract the number of years that actually passed, for the discontinuity size.\nThis means that if no progress is seen for a hundred years, and then all of the progress expected in that time occurs at once, this does not count as a discontinuity. \nReporting discontinuities\nWe report discontinuities as ‘substantial’ if they are at least ten years of progress at once, and ‘large’ if they are at least one hundred years of progress at once.\n‘Robust’ discontinuities\nMany developments classified as discontinuities by the above methods are ahead of a best guess trend, but unsurprising because the data should have left much uncertainty about the best trend. For instance, if the data does not fit a consistent curve well, or is very sparse, one should be less surprised if a new point fails to line up with any particular line through it.\nIn this project we are more interested in clear departures from established trends than in noisy or difficult to extrapolate trends, so a researcher judged each discontinuity as a clear divergence from an established trend or not. We call discontinuities judged to clearly involve a departure from an established trend ‘robust discontinuities’.\n\nSee the project’s main page for authorship and acknowledgements.", "url": "https://aiimpacts.org/methodology-for-discontinuity-investigation/", "title": "Methodology for discontinuous progress investigation", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-06-05T19:55:08+00:00", "paged_url": "https://aiimpacts.org/feed?paged=11", "authors": ["Asya Bergal"], "id": "b70f0c0c018737ecedc098e5b556faf4", "summary": []} {"text": "Historic trends in particle accelerator performance\n\nPublished Feb 7 2020\nNone of particle energy, center-of-mass energy, or Lorentz factor achievable by particle accelerators appears to have undergone a discontinuity of more than ten years of progress at previous rates.
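For reference, the three metrics on this page are related by standard relativistic kinematics; the following is a textbook-level sketch in our own notation (the page's actual conversion from the Livingston-chart data is described under ‘Trends’ below). For a beam particle of mass m and total energy E striking a stationary proton of mass m_p:

$$E_{\mathrm{cm}} = \sqrt{m^2 c^4 + m_p^2 c^4 + 2\,E\,m_p c^2}, \qquad \gamma = \frac{E}{m c^2}.$$

For a symmetric head-on collider of identical particles, the center-of-mass energy is simply twice the per-beam particle energy.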
\nDetails\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nBackground\nParticle accelerators propel charged particles at high speeds, typically so that experiments can be conducted on them.1\nFermi National Laboratory particle accelerator2\nTrends\nOur understanding is that key performance metrics for particle accelerators include how much kinetic energy they can generate in particles, how much center-of-mass energy they can create in collisions between particles, and the Lorentz factor they can achieve. \n‘Livingston charts’ show progress in particle accelerator efficacy over time, and seem to be common. We took data from a relatively recent and populated one in a slide deck from a Cornell accelerator physics course (see slide 45),3 and extracted data from it, shown in this spreadsheet (see columns ‘year’ and ‘eV’ in tabs ‘Hoffstaetter Hadrons’ and ‘Hoffstaetter Leptons’ for original data). \nThe standard performance metric in a Livingston chart is ‘energy needed for a particle to hit a stationary proton with the same center of mass energy as the actual collisions in the accelerator’. We are uncertain why this metric is used, though it does allow for comparisons to earlier technology in a way that CM energy does not. We used a Lorentz transform to obtain particle energy, center-of-mass energy, and Lorentz factors from the Livingston chart data.4\nParticle energy\nData\n Figure 1 shows our data on particle energy over time, also available in our spreadsheet, tab ‘Particle energy’.\nFigure 1: Particle energy in eV over time\nDiscontinuity measurement\nWe chose to model the data as a single exponential trend.5 There are no greater than 10-year discontinuities in particle energy at previous rates within this trend.6\nCenter-of-mass energy\nData\nFigure 2 shows our data on center-of-mass energy over time, also available in this spreadsheet, tab ‘CM energy’.\nFigure 2: Center-of-mass energy in eV over time\nDiscontinuity measurement\nWe treated the data as exponential.7 There are no greater than 10-year discontinuities in center-of-mass energy at previous rates within this trend.8\nLorentz factor\nAccording to Wikipedia, ‘The Lorentz factor or Lorentz term is the factor by which time, length, and relativistic mass change for an object while that object is moving.’ \nData\nThis spreadsheet, tab ‘Lorentz factor’, shows our calculated data for progress on Lorentz factors attained over time.\n Figure 3: Lorentz factor (gamma) over time. \nDiscontinuity measurement\nWe treated the data as one exponential trend.9 There were no greater than 10-year discontinuities at previous rates within this trend.10\nPrimary author: Rick Korzekwa\nNotes\n", "url": "https://aiimpacts.org/particle-accelerator-performance-progress/", "title": "Historic trends in particle accelerator performance", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-03-27T00:07:42+00:00", "paged_url": "https://aiimpacts.org/feed?paged=11", "authors": ["Katja Grace"], "id": "736822e9723fd451bd60718a6aba54ea", "summary": []} {"text": "AI conference attendance\n\nSix of the largest seven AI conferences hosted a total of 27,396 attendees in 2018. Attendance at these conferences has grown by an average of 21% per year over 2011-2018. These six conferences host around six times as many attendees as six smaller AI conferences.\nDetails\nData from AI Index. The  conference IROS is excluded because AI Index did not have data for them in 2018. 
IROS had 2,678 attendees in 2017, so this does not dramatically change the graph.\nArtificial Intelligence Index reports on this, from data they collected from conferences directly.1 We extended their spreadsheet to measure precise growth rates, and visualize the data differently (data here). They are missing 2018 data for IROS, so while we include it in Figure 1, we excluded it from the growth rate calculations.\nFrom their spreadsheet we calculate:\nLarge conferences (>2,000 participants in 2018), total participants: 27,396\nSmall conferences (<2,000 participants in 2018), total participants: 4,754\nThis means large conferences have roughly 5.8 times as many participants as smaller conferences.\nAccording to this data, total large conference participation grew by a factor of 3.76 between 2011 and 2018, which is equivalent to a factor of 1.21 per year during that period.\nReferences", "url": "https://aiimpacts.org/ai-conference-attendance/", "title": "AI conference attendance", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-03-07T00:44:04+00:00", "paged_url": "https://aiimpacts.org/feed?paged=12", "authors": ["Katja Grace"], "id": "eb73d72e83fd8bbddce4f4458593f6ff", "summary": []} {"text": "Historical economic growth trends\n\nAn analysis of historical growth supports the possibility of radical increases in growth rate. Naive extrapolation of long-term trends would suggest massive increases in growth rate over the coming century, although growth over the last half-century has lagged very significantly behind these long-term trends.\nSupport\nBradford DeLong has published estimates for historical world GDP, piecing together data on recent GDP, historical population estimates, and crude estimates for historical per capita GDP. We have not analyzed these estimates in depth, but they appear to be plausible. (Robin Hanson has expressed complaints with the population estimates from before 10,000 BC, but our overall conclusions do not seem to be sensitive to these estimates.)\nThe raw data produced by DeLong, together with log-scale graphs of that data, are available here (augmented with one data point for 2013 found in the CIA world factbook, population data from the US census bureau via Wikipedia, and the website usinflationcalculator). Note that brief periods of negative growth have not been indicated, and that we have used what DeLong refers to as “ex-nordhaus” data, neglecting quality-of-life adjustments arising from improvements in the diversity of goods.\nFigure 1: The relationship of GWP and doubling time, historically. Note that the x-axis is log(GWP), not time—date lines mark GWP at those dates.\nThe data suggest that (proportional) rates of economic and population growth increase roughly linearly with the size of the world economy and population. Certainly, a constant rate of growth is a poor model for the data, as growth rates range over 5 orders of magnitude; rather, the data appear to be consistent with substantially superlinear returns to scale, such that doubling the size of the world multiplies the absolute rate of growth by 2^1.5 – 2^1.75 (as opposed to 2, which would be expected by exponential growth).\nExtrapolating this model implies that at a time when the economy is growing 1% per year, growth will diverge to infinity after about 200 years.
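A back-of-the-envelope version of that extrapolation (our own sketch, writing α for the fitted returns-to-scale exponent, so that doubling the economy multiplies absolute growth by 2^α): if gross world product Y grows as dY/dt = kY^α with α > 1 and current proportional growth rate g_0, then

$$Y(t) = Y_0\bigl(1 - (\alpha - 1)\,g_0\,t\bigr)^{-1/(\alpha-1)}, \qquad t^{*} = \frac{1}{(\alpha - 1)\,g_0},$$

so with g_0 = 1% per year, α = 1.5 gives t* = 1/(0.5 × 0.01) = 200 years, and α = 1.75 gives roughly 133 years.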
This outcome of course seems impossible, but this does suggest that the historical record is consistent with relatively large changes in growth rate, and in fact rates of economic growth experienced today are radically larger (even proportionally) than those experienced prior to the industrial revolution.\nFrom around 0 to 500 CE, the predicted divergence occurs between 1700 and 2000, from 500 to 1000 CE it occurs around 2100, and from 1300 to 1950 it occurred in the later part of the 20th century.\nIn fact growth has fallen substantially behind this trend over the course of the 20th century; growth has continued but the acceleration of growth has slowed substantially (indeed reversing itself over the last 50 years). Moreover, it is unclear to us whether historically increasing returns to scale reflect returns to economic scale, or population scale, and if the latter then a profound slowdown seems likely–population growth rates seem to robustly fall at very high levels of development, and at any rate doubling times much shorter than 10-20 years would require radical changes in fertility patterns.1 That said, any such biologically contingent dynamics might be modified in a world where machine intelligence can substitute for human labor. Our impression is that this slowdown has been the subject of extensive inquiry by economists, but we have not reviewed this literature.\nImplications\nOverall, it seems unclear how much weight one should place on historical trends in predicting the future, and it seems unclear whether we should focus on very long-term trends of accelerating growth or short-term trends of stagnant growth (at least as measured by GDP). However, at a minimum it seems that extrapolation from history is consistent with extreme increases in the growth rate.", "url": "https://aiimpacts.org/historical-growth-trends/", "title": "Historical economic growth trends", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-03-06T08:06:27+00:00", "paged_url": "https://aiimpacts.org/feed?paged=12", "authors": ["Katja Grace"], "id": "66c5a0badd5c25ba8a4b544488d4fb43", "summary": []} {"text": "Primates vs birds: Is one brain architecture better than the other?\n\nBy Tegan McCaslin, 28 February 2019\nThe boring answer to that question is, “Yes, birds.” But that’s only because birds can pack more neurons into a walnut-sized brain than a monkey with a brain four times that size. So let’s forget about brain volume for a second and ask the really interesting question: neuron per neuron, who’s coming out ahead?\nYou might wonder why I picked birds and primates instead of, say, dogs and cats, or mice and elephants, or any other pair of distinct animals. But check out this mouse brain:\n[By Mamunur Rashid – Own work, CC BY 4.0]\nSee how, on the outside of the lobe (the part closer to the upper righthand corner), you can pick out a series of stripes in a neat little row? Those stripes are the six layers of the neocortex, a specifically mammalian invention—all mammals have it, and no one else does. People have been pointing to this structure to explain why we’re so much better than fish since the scala naturae fell out of favor. \nAnd that would be a pretty convenient story if birds hadn’t come along and messed the whole picture up. If you look at a similar cross section of a bird’s brain, it kind of just looks like a structureless blob. 
For a long time, comparative neuroanatomists thought birds must be at a more primitive stage of brain evolution, with no cortex but huge basal ganglia (the bit that we have sitting under our own fancy cortex). But we’ve since realized that this “lower” structure is actually a totally different, independently-evolved form of cortex, which seems to control all the same areas of behavior that mammalian cortex does. In fact, birds have substantially more of their brain neurons concentrated in their cortices than we mammals have in ours. \nAlright, so it’s not that surprising that another form of cortical tissue exists in nature. But could it really work as well as ours? Surprisingly, no one has really tried to figure this out before.\nIf, for instance, primates were head and shoulders above birds, that might mean that intelligent brains aren’t just energetically expensive (in terms of the energy required for developing and operating neurons), they’re also exceptionally tricky to get right from a design standpoint. Of course, if bird and primate architectures worked equally well, that doesn’t mean brains are easy to get right–it would just mean that evolution happened to stumble into two independent solutions around 100 million years ago. Still, that would imply substantially more flexibility in neural tissue architectures than the world in which one tissue architecture outstripped all others.\nAnswering the question of birds vs. primates conclusively would be an enormous undertaking (and to be honest, answering it inconclusively was a pretty big pain already), so instead I focused on a very small sample of species in a narrow range of brain sizes and tried to get a really good sense of how smart those animals in particular were, relative to one another. I also got 80+ other people (non-experts) to look at the behavioral repertoire of these animals and rank how cognitively demanding they sounded. \nWith my methodology of just digging through all of the behavioral literature I could find on these species, full and representative coverage of their entire behavioral repertoire was a major challenge, and I think it fell well short of adequate in some categories. This can be a big problem if an animal only displays its full cognitive capacities in one or a few domains, and worse, you might not even know which those are. I think this wasn’t as big an issue with the species I studied as it could have been, since we have pretty good priors with respect to what selective pressures drove cognitive development in the smartest animals (like primates and parrots). Plus, scientists are much more likely to study the most complex and interesting behaviors, and those are very often the ones that display the most intelligence.\nOne of the behaviors scientists are really keen on is tool use. Our survey participants seemed to like it too, because they rated its importance higher than any other category, and it ended up being the most discriminatory behavior, too–neither the small-brained monkey nor the small-brained parrot had recorded examples of tool use in the wild, while both of the larger-brained animals did.\nIn the end, people didn’t seem to think the two primate species I included acted smarter than the two bird species or vice versa, but did think the larger-brained animals acted smarter than the smaller-brained animals. 
The fact that this surveying method both confirmed my intuitions and didn’t seem totally overwhelmed by noise kind of impressed me, because who knew you could just ask a bunch of random people to look at some animal behaviors and have them kind of agree on what the smartest were? That said, we didn’t validate it against anything, and even though we have reasons to suspect this method works as intended (see the full article), how well and whether this was a good implementation aren’t clear.\nSo this is all pretty cool, but even if we could prove definitively that macaws and squirrel monkeys are smarter than grey parrots and owl monkeys, it’s not a knock-down argument for architecture space being chock full of feasible designs, or even for birds and primates having identical per-neuron cognitive capacity. It’s mostly just a demonstration that the old self-flattering dogma of primate exceptionalism doesn’t really hold water. But it also points to an interesting trend: instead of trying to tweak a bunch of parameters in brains to squeeze the best possible performance out of a given size, evolution seems to have gotten a lot of mileage out of just throwing more neurons at the problem.\nThere’s a lot more dirt here, in the full analysis.", "url": "https://aiimpacts.org/primates-vs-birds-is-one-brain-architecture-better-than-the-other/", "title": "Primates vs birds: Is one brain architecture better than the other?", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-03-01T01:26:24+00:00", "paged_url": "https://aiimpacts.org/feed?paged=12", "authors": ["Tegan McCaslin"], "id": "37444674de1503d6aaded739221b50ec", "summary": []} {"text": "Investigation into the relationship between neuron count and intelligence across differing cortical architectures\n\nSurvey participants (n = 83) were given anonymized descriptions of behavior in the wild for four animals: one bird species and one primate species with a similar neuron count, and one bird species and one primate species with twice as many neurons. Participants judged the two large-brained animals to display more intelligent behavior than the two smaller-brained animals on net, due to the large-brained animals’ substantial tool use being seen as a strong sign of intelligence, next to the small-brained animals absence of tool use. Other results were mixed. Participants did not judge either primates or birds to display more intelligent behavior.\n\n1. Background\nThe existence of a correlation between brain size and intelligence across animal species is well-known (Roth & Dicke, 2005). Less clear is the extent to which brain size–in particular, neuron count–is responsible for differences in cognitive abilities between species. Here, we investigate one possible factor, the tissue organization of the cerebral cortex, by comparing cognitive abilities of animals with differing cortical architectures.\nPrimates make a natural target for comparison, since their intelligence has already been extensively studied. Additionally, comparing primate cognitive abilities to taxa that are farther from the human line may allow us to either confirm or deny the existence of a hard step for the evolvability of intelligence between primates and their last common ancestor with other large-brained animals (Shulman & Bostrom, 2012)1. 
Although some informal comparisons with other animals have been made, so far there have been few attempts to make detailed or quantitative comparisons between primate and non-primate intelligence.\nThere is only one extant alternative to primate cerebral architecture which has scaled to a similar size in terms of neuron count, that of birds, a lineage which diverged from our last common ancestor over 300 million years ago (for neuron counts of species across several lineages, see here). Avian cortical architecture appears strikingly different from primates and indeed all mammals (see 1.1). However, compared to primates, radically less research effort has gone into investigating bird intelligence in a way that would enable comparison with other species. Therefore, in addition to theoretical difficulties (see 1.3), we also face the practical difficulty of comparing bird and primate intelligence without the aid of a rich psychometric literature, as exists for humans. Despite this difficulty, we believe that the comparison is nonetheless worthwhile, as it could give us insight into the flexibility of possible solutions to the problem of intelligence, given “hardware” of sufficient size.\nFor instance, if primates performed especially well relative to their absolute number of brain neurons or brain energy budget, this might indicate that primate cortical architecture (or some other systematic difference between primate and avian brains) was especially well-suited to producing intelligence. Furthermore, it would suggest that the evolution of biological intelligence faced design-related bottlenecks moreso than energy- or “hardware” bottlenecks. Likewise, if bird and primate architectures perform similarly despite different organization, this at the very least would indicate that the space of “wetware” architectures that lent themselves to the successful implementation of intelligence was larger than one. More speculatively, it could be taken as a sign that working brain architectures are fairly easy to come by, given a sufficient number of neurons and/or a sufficiently high brain energy budget.\n1.1 Mammalian vs avian brains: Similarities and differences\nFigure 1. Image by J. Arthur Thomson \n \nThe usefulness of the comparison between birds and primates relies on the degree to which the same resources (a particular quantity of brain neurons) are arranged differently. At a glance, the majority of tissue in the avian and primate brains appears to be quite different, as the structure which evolved after the divergence point 300 million years ago–the cerebral cortex–occupies ~80% of the volume of both avian and primate brains. However, there is nonetheless a great deal of overlap in non-cerebral structures, and there is even reason to believe that the cerebral cortex has more commonality between bird and primate than might naively be expected (Kaas, 2017).\nIn the central nervous system, the common structures shared by mammals and birds include the spinal cord, the hindbrain, and the midbrain. These regions are primarily responsible for non-cognitive processes such as autonomic, sensorimotor, and circadian functions. Although each of these structures underwent changes to accommodate differences in body plan, environment, and niche, they are overall quite similar. Additionally, they have an unambiguously homologous (that is, similar by virtue of common descent) relationship in birds and mammals (Güntürkün, Stacho, & Strockens, 2017). 
\nAtop the midbrain sits the forebrain, in particular the telencephalon, which is evolution’s most recent addition and the region which displays the most novel properties. The lower portion of the forebrain (the basal ganglia) is likely homologous between birds and mammals, but beyond this point the architectures diverge markedly. This uppermost layer is known as the pallium, or more commonly as the cerebral cortex in mammals.\nMost of the mammalian cerebral cortex can be classed as neocortex. Neocortex spans six horizontally-oriented layers, with neurons organized into vertical columns, which may both interact with adjacent columns, and also send efferents (outgoing fibers) to distant columns or even locations farther afield in the nervous system. (However, some areas of mammalian cerebral cortex, such as parts of the hippocampus, have only three or four cell layers.) In contrast, the analogue to our neocortex in birds–the pallium–contains no layers or columns, and neurons are instead organized into nuclei. The extent to which the neocortex and the avian pallium are elaborations on pre-existing structures (and therefore homologous), versus de novo inventions of early mammals/birds, is still debated (Puelles et al., 2017). However, it is interesting to note that the most abundant type of neuron in mammalian cerebral cortex, the excitatory pyramidal cell, is also common in the avian pallium, having originated in an early vertebrate ancestor (Naumann & Laurent, 2017).\nThe most immediately obvious difference between mammalian brains and avian brains is their size. For an animal adapted for flight, bulk would have been particularly costly, and this pressure probably forced neurons to become smaller and more tightly packed, resulting in a small brain dense with neurons (Olkowicz et al., 2016). However, neurons in mammal brains are both large relative to comparably sized bird brains, and also scale with the size of the brain. The only mammalian order exempt from this neuron scaling rule is primates (Herculano-Houzel, Collins, Wong, & Kaas, 2007). Therefore, although they still possess larger neurons than those of birds, primates were able to increase neuron count relatively efficiently through brain size increases, and are less constrained than birds with regard to size and weight limits.\nAlthough it was reasoned that larger neurons would be more energetically expensive due to the maintenance cost of neurons even at rest, this has not been borne out empirically. At least in mammals, the per-neuron energy budget appears to be relatively constant within brain structures, and does not vary as a function of cell size (Herculano-Houzel, 2011). This finding has not been verified in birds, however the commonality of cell types across mammalian and avian brains suggests that it is likely true for birds as well. Interestingly, neuronal energy budget appears to differ substantially between brain structures: energy consumption by cerebral neurons, which are predominantly pyramidal cells, is an order of magnitude higher than that of cerebellar neurons, which are predominantly small granule cells.\nThis may have functional relevance for the final notable difference between primate and avian brains, the relative size of certain brain regions. While both bird and mammal brains are dominated volumetrically by the telencephalon (including the cerebral cortex/pallium), only in birds are the majority of neurons contained within this structure. 
In mammals, the densely-packed cerebellum expanded in tandem with the cerebrum,2 while this structure remained relatively small in birds.\nThis is a topic of some curiosity, since the cerebellum was previously thought to simply control motor processes. The observation that it scaled proportionally to brain size may have contributed to the popularity of the “encephalization quotient”, based on the notion that the amount of brain tissue required to control a body scales with the size of the body. However, more recent findings suggest a broader role for the cerebellum in humans, including in cognitive functions. If the cerebellum made a substantial contribution to cognition, it would call to mind several possible scenarios.\nIt’s possible that after it was no longer useful to improve motor control, developmental or other constraints made changing the brain’s scaling rules to de-emphasize the cerebellum costly. Instead of reassigning the brain’s volume budget, perhaps cerebellar tissue was repurposed to serve cognitive functions which had been pushed out of the cerebrum, a structure which had already become crowded enough to resort to lateralizing functions (relegating certain domains, like language, to one side of the brain exclusively, in contrast to the default in animals of bilateral function). Since the cerebrum and cerebellum are extremely cytoarchitecturally dissimilar, sharing neither cell types nor organization, this would be evidence of generality of function across different neural tissue types. Indeed, it would be more impressive than if bird and mammal cortex were functionally equivalent, since a mammal’s cerebellum bears far less resemblance to its neocortex than its neocortex does to a bird’s pallium.\nAlternatively, birds may lack some novel functions which emerged in mammals as the result of the expanding cerebellum. Finally, the most disheartening possibility is that the extra cerebellar tissue in large-brained mammals represents an inferior allocation of brain tissue.\n1.2 Common models of brain-based intelligence differences between species\nHistorically, there was much popular support for the idea that differences in brain size tracked differences in intelligence between species. Several variations on this theme have also built a following in the past century, including encephalization quotient, brain-to-body ratio, and neuron count. These could be called the “More is Better” class of models, where increases in intelligence across species are attributed to greater absolute amounts of brain tissue, neurons, synapses, etc, or to greater amounts relative to some expected amount. \nAlthough among these models the most parsimonious currently appears to be neuron count (see here and Herculano-Houzel 2009), the intuitively appealing “relative size” models–encephalization quotient and brain-to-body ratio–may still have heuristic value in distinguishing between similarly-sized brains, despite lacking mechanistic explanatory power. This is because a relatively large investment in brain tissue compared to body size would imply stronger selection pressure for intelligence. However, in this case, the likely mechanism of the cognitive advantage falls under the next category.\nThe other class of models could be called “Structural Improvements”, where intelligence increases are attributed to improvements in brain architecture. At a gross brain level, the most popular of these models implicates the size of the forebrain, relative to the rest of the brain. 
Other possibilities in this space include tissue-level properties (such as whether cells are arranged into layers or nuclei), as well as much finer cytoarchitectural adjustments, altered developmental processes, functional properties of neurons, and features like gyrification (cortical folding).\nWhile it’s certainly the case that both quantitative and qualitative changes factored into the development of higher intelligence, the degree to which one or the other explains the variance between species is not well understood. This uncertainty is due in part to the difficulty of measuring animal intelligence across a collection of species diverse enough to differ in both quantitative and qualitative brain characteristics. (Additionally, our understanding of qualitative interspecific differences that are less apparent than the architectural differences we focus on here is currently rather poor.) Such a set of animal species would tend to vary not simply in characteristics related to intelligence, but also in body plan, physical abilities, temperament, accessibility for human study, and the evolutionary pressures favoring intelligence in the species.\nThe nature of the intelligence construct adds a further layer of obscurity. While the general factor (g) is well-accepted among intelligence researchers with regard to humans (Carroll, 1997), the body of evidence in non-humans–and especially in non-primates–is small and somewhat conflicting (Burkart, Schubiger, & van Schaik, 2017). Furthermore, it’s likely that assumptions of generality hold less well in animals with low cognitive capacity (for instance, in insects).\n1.3 Previous attempts to measure primate and avian intelligence\nOur knowledge of primate intelligence is primarily informed by a diverse body of laboratory tasks that attempt to measure various aspects of cognition. While any particular task is likely to be a relatively weak signal of overall intelligence on its own, combining this result with the results of dissimilar tasks will tend to improve the measure, as has been found in human intelligence testing. Very few studies have attempted to administer such a battery of intelligence tasks at the level of an individual non-human subject; however, a ‘species-level battery’ may be assembled from the single-task results that do exist. Especially when this ‘species-level battery’ is based on a small number of tests, care must be taken to ensure that the procedures for administering tasks were the same across species. Luckily, the large amount of primate cognition research conducted in the last century allows the construction of a battery according to these criteria. The measurement of primate intelligence is discussed further here.\nIn comparison with primates, the collection of cognitive tests that have been administered to bird species is disappointingly sparse. There are few examples of directly comparable tasks that have been administered to multiple species, preventing the construction of a battery from laboratory tasks. Even rarer are tasks that would enable comparison between primate species and bird species.\nAn alternative methodology that has been validated in primates is based on observations of behavior in the wild. 
Because the cognitive abilities displayed in the laboratory are likely the result of behavioral adaptations to challenging physical or social environments, it stands to reason that certain species-typical behaviors should correlate with the average intelligence of the species; that is, species that act intelligent in the lab should act intelligent in the field. This approach was used by Reader and colleagues (2011), who found that the number of reports citing instances of several types of behavior (eg tool use, social learning) correlated with each other, supporting the existence of a general factor of intelligence in primates. Furthermore, these results correlated with the results of the laboratory test battery discussed above at 0.7.\n2. Estimating animal intelligence by survey: Methods\nRather than conducting a comprehensive behavioral review across many genera, as Reader and colleagues did (see 1.3), we restricted our analysis to a small set of primates and birds which were matched for total neuron count. We then gathered behavioral observations from the academic literature on each species, attempting to draw evidence from all plausibly relevant domains of animal life, and used these to construct a questionnaire for ranking animal intelligence. This was then given to a small, non-random pilot sample, as well as a larger sample of Mechanical Turk workers. In addition to apparent difficulty of behaviors in several behavioral domains, participants were asked to rank the relevance of behavioral domains to intelligence, and this ranking was used to weight the within-domain scores. Where possible, we removed features of descriptions which would have identified an animal as a bird or a primate.\nAlthough far below the standard demanded of well-validated measures of intelligence,3 we believe that the aggregated judgments of survey participants can offer some information about an agent’s intelligence due to the moderate correlation of peer-rated intelligence with measured IQ within humans. For instance, Bailey and Mettetal (1977) found that spouses’ ratings had a correlation of 0.6 with scores on the Otis Quick Scoring Test of Mental Ability, while Borkenau and Liebler (1993) found that acquaintances’ ratings had a correlation of 0.3 with test scores. Most impressively, they also found that strangers shown a short video of a subject reading from a script gave ratings of the subject’s intelligence that correlated at 0.38 with the subject’s actual test scores.\nThe problem of rating human intelligence from impressions is in some ways quite a different one from the rating of an unfamiliar species. One factor that could potentially make judgment of humans easier is that human society rewards intelligence by conferring certain forms of status differentially on those who display greater cognitive ability, in ways that are legible to both close associates (ie spouses) and total strangers. This means that individual raters are already benefiting from the aggregated judgments of many past raters (indeed, these positional signals may constitute the majority of evidence in low information situations like acquaintanceship). Additionally, humans have a natural point of reference for the behavior of other humans, and this familiarity probably allows much more accurate comparisons.\nHowever, judgment of other humans may also suffer from several disadvantages that judgment of nonhuman animals does not. 
Because humans in the same social group often occupy a relatively narrow range of the intelligence distribution, raters are asked to distinguish between differences in behavior that are small in absolute terms. For example, in the studies cited above, samples were drawn from college populations, which are famously range-restricted. Furthermore, raters of humans likely do not have the full range of behavior available to draw evidence from when considering strangers, acquaintances, or even spouses. In contrast, we attempted to capture all potentially relevant behavioral domains in data collection for our survey. Finally, as each other's main social competitors, humans probably have stronger conflicts of interest in evaluating the intelligence of other humans, and thus may be disincentivized to make completely honest judgments.
Overall, we expect our methodology to produce weaker results than what is possible for raters of human subjects, but not radically so. It should be noted that, because of the scarcity of psychometric data for the species studied, we were not able to verify a correlation with other measures of intelligence. However, it would be possible to validate some version of this methodology with species for which psychometric data does exist (see 4.2).
2.1 Study object selection
We chose to study four animals: one larger-brained and one smaller-brained species each of bird and primate. Having already established a strong relationship between brain size and intelligence within architecture types (see here), we varied both architecture type and size so that we could consider the degree to which one architecture type consistently outperformed the other–for instance, if the smaller version of one architecture outperformed both smaller and larger versions of the other architecture, this would more strongly suggest superiority due to structure than would a performance difference in two architectures of similar size.
Since we were limited to only those species in which neuron count is known, and where there is overlap between birds and primates, we had only five primates to choose from, three of which had few instances of behavioral reports (the Northern greater galago, Otolemur garnettii; the common marmoset, Callithrix jacchus; and the gray mouse lemur, Microcebus murinus). 
Of the remaining primates, the squirrel monkey (Saimiri sciureus) was the larger-brained, with 3.2 billion neurons. Only one bird, the blue and yellow macaw (Ara ararauna), was reported as having a similarly large number of neurons, at 3.1 billion. The smaller-brained primate, the owl monkey (Aotus trivirgatus), has less than half this number of neurons, at 1.5 billion, and was matched by both the grey parrot (Psittacus erithacus) at 1.6 billion and a corvid, the rook (Corvus frugilegus), at 1.5 billion. Because of the close evolutionary relationship between the two selected primates (~30 million years divergence time for Saimiri and Aotus, according to TimeTree), we chose the parrots rather than pairing the macaw with the rook, so that the two birds would share a similarly close relationship (~30 million years divergence time for Ara and Psittacus, versus ~80 million years for Ara and Corvus).
It was expected that the factor of two difference in neuron count between the larger- and smaller-brained samples would be substantial enough to provide some signal despite the noisy nature of behavioral data and analysis, without being so enormous as to render the results trivial.
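As a quick back-of-the-envelope check on the size of this gap, the snippet below (not part of the original analysis) computes the neuron-count ratios for the selected genera from the figures quoted above and compares them, on a log scale, to the roughly threefold human–chimpanzee difference invoked in the next paragraph. It assumes only the simple log-scaling relationship supposed there.

```python
import math

# Neuron counts (billions) for the selected genera, as quoted above.
neurons = {
    "Saimiri (squirrel monkey)": 3.2,
    "Ara (blue and yellow macaw)": 3.1,
    "Psittacus (grey parrot)": 1.6,
    "Aotus (owl monkey)": 1.5,
}

# Ratio between the larger- and smaller-brained member of each class.
primate_ratio = neurons["Saimiri (squirrel monkey)"] / neurons["Aotus (owl monkey)"]
bird_ratio = neurons["Ara (blue and yellow macaw)"] / neurons["Psittacus (grey parrot)"]
print(f"primate ratio ~{primate_ratio:.2f}, bird ratio ~{bird_ratio:.2f}")  # ~2.13 and ~1.94

# If intelligence scaled with log(neuron count), as supposed below, the within-sample
# gap would be a fraction of the human-chimpanzee gap (a roughly threefold difference).
for label, ratio in [("primates", primate_ratio), ("birds", bird_ratio)]:
    print(f"{label}: log-gap is {math.log(ratio) / math.log(3):.2f} of the human-chimp log-gap")
```

On these assumptions the within-sample gap comes out at roughly 60–70% of the human–chimpanzee gap, which is the sense in which it is "somewhat smaller" below.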
Supposing the relationship between intelligence and neuron count scaled logarithmically, the difference within our sample would be somewhat smaller than the difference between humans and chimpanzees, who differ by a factor of three. (In absolute terms, the neuron count difference is more comparable to neuron count differences between individual humans.) However, it is worth noting that, in our analysis of primate intelligence from lab tests, a factor of two difference was approximately the lower bound for reliably producing a difference in measured intelligence.
Because the features of a single species are often studied unevenly, we improved our coverage of the behavioral spectrum by broadening data collection to include all species in a genus. This is a common practice in the study of animal behavior, generally poses fewer problems than groupings at higher taxa, and prevented us from having to search multiple species names in cases where these had changed in the last century. Furthermore, although brain sizes varied somewhat within genera, the size distribution of the smaller-brained genera (Aotus and Psittacus) had little to no overlap with that of the larger-brained genera (Saimiri and Ara). Species in each genus with available brain size data are shown in the table below. It is probably the case that not all species listed in the table were represented in our data, and that some species were overrepresented within their genus; however, in many cases the exact species was not specified in the source.

Genus | Species/sample | Brain mass (g)
Aotus | trivirgatus (n = 2) | 15.7
Aotus | trivirgatus (other sources, n = 288) | 17.2 (SD = 1.6)
Aotus | azarai (n = 6) | 21.1
Aotus | lemurinus (n = 34) | 16.8
Saimiri | sciureus (n = 2) | 30.2
Saimiri | sciureus (other sources, n = 216) | 24.0 (SD = 2.0)
Saimiri | boliviensis (n = 3) | 25.7
Saimiri | oerstedii (n = 81) | 21.4
Psittacus | erithacus (Olkowicz sample, n = 2) | 8.8
Psittacus | erithacus (other sources, n = 1) | 6.4
Ara | ararauna (Olkowicz sample, n = 1) | 20.7
Ara | ararauna (other sources, n = 20) | 17.0
Ara | chloropterus (n = 7) | 22.2
Ara | hyacinthus (n = 12) | 25.0
Ara | rubrogenys (n = 4) | 12.1

2.2 Behavioral data collection
For each genus, we searched English language journals for behavioral observations demonstrating learning, behavioral flexibility, problem-solving, social communication, and other traits that imply intelligence. We excluded observations that involved training or interaction with humans (such as the Alex studies).
A problematic element of this type of behavioral study is the disproportionate research effort focused on certain species over others, and in certain domains of behavior. While none of the animals studied had an especially large representation in the literature, Aotus, Ara and Psittacus were generally less well represented than Saimiri. In the case of Psittacus, a very large proportion of our data was drawn from two sources by a single author. Additionally, conventions regarding the way in which behavior was studied and which details of behavior were considered salient seemed to differ somewhat between ornithologists and primatologists. For instance, while the vocal repertoire and functional significance of vocalizations were frequently a topic of great interest to primatologists, at least in our sample, vocal communication was given a much more casual treatment by ornithologists.
Therefore, our data may cause primates and birds to appear to have more qualitative differences in cognitive ability than actually exist.
In our analysis, we make no explicit attempt to correct for these differences in research effort, but do indicate areas of disproportionately high or low coverage of a species, and recommend that the reader bear these in mind when interpreting our results.
After collection, the behavioral observations were sorted into eight functional categories, comprising three which primarily involved interaction with the environment (tool use, navigation/range, and shelter selection), and five involving social interaction (group dynamics, mate dynamics, care of young, play, and predation prevention). For the accompanying data for each genus, see S1. Below are full descriptions of the eight behavioral categories.
2.2.1 Tool use
Tool use involves the manipulation of an intermediate object to affect a final object. In more sophisticated instances of this behavior, the intermediate object is modified from its original form to better serve its intended purpose. Some degree of tool use is widely reported among great apes and certain corvids, and is seldom seen in "lower" animals (Smith & Bentley-Condit, 2010). Tool use may draw on cognitive abilities such as planning, means-end reasoning, spatial or mechanical reasoning, and creativity. (However, it cannot be assumed that apparent tool use demonstrates any of these abilities–some simple animals can use objects as "tools" in a highly inflexible, presumably hard-coded way which requires no learning.)
Despite an extensive search, examples of tool use in the wild (or a wild-mimicking environment) were not found for either Aotus or Psittacus. However, since at least one of these animals (Psittacus) can display tool-using behaviors in environments with frequent human contact (for instance, in a laboratory or pet environment) (Janzen, Janzen, & Pond, 1976), it's unlikely that these animals have no capacity at all for developing tool use. Therefore, other explanations for the lack of tool use in the wild should be considered. For one, both species are somewhat more neophobic than Saimiri and Ara, and thus are less likely to interact with unfamiliar objects frequently enough to develop a use for them. Furthermore, both species are substantially less well-studied in the wild than Saimiri (though not than Ara), and may simply use tools too infrequently or inconspicuously to be noticed. 
However, because of its relative rarity, spontaneous tool use is often taken to be "absent until proven present" in an animal species, and we have adhered to this convention in the present study. Readers who disagree with this approach may regard the scores of Aotus and Psittacus on this metric as a lower bound.
2.2.2 Navigation/range
The range and territory size of an animal are how far it typically travels on a day-to-day basis, and the total area in which its ranging happens, respectively. Since an animal that ranges more widely will encounter a greater variety of environments than one that ranges less widely, larger ranges or territory sizes could signal more behavioral flexibility. Additionally, large ranges or variable routes may be more taxing on memory.
Relatively little information was available in this category for Ara and Psittacus. One might also expect that the skills required for navigation on land would differ substantially from those required for air navigation.
In the final version of the survey, we consolidated this category with the following category.\n2.2.3 Shelter selection\nWhere an animal chooses to rest or nest is one of the most frequent decisions it makes, and for prey animals may be one of the more important for survival. When searching for shelter, some optimization criteria may place large demands on perceptual or planning abilities, or on memory.\nIn the final version of the survey, we consolidated this category with the category above. While neither category alone was judged by participants to contain a large amount of evidence for intelligence, we hoped that combining the two would improve the signal and balance a survey heavy on social behaviors.\n2.2.4 Group dynamics\nThe dynamics of group interaction vary dramatically between species, and frequently even within species in different geographic locations. Social group size of non-herding animals (that is, animals that do not affiliate with conspecifics merely to reduce predation risk) is thought to be correlated with intelligence, and some theories of the evolution of higher intelligence implicate social competition or cooperation as a primary driver (Dunbar, 1998). Furthermore, the range and flexibility of an animal’s vocal or visual communication may indicate the level of complexity of the species’ social life. Often, animals that have close or important relationships with their conspecifics engage in social grooming behaviors.\nDue to the amount and complexity of evidence that fell into this category, it was particularly difficult to consolidate these behaviors into a truly representative description of each species. In the final version of the survey, this category was consolidated into a new category, “Social dynamics”.\n2.2.5 Mate dynamics\nMate dynamics includes sexual and pair bonding behavior, as well as behaviors relevant to sexual competition. Some examples of behavior that falls into this category are courtship behaviors, social grooming between mates, and joint territorial displays. Some pairbonded animals, particularly birds, engage in the majority of their social interactions with a mate, rather than with group members (Luescher, 2006).\nIn the final version of the survey, this category was consolidated into the category “Social dynamics”.\n2.2.6 Care of young\nAs well as being an important social relationship in some species of animals, parent/offspring interaction during development generally holds clues about the degree to which learning influences an animal’s behavior, as well as whether an animal participates in social learning (that is, learning by mimicry or emulation of conspecifics) or trial-and-error learning. Longer development times and higher parental investment typically correlate with learning ability in a species.\nAotus was not included in this comparison due to a lack of information. Psittacus and Ara had very poor representation in the literature compared to Saimiri. However, the category was retained due to its consistently high rating on the importance score.\n2.2.7 Play\nPlay behavior is essentially a nonfunctional, simulated version of a functional behavior found in adult animals’ usual repertoire, and is more often seen in juvenile animals. Play probably exists to facilitate learning and practice of necessary skills, especially social ones. 
Play fighting is a very common form of play in social species.\nParticipants in our early Mechanical Turk sample did not find this category very informative, and indeed it is more a correlate of (or precursor to) intelligent behavior than intelligent in itself. It was therefore removed from the final version of the survey, although some details were preserved in the “Social dynamics” category. \n2.2.8 Predation prevention\nAnimals evade predation through individual precautionary actions, threat signalling, and sometimes group coordination. Since offspring are both highly valuable and also more vulnerable to predation, much of the behavior in this category centers around defense of the nest. Associations between threat types and the amount of alarm appropriate may be learned to a greater or lesser degree in different species, as well as the proper form of the threat signal in the animal’s social group. Furthermore, threats may be classed into few or many types, facilitating greater or lesser nuance in response actions.\nParticipants in our early Mechanical Turk sample did not find this category very informative, and it was not easily subsumable into “Social dynamics”, so this category was struck from the final version of the survey.\n2.3 Survey construction and procedures\nWe synthesized the reports from each category into a representative summary of a species’ behavior in that domain. Where possible, this included any details that might indicate the degree to which behaviors were learned, demonstrated flexibility across different environmental conditions, or were apparently supported by particular cognitive strategies. The summaries were then used to construct a questionnaire which asked participants to rate the apparent intelligence of behaviors against other behaviors in that same category. Afterward, participants were asked which categories they thought contained the most evidence about intelligence, on a scale of one to five. The questionnaire was given to a small random sample of Mechanical Turk workers (n = 12), as well as a small nonrandom panel composed of myself, Paul Christiano, Finan Adamson, Carl Shulman, Chris Olah and Katja Grace. Later, the questionnaire was condensed into four sections (tool use, navigation/shelter selection, social dynamics, and care of young) and given to a larger sample of Mechanical Turk workers (n = 104).\nBecause the term “intelligence” is somewhat value-laden and tends to have many idiosyncratic meanings attached to it, we chose to use the word “cognitive complexity” in its place. The hope was that this would reduce conflation with “rationality” or “adaptiveness”, which are both common lay misunderstandings of the term. We also attempted to reduce bias in survey responses by blinding participants to properties not directly relevant to the behaviors being described (including brain size and, wherever possible, membership in the bird or primate class).\n2.3.1 Pilot survey\nThe pilot survey included all eight categories of behavior, as well as longer and more detailed summaries. Mechanical Turk participants were selected through the platform Positly, and the survey was administered using Google Forms. Participants were asked to rate the behaviors presented on a 10 point scale against others in the same category, not against behaviors that had been presented in previous categories, and were given the option of providing commentary. Participants were also asked to rate categories against each other for evidence of intelligence on a five point scale. 
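The report does not spell out the exact formula used to combine the within-category ratings with the category importance ratings described above. Purely as an illustration of how such a weighting could work, here is a minimal sketch: the category names are the ones used in the final survey, but the numbers and the weighted-average rule are invented placeholders, not the authors' actual analysis.

```python
# Illustrative sketch only: within-category behavior ratings weighted by the
# importance participants assigned to each category. All numbers are invented.

# Mean 1-10 behavior ratings per category for a single genus (hypothetical values).
behavior_ratings = {
    "Tool use": 7.0,
    "Navigation/Shelter selection": 5.5,
    "Social dynamics": 6.0,
    "Care of young": 6.5,
}

# Mean 1-5 importance ratings per category from the same participants (hypothetical values).
importance = {
    "Tool use": 4.2,
    "Navigation/Shelter selection": 2.8,
    "Social dynamics": 3.9,
    "Care of young": 3.7,
}

def weighted_score(ratings, weights):
    """Importance-weighted average of the per-category ratings."""
    total_weight = sum(weights[category] for category in ratings)
    return sum(ratings[category] * weights[category] for category in ratings) / total_weight

print(f"Weighted intelligence score: {weighted_score(behavior_ratings, importance):.2f}")
```

Under a scheme like this, categories that participants judge more diagnostic of intelligence (such as tool use) pull the overall score around more than categories judged less diagnostic.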
All questions from this version can be found in S1, and participant responses can be found in S2.
Mechanical Turk data from this round of the survey was used to inform the abridgment of the final version. In particular, we removed or consolidated sections that had been rated by participants as less important, and adjusted the wording or level of detail on questions that seemed unclear to participants.
2.3.2 Final survey
The final version of the survey included four categories: tool use, navigation/shelter selection, social dynamics, and care of young. Social dynamics collapsed group dynamics, mate dynamics, and play. This version of the survey was administered via GuidedTrack, and added mandatory wait times to pages as well as a free response question assessing comprehension of the task instructions. Analysis was restricted to participants who were not rated as having poor comprehension (n = 77). All questions in this version can be found in S1, and participant responses can be found in S2.
3 Estimating animal intelligence by survey: Results
3.0.1 Pilot survey
We will present only the results from the small panel here; however, the full data from this section can be found in the supplementary file. 
Tool use, Group dynamics, and Play emerged as the most important categories, according to participant rating, with Navigation & range and Shelter selection rated as least important. Across most categories, especially those rated as more important, there was strong agreement that Saimiri, Ara and Psittacus outranked Aotus. There was also reasonably good agreement that Saimiri and Ara outranked Psittacus. Finally, Saimiri generally outranked Ara, though the effect was less strong than in the other comparisons.
Figure 2: Fields without scores ("Care of young" for Aotus) indicate that insufficient data was found to compose a behavioral description for that animal.
Given these data, participants appeared to find our small-brained primate, Aotus, to display the least intelligent behavior, and found our large-brained primate, Saimiri, to display the most intelligent behavior, although within a similar range to our large-brained bird, Ara. 
3.0.2 Final survey
Among all four categories, participants reported that our descriptions of tool use provided the most evidence for intelligence, especially compared to the least informative category (Navigation and shelter selection). This aligned well with the pattern of answers within the category of Tool use, where there was strong agreement among participants on the rank order of Tool use behaviors, and the differences between Tool use behavior means were the largest of any category. The two larger-brained genera, Saimiri and Ara, were clear winners in this case, with participants reporting no significant difference between these two. 
Social dynamics and Care of young were not clearly distinguishable from each other by importance rating; however, participants responded quite differently to the evidence presented in these categories. All included genera (Saimiri, Ara and Psittacus) obtained about the same average score for Care of young, with no significant differences between them. However, for Social dynamics there were clear differences between the smaller-brained genera, Aotus and Psittacus, as well as the larger-brained bird and smaller-brained primate.
Considering the borderline-significant comparison between Saimiri and Ara in this category (p=0.06), it would appear that participants rated birds slightly higher overall than primates on Social dynamics. Finally, Navigation and shelter selection was judged least important, but there were nonetheless clear differences in behavior scores between birds and primates, with primates outscoring birds, and no significant differences between sizes.

Differences in mean importance ratings between behavioral categories:

Comparison | Difference in means
Tool use vs Navigation/Shelter selection | 1.0 ± 0.2 (p<0.001)
Tool use vs Social dynamics | 0.6 ± 0.2 (p<0.001)
Tool use vs Care of young | 0.8 ± 0.2 (p<0.001)
Navigation/Shelter selection vs Social dynamics | -0.4 ± 0.2 (p<0.01)
Navigation/Shelter selection vs Care of young | -0.2 ± 0.2 (p=0.13)
Social dynamics vs Care of young | 0.2 ± 0.2 (p=0.32)

Differences in mean behavior scores between genera, by category:

Category | Saimiri vs Ara | Saimiri vs Aotus | Saimiri vs Psittacus | Ara vs Aotus | Ara vs Psittacus | Aotus vs Psittacus
Tool use | 0.1 ± 0.4 (p=0.79) | 3.7 ± 0.4 (p<0.001) | (see Saimiri vs Aotus) | 3.6 ± 0.4 (p<0.001) | (see Ara vs Aotus) | NA
Navigation/Shelter selection | 1.0 ± 0.4 (p<0.01) | -0.5 ± 0.4 (p=0.29) | 0.5 ± 0.4 (p=0.13) | -1.5 ± 0.4 (p<0.001) | -0.5 ± 0.3 (p=0.15) | 1.0 ± 0.4 (p<0.01)
Social dynamics | -0.8 ± 0.4 (p=0.06) | 0.6 ± 0.4 (p=0.16) | -0.3 ± 0.4 (p=0.51) | 1.4 ± 0.4 (p<0.001) | 0.5 ± 0.4 (p=0.19) | -0.9 ± 0.4 (p<0.01)
Care of young | 0.1 ± 0.3 (p=0.83) | Not measured | -0.3 ± 0.3 (p=0.38) | Not measured | -0.4 ± 0.3 (p=0.27) | Not measured

Figure 3: Fields without scores ("Care of young" for Aotus) indicate that insufficient data was found to compose a behavioral description for that animal.
Overall, participants in this sample seemed to find the largest and most important differences between the two large- and two small-brained animals, not between the two primates and two birds. However, they did rate the birds slightly higher on social behaviors, while the primates were rated slightly higher on Navigation and shelter selection.
It's possible that since the Tool use section compared instances of a behavior with the absence of a similar behavior, differences in scoring may have been inflated, relative to comparison between a tool-using behavior and an unrelated behavior in a non-tool-using animal. Indeed, it is probable that the non-tool-using animals in our sample have some problem-solving behavior akin to tool use in their repertoire, which was simply subtle enough to go unremarked upon by investigators. This sort of behavior could be seen as a precursor to the development of spontaneous complex tool use, and is probably what enables captive Psittacus to learn to solve tool-type problems in a laboratory setting. It is nonetheless striking that both larger-brained genera had strong evidence of spontaneous tool use, being either a regular component of their day-to-day lives or an impressively novel use for an unfamiliar object, while no reports of the smaller-brained genera in the wild mentioned comparable problem-solving behaviors.
4 Discussion
4.1 Conclusion
In all iterations, we found the survey method of estimating animal intelligence to be quite noisy, without strong agreement on the importance of some categories, or on the rankings of species within some categories. This is unsurprising, since participants were given descriptions of behaviors stripped of much potentially relevant context, in the interest of time, and were not experts in either intelligence or animal behavior.
However, there was broad agreement between our participants in both versions of the survey on some high-level conclusions, namely: a) that tool use as presented was a particularly important source of evidence; and b) that, when rankings were weighted by importance as judged by participants, the two larger-brained animals outscored the two smaller-brained animals.
Because of the small number of genera represented in our survey, it is difficult to draw strong conclusions about the relative contributions of neuron count, architecture, and other factors to intelligence. However, our data do not support the hypothesis that one tissue architecture is greatly superior to the other as a rule, and weakly support the hypothesis that birds and primates with similar neuron numbers have similar cognitive abilities. In particular, given the behaviors described in our survey, participants were not able to systematically distinguish the two birds from the two primates across all categories, but were substantially more able to distinguish the small-brained animals from those with twice as many brain neurons.
We also did not see strong evidence of specialized intelligence that differed between the groups. That is, the two birds in our study did not seem clearly better or worse than the two primates at any particular kinds of cognitively demanding behaviors. However, this is not a claim that none of the species involved have specialized abilities. We could easily imagine it being the case, for example, that if one were to place an owl monkey brain or a grey parrot brain in the body of an ostrich, both would perform similarly well at the cognitive challenges presented by ostrich life, while an owl monkey brain would not do nearly as well as a grey parrot brain at living the life of a grey parrot. 
4.2 Implications and future directions
We hope our suggestive–if inconclusive–results spark greater interest in the highly neglected field of comparative animal intelligence. In particular, the further development and use of validated protocols for animal intelligence measurement seems to be a significant bottleneck to further progress. Furthermore, the gold standard of human psychometrics may not be a feasible model for animal intelligence measurement, given the prohibitive expense an analogous program in animals would incur (if traditional psychometric methods could even be applied usefully to most animals).
Our surveying method may represent an inexpensive alternative that can produce usable if imperfect results. Although we believe it has reasonably good theoretical support, the method is nonetheless unvalidated and would surely require refinement. To that end, future studies may consider applying our method to species where the rank order is more certain, such as humans and chimpanzees, or the collection of primate species that have been compared by a psychometric battery (see here).
With regard to the question of avian and primate per-neuron intelligence, our result has limited generalizability due to the small number of genera represented. Even within a broad architecture type, species may still vary in brain characteristics that are relevant to intelligence, and we might expect larger evolutionary distances within Primates or Aves to be reflected in brain differences. Idiosyncratic selective pressures of certain niches likely also have an impact here. In future, it may be fruitful to compare other orders of bird, such as Passeriformes (and especially Corvidae), with primates.
As a particularly evolutionarily recent clade made up of strong ecological generalists, Corvidae might have developed structural improvements allowing them to excel in tool use and other cognitive abilities relative to other animals in their brain size class, and indeed there are at least many anecdotal reports of spontaneous tool use in wild corvids. There may also be interesting brain structure differences between New World primates, like the two represented in this study, and Old World primates.\nSeveral limitations to the applicability of any bird-primate comparisons to the broader question surrounding architecture flexibility should be noted. Firstly, all brain structures other than the cerebral cortex are shared between birds and primates. Although these structures only account for a minority of brain volume, they could nonetheless perform some important precursor function to higher processing, such that an animal with a differently organized version could not perform as well cognitively, no matter their cortical architecture. This possibility seems less likely in light of the existence of cognitively advanced cephalopods like octopi, who are not vertebrates and therefore do not have a spinal cord or any other brain structures in common with birds and mammals.\nAnother issue pertains to scaling. While bird architectures clearly have the capacity to scale to the size of the smaller primate brains, no larger bird architectures have yet developed. This could be due to a number of limiting factors, including size limits imposed by the need to fly, a lack of adjacent niches that would support larger brains, or inherent randomness in the trajectory of brain evolution across lineages. However, it could also represent an upper bound on the scalability of bird-type cortical architecture.\n5 Contributions\nResearch, analysis and writing were done by Tegan McCaslin. Editing and feedback were provided by Katja Grace and Justis Mills. Feedback was provided by Daniel Kokotajlo and Carl Shulman.\n6 Bibliography\nBailey, R. C., & Mettetal, G. W. (1977). PERCEIVED INTELLIGENCE IN MARRIED PARTNERS. Social Behavior and Personality: An International Journal, 5(1), 137–141. https://doi.org/10.2224/sbp.1977.5.1.137\nBorkenau, P., & Liebler, A. (1993). Convergence of stranger ratings of personality and intelligence with self-ratings, partner ratings, and measured intelligence. Journal of Personality and Social Psychology, 65(3), 546–553. https://doi.org/10.1037/0022-3514.65.3.546\nBurkart, J. M., Schubiger, M. N., & van Schaik, C. P. (2017). The evolution of general intelligence. Behavioral and Brain Sciences, 40. https://doi.org/10.1017/S0140525X16000959\nCarroll, J. B. (1997). Psychometrics, intelligence, and public perception. Intelligence, 24(1), 25–52. https://doi.org/10.1016/S0160-2896(97)90012-X\nDunbar, R. I. M. (1998). The social brain hypothesis. Evolutionary Anthropology: Issues, News, and Reviews, 6(5), 178–190. https://doi.org/10.1002/(SICI)1520-6505(1998)6:5<178::AID-EVAN5>3.0.CO;2-8\nGüntürkün, O., Stacho, M., & Strockens, F. (2017). The brains of reptiles and birds. In J. H. Kaas (Ed.), Evolution of Nervous Systems (2nd ed., Vol. 1, pp. 171–221). Oxford, United Kingdom: Academic Press.\nHerculano-Houzel, S., Collins, C. E., Wong, P., & Kaas, J. H. (2007). Cellular scaling rules for primate brains. Proceedings of the National Academy of Sciences, 104(9), 3562–3567. https://doi.org/10.1073/pnas.0611396104\nHerculano-Houzel, Suzana. (2009). 
The human brain in numbers: a linearly scaled-up primate brain. Frontiers in Human Neuroscience, 3. https://doi.org/10.3389/neuro.09.031.2009\nHerculano-Houzel, Suzana. (2011). Scaling of Brain Metabolism with a Fixed Energy Budget per Neuron: Implications for Neuronal Activity, Plasticity and Evolution. PLoS ONE, 6(3), e17514. https://doi.org/10.1371/journal.pone.0017514\nJanzen, M. J., Janzen, D. H., & Pond, C. M. (1976). Tool-Using by the African Grey Parrot (Psittacus erithacus). Biotropica, 8(1), 70.\nKaas, J. H. (Ed.). (2017). Evolution of Nervous Systems (2nd ed.). Oxford, United Kingdom: Academic Press.\nLuescher, A. U. (Ed.). (2006). Manual of parrot behavior (1st ed). Ames, Iowa: Blackwell.\nNaumann, R., & Laurent, G. (2017). Function and evolution of the reptilian cerebral cortex. In J. H. Kaas (Ed.), Evolution of Nervous Systems (2nd ed., Vol. 1, pp. 491–518). Oxford, United Kingdom: Academic Press.\nOlkowicz, S., Kocourek, M., Lučan, R. K., Porteš, M., Fitch, W. T., Herculano-Houzel, S., & Němec, P. (2016). Birds have primate-like numbers of neurons in the forebrain. Proceedings of the National Academy of Sciences, 113(26), 7255–7260. https://doi.org/10.1073/pnas.1517131113\nPuelles, L., Sandoval, J., Ayad, A., del Corral, R., Alonso, A., Ferran, J., & Martinez-de-la-Torre, M. (2017). The pallium in reptiles and birds in light of the updated tetrapartite pallium model. In J. H. Kaas (Ed.), Evolution of Nervous Systems (2nd ed., Vol. 1, pp. 519–555). Oxford, United Kingdom: Academic Press.\nReader, S. M., Hager, Y., & Laland, K. N. (2011). The evolution of primate general and cultural intelligence. Philosophical Transactions of the Royal Society B: Biological Sciences, 366(1567), 1017–1027. https://doi.org/10.1098/rstb.2010.0342\nReiner, A., Yamamoto, K., & Karten, H. J. (2005). Organization and evolution of the avian forebrain. The Anatomical Record Part A: Discoveries in Molecular, Cellular, and Evolutionary Biology, 287A(1), 1080–1102. https://doi.org/10.1002/ar.a.20253\nSmith, & Bentley-Condit, V. (2010). Animal tool use: current definitions and an updated comprehensive catalog. Behaviour, 147(2), 185-32A. https://doi.org/10.1163/000579509X12512865686555\nRoth, G., & Dicke, U. (2005). Evolution of the brain and intelligence. Trends in Cognitive Sciences, 9(5), 250–257. https://doi.org/10.1016/j.tics.2005.03.005\nShulman, C., & Bostrom, N. (2012). How Hard Is Artificial Intelligence? Evolutionary Arguments and Selection Effects. Journal of Consciousness Studies, 19(7–8), 103–130.", "url": "https://aiimpacts.org/investigation-into-the-relationship-between-neuron-count-and-intelligence-across-differing-cortical-architectures/", "title": "Investigation into the relationship between neuron count and intelligence across differing cortical architectures", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-02-11T21:31:01+00:00", "paged_url": "https://aiimpacts.org/feed?paged=12", "authors": ["Tegan McCaslin"], "id": "edec03d7ff9ce96bfdbcaf7c15ea8b92", "summary": []} {"text": "Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post\n\nBy Daniel Kokotajlo, 2 July 2019\n\nFigure 0: The “four main determinants of forecasting accuracy.” 1\nExperience and data from the Good Judgment Project (GJP) provide important evidence about how to make accurate predictions. For a concise summary of the evidence and what we learn from it, see this page. 
For a review of Superforecasting, the popular book written on the subject, see this blog. \nThis post explores the evidence in more detail, drawing from the book, the academic literature, the older Expert Political Judgment book, and an interview with a superforecaster. Readers are welcome to skip around to parts that interest them:\n1. The experiment\nIARPA ran a forecasting tournament from 2011 to 2015, in which five teams plus a control group gave probabilistic answers to hundreds of questions. The questions were generally about potential geopolitical events more than a month but less than a year in the future, e.g. “Will there be a violent incident in the South China Sea in 2013 that kills at least one person?” The questions were carefully chosen so that a reasonable answer would be somewhere between 10% and 90%.2 The forecasts were scored using the original Brier score—more on that in Section 2.3\nThe winning team was the GJP, run by Philip Tetlock & Barbara Mellers. They recruited thousands of online volunteers to answer IARPA’s questions. These volunteers tended to be males (83%) and US citizens (74%). Their average age was forty. 64% of respondents held a bachelor’s degree, and 57% had postgraduate training.4\nGJP made their official predictions by aggregating and extremizing the predictions of their volunteers.5 They identified the top 2% of predictors in their pool of volunteers each year, dubbing them “superforecasters,” and put them on teams in the next year so they could collaborate on special forums. They also experimented with a prediction market, and they did a RCT to test the effect of a one-hour training module on forecasting ability. The module included content about probabilistic reasoning, using the outside view, avoiding biases, and more. Attempts were made to find out which parts of the training were most helpful—see Section 4.\n2. The results & their intuitive meaning\nHere are some of the key results:\n“In year 1 GJP beat the official control group by 60%. In year 2, we beat the control group by 78%. GJP also beat its university-affiliated competitors, including the University of Michigan and MIT, by hefty margins, from 30% to 70%.”6\n“The Good Judgment Project outperformed a prediction market inside the intelligence community, which was populated with professional analysts who had classified information, by 25 or 30 percent, which was about the margin by which the superforecasters were outperforming our own prediction market in the external world.”7\n“Teams of ordinary forecasters beat the wisdom of the crowd by about 10%. Prediction markets beat ordinary teams by about 20%. And [teams of superforecasters] beat prediction markets by 15% to 30%.”8 “On average, teams were 23% more accurate than individuals.”9\nWhat does Tetlock mean when he says that one group did X% better than another? By examining Table 4 (in Section 4)  it seems that he means X% lower Brier score. What is the Brier score? For more details, see the Wikipedia article; basically, it measures the average squared distance from the truth. This is why it’s better to have a lower Brier score—it means you were on average closer to the truth.10\nHere is a bar graph of all the forecasters in Year 2, sorted by Brier score:11\n\nFor this set of questions, guessing randomly (assigning even odds to all possibilities) would yield a Brier score of 0.53. So most forecasters did significantly better than that. 
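Before turning to the tails of that distribution, here is a minimal, self-contained illustration of the original (two-outcome) Brier score described above. This is my own sketch rather than anything from the GJP materials: the forecasts in it are invented, and the 2p(1-p) expectation it checks is a standard property of calibrated binary forecasts that reappears in the "Way One" calculations below.

```python
# A small illustration of the original Brier score: the sum of squared errors over
# both outcomes of a binary question, averaged over questions. Forecasts are invented.

def brier_original(prob_yes: float, outcome_yes: bool) -> float:
    """Original (Brier 1950) score for one binary question: 0 is best, 2 is worst."""
    o = 1.0 if outcome_yes else 0.0
    return (prob_yes - o) ** 2 + ((1 - prob_yes) - (1 - o)) ** 2

# A confident correct forecast scores near 0; a confident wrong one scores near 2.
print(brier_original(0.9, True))    # 0.02
print(brier_original(0.9, False))   # 1.62
print(brier_original(0.5, True))    # 0.5 -- even odds on a single binary question

# For a perfectly calibrated forecaster who always says p and is right a fraction p
# of the time, the expected score works out to 2p(1-p):
for p in (0.8, 0.93):
    expected = p * brier_original(p, True) + (1 - p) * brier_original(p, False)
    print(round(expected, 3), round(2 * p * (1 - p), 3))  # 0.32 at p=0.8, ~0.13 at p=0.93
```

The 0.32 figure matches the middle-of-the-pack example discussed just below, and the value near 0.13 corresponds to the well-calibrated 93%-confidence forecaster used there as a stand-in for the best performers.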
Some people—the people on the far left of this chart, the superforecasters—did much better than the average. For example, in year 2, the superforecaster Doug Lorch did best with 0.14. This was more than 60% better than the control group.12 Importantly, being a superforecaster in one year correlated strongly with being a superforecaster the next year; there was some regression to the mean but roughly 70% of the superforecasters maintained their status from one year to the next.13\nOK, but what does all this mean, in intuitive terms? Here are three ways to get a sense of how good these scores really are:\nWay One: Let’s calculate some examples of prediction patterns that would give you Brier scores like those mentioned above. Suppose you make a bunch of predictions with 80% confidence and you are correct 80% of the time. Then your Brier score would be 0.32, roughly middle of the pack in this tournament. If instead it was 93% confidence correct 93% of the time, your Brier score would be 0.132, very close to the best superforecasters and to GJP’s aggregated forecasts.14 In these examples, you are perfectly calibrated, which helps your score—more realistically you would be imperfectly calibrated and thus would need to be right even more often to get those scores.\nWay Two: “An alternative measure of forecast accuracy is the proportion of days on which forecasters’ estimates were on the correct side of 50%. … For all questions in the sample, a chance score was 47%. The mean proportion of days with correct estimates was 75%…”15 According to this chart, the superforecasters were on the right side of 50% almost all the time:16\n\nWay Three: “Across all four years of the tournament, superforecasters looking out three hundred days were more accurate than regular forecasters looking out one hundred days.”17 (Bear in mind, this wouldn’t necessarily hold for a different genre of questions. For example, information about the weather decays in days, while information about the climate lasts for decades or more.)\n3. Correlates of good judgment\nThe data from this tournament is useful in two ways: It helps us decide whose predictions to trust, and it helps us make better predictions ourselves. This section will focus on which kinds of people and practices best correlate with success—information which is relevant to both goals. Section 4 will cover the training experiment, which helps to address causation vs. correlation worries.\nFeast your eyes on this:18\n\nThis shows the correlations between various things.19 The leftmost column is the most important; it shows how each variable correlates with (standardized) Brier score. (Recall that Brier scores measure inaccuracy, so negative correlations are good.)\nIt’s worth mentioning that while intelligence correlated with accuracy, it didn’t steal the show.20 The same goes for time spent deliberating.21 The authors summarize the results as follows: “The best forecasters scored higher on both intelligence and political knowledge than the already well-above-average group of forecasters. The best forecasters had more open-minded cognitive styles. They benefited from better working environments with probability training and collaborative teams. And while making predictions, they spent more time deliberating and updating their forecasts.”22\nThat big chart depicts all the correlations individually. Can we use them to construct a model to take in all of these variables and spit out a prediction for what your Brier score will be? Yes we can:\nFigure 3. 
Structural equation model with standardized coefficients.
This model has a multiple correlation of 0.64.23 Earlier, we noted that superforecasters typically remained superforecasters (i.e. in the top 2%), proving that their success wasn't mostly due to luck. Across all the forecasters, the correlation between performance in one year and performance in the next year is 0.65.24 So we have two good ways to predict how accurate someone will be: Look at their past performance, and look at how well they score on the structural model above.
I speculate that these correlations underestimate the true predictability of accuracy, because the forecasters were all unpaid online volunteers, and many of them presumably had random things come up in their lives that got in the way of making good predictions—perhaps they have a kid, or get sick, or move to a new job and so stop reading the news for a month, and their accuracy declines.25 Yet still 70% of the superforecasters in one year remained superforecasters in the next.
Finally, what about superforecasters in particular? Is there anything to say about what it takes to be in the top 2%? Tetlock devotes much of his book to this. It is hard to tell how much of his recommendations comes from data analysis and how much is just his own synthesis of the interviews he's conducted with superforecasters. Here is his "Portrait of the modal superforecaster."26
Philosophic outlook:

Cautious: Nothing is certain.
Humble: Reality is infinitely complex.
Nondeterministic: Whatever happens is not meant to be and does not have to happen.

Abilities & thinking styles:

Actively open-minded: Beliefs are hypotheses to be tested, not treasures to be protected.
Intelligent and knowledgeable, with a "Need for Cognition": Intellectually curious, enjoy puzzles and mental challenges.
Reflective: Introspective and self-critical.
Numerate: Comfortable with numbers.

Methods of forecasting:

Pragmatic: Not wedded to any idea or agenda.
Analytical: Capable of stepping back from the tip-of-your-nose perspective and considering other views.
Dragonfly-eyed: Value diverse views and synthesize them into their own.
Probabilistic: Judge using many grades of maybe.
Thoughtful updaters: When facts change, they change their minds.
Good intuitive psychologists: Aware of the value of checking thinking for cognitive and emotional biases.

Work ethic:

Growth mindset: Believe it's possible to get better.
Grit: Determined to keep at it however long it takes.

Additionally, there is experimental evidence that superforecasters are less prone to standard cognitive science biases than ordinary people.27 This is particularly exciting because—we can hope—the same sorts of training that help people become superforecasters might also help overcome biases.
Finally, Tetlock says that "The strongest predictor of rising into the ranks of superforecasters is perpetual beta, the degree to which one is committed to belief updating and self-improvement. It is roughly three times as powerful a predictor as its closest rival, intelligence."28 Unfortunately, I couldn't find any sources or data on this, nor an operational definition of "perpetual beta," so we don't know how he measured it.29
4.
The training and Tetlock’s commandments\nThis section discusses the surprising effect of the training module on accuracy, and finishes with Tetlock’s training-module-based recommendations for how to become a better forecaster.30\nThe training module, which was randomly given to some participants but not others, took about an hour to read.31 The authors describe the content as follows:\n“Training in year 1 consisted of two different modules: probabilistic reasoning training and scenario training. Scenario-training was a four-step process: 1) developing coherent and logical probabilities under the probability sum rule; 2) exploring and challenging assumptions; 3) identifying the key causal drivers; 4) considering the best and worst case scenarios and developing a sensible 95% confidence interval of possible outcomes; and 5) avoid over-correction biases. … Probabilistic reasoning training consisted of lessons that detailed the difference between calibration and resolution, using comparison classes and base rates (Kahneman & Tversky, 1973; Tversky & Kahneman, 1981), averaging and using crowd wisdom principles (Surowiecki, 2005), finding and utilizing predictive mathematical and statistical models (Arkes, 1981; Kahneman & Tversky, 1982), cautiously using time-series and historical data, and being self-aware of the typical cognitive biases common throughout the population.”32\nIn later years, they merged the two modules into one and updated it based on their observations of the best forecasters. The updated training module is organized around an acronym:33\n\nImpressively, this training had a lasting positive effect on accuracy in all four years:\n\nOne might worry that training improves accuracy by motivating the trainees to take their jobs more seriously. Indeed it seems that the trained forecasters made more predictions per question than the control group, though they didn’t make more predictions overall. Nevertheless it seems that the training also had a direct effect on accuracy as well as this indirect effect.34\nMoving on, let’s talk about the advice Tetlock gives to his audience in Superforecasting, advice which is based on, though not identical to, the CHAMPS-KNOW training. The book has a few paragraphs of explanation for each commandment, a transcript of which is here; in this post I’ll give my own abbreviated explanations:\nTEN COMMANDMENTS FOR ASPIRING SUPERFORECASTERS\n(1) Triage: Don’t waste time on questions that are “clocklike” where a rule of thumb can get you pretty close to the correct answer, or “cloudlike” where even fancy models can’t beat a dart-throwing chimp. \n(2) Break seemingly intractable problems into tractable sub-problems: This is how Fermi estimation works. One related piece of advice is “be wary of accidentally substituting an easy question for a hard one,” e.g. substituting “Would Israel be willing to assassinate Yasser Arafat?” for “Will at least one of the tests for polonium in Arafat’s body turn up positive?” \n(3) Strike the right balance between inside and outside views: In particular, first anchor with the outside view and then adjust using the inside view. (More on this in Section 5)\n(4) Strike the right balance between under- and overreacting to evidence: “Superforecasters aren’t perfect Bayesian predictors but they are much better than most of us.”35 Usually do many small updates, but occasionally do big updates when the situation calls for it. 
Take care not to fall for things that seem like good evidence but aren’t; remember to think about P(E|H)/P(E|~H); remember to avoid the base-rate fallacy.\n(5) Look for the clashing causal forces at work in each problem: This is the “dragonfly eye perspective,” which is where you attempt to do a sort of mental wisdom of the crowds: Have tons of different causal models and aggregate their judgments. Use “Devil’s advocate” reasoning. If you think that P, try hard to convince yourself that not-P. You should find yourself saying “On the one hand… on the other hand… on the third hand…” a lot.\n(6) Strive to distinguish as many degrees of doubt as the problem permits but no more: Some people criticize the use of exact probabilities (67%! 21%!) as merely a way to pretend you know more than you do. There might be another post on the subject of why credences are better than hedge words like “maybe” and “probably” and “significant chance;” for now, I’ll simply mention that when the authors rounded the superforecaster’s forecasts to the nearest 0.05, their accuracy dropped.36 Superforecasters really were making use of all 101 numbers from 0.00 to 1.00! (EDIT: I am told this may be wrong; the number should be 0.1, not 0.05. See the discussion here and here.)\n(7) Strike the right balance between under- and overconfidence, between prudence and decisiveness.\n(8) Look for the errors behind your mistakes but beware of rearview-mirror hindsight biases.\n(9) Bring out the best in others and let others bring out the best in you: The book spent a whole chapter on this, using the Wehrmacht as an extended case study on good team organization.37 One pervasive guiding principle is “Don’t tell people how to do things; tell them what you want accomplished, and they’ll surprise you with their ingenuity in doing it.” The other pervasive guiding principle is “Cultivate a culture in which people—even subordinates—are encouraged to dissent and give counterarguments.”38\n(10) Master the error-balancing bicycle: This one should have been called practice, practice, practice. Tetlock says that reading the news and generating probabilities isn’t enough; you need to actually score your predictions so that you know how wrong you were.\n(11) Don’t treat commandments as commandments: Tetlock’s point here is simply that you should use your judgment about whether to follow a commandment or not; sometimes they should be overridden.\nIt’s worth mentioning at this point that the advice is given at the end of the book, as a sort of summary, and may make less sense to someone who hasn’t read the book. In particular, Chapter 5 gives a less formal but more helpful recipe for making predictions, with accompanying examples. See the end of this blog post for a summary of this recipe.\n5. On the Outside View & Lessons for AI Impacts\nThe previous section summarized Tetlock’s advice for how to make better forecasts; my own summary of the lessons I think we should learn is more concise and comprehensive and can be found at this page. This section goes into detail about one particular, more controversial matter: The importance of the “outside view,” also known as reference class forecasting. This research provides us with strong evidence in favor of this method of making predictions; however, the situation is complicated by Tetlock’s insistence that other methods are useful as well. 
This section discusses the evidence and attempts to interpret it.\nThe GJP asked people who took the training to self-report which of the CHAMPS-KNOW principles they were using when they explained why they made a forecast; 69% of forecast explanations received tags this way. The only principle significantly positively correlated with successful forecasts was C: Comparison classes.39 The authors take this as evidence that the outside view is particularly important. Anecdotally, the superforecaster I interviewed agreed that reference class forecasting was perhaps the most important piece of the training. (He also credited the training in general with helping him reach the ranks of the superforecasters.) \nMoreover, Tetlock did an earlier, much smaller forecasting tournament from 1987-2003, in which experts of various kinds made the forecasts.40 The results were astounding: Many of the experts did worse than random chance, and all of them did worse than simple algorithms:\n\nFigure 3.2, pulled from Expert Political Judgment, is a gorgeous depiction of some of the main results.41 Tetlock used something very much like a Brier score in this tournament, but he broke it into two components: “Discrimination” and “Calibration.” This graph plots the various experts and algorithms on the axes of discrimination and calibration. Notice in the top right corner the “Formal models” box. I don’t know much about the model used but apparently it was significantly better than all of the humans. This, combined with the fact that simple case-specific trend extrapolations also beat all the humans, is strong evidence for the importance of the outside view.\nSo we should always use the outside view, right? Well, it’s a bit more complicated than that. Tetlock’s advice is to start with the outside view, and then adjust using the inside view.42 He even goes so far as to say that hedgehoggery and storytelling can be valuable when used properly.\nFirst, what is hedgehoggery? Recall how the human experts fall on a rough spectrum in Figure 3.2, with “hedgehogs” getting the lowest scores and “foxes” getting the highest scores. What makes someone a hedgehog or a fox? Their answers to these questions.43 Tetlock characterizes the distinction as follows:\nLow scorers look like hedgehogs: thinkers who “know one big thing,” aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who “do not get it,” and express considerable confidence that they are already pretty proficient forecasters, at least in the long term. High scorers look like foxes: thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible “ad hocery” that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess, and … rather dubious that the cloudlike subject of politics can be the object of a clocklike science.44\nNext, what is storytelling? Using your domain knowledge, you think through a detailed scenario of how the future might go, and you tweak it to make it more plausible, and then you assign a credence based on how plausible it seems. By itself this method is unpromising.45\nDespite this, Tetlock thinks that storytelling and hedgehoggery are valuable if handled correctly. 
On hedgehogs, Tetlock says that hedgehogs provide a valuable service by doing the deep thinking necessary to build detailed causal models and raise interesting questions; these models and questions can then be slurped up by foxy superforecasters, evaluated, and aggregated to make good predictions.46 The superforecaster Bill Flack is quoted in agreement.47 As for storytelling, see these slides from Tetlock’s edge.org seminar:\n\nAs the second slide indicates, the idea is that we can sometimes “fight fire with fire” by using some stories to counter other stories. In particular, Tetlock says there has been success using stories about the past—about ways that the world could have gone, but didn’t—to “reconnect us to our past states of ignorance.”48 The superforecaster I interviewed said that it is common practice now on superforecaster forums to have a designated “red team” with the explicit mission of finding counter-arguments to whatever the consensus seems to be. This, I take it, is an example of motivated reasoning being put to good use. \nMoreover, arguably the outside view simply isn’t useful for some questions.49 People say this about lots of things—e.g. “The world is changing so fast, so the current situation in Syria is unprecedented and historical averages will be useless!”—and are proven wrong; for example, this research seems to indicate that the outside view is far more useful in geopolitics than people think. Nevertheless, maybe it is true for some of the things we wish to predict about advanced AI. After all, a major limitation of this data is that the questions were mainly on geopolitical events only a few years in the future at most. (Geopolitical events seem to be somewhat predictable up to two years out but much more difficult to predict five, ten, twenty years out.)50 So this research does not directly tell us anything about the predictability of the events AI Impacts is interested in, nor about the usefulness of reference-class forecasting for those domains.51\nThat said, the forecasting best practices discovered by this research seem like general truth-finding skills rather than cheap hacks only useful in geopolitics or only useful for near-term predictions. After all, geopolitical questions are themselves a fairly diverse bunch, yet accuracy on some was highly correlated with accuracy on others.52 So despite these limitations I think we should do our best to imitate these best-practices, and that means using the outside view far more than we would naturally be inclined.\nOne final thing worth saying is that, remember, the GJP’s aggregated judgments did at least as well as the best superforecasters.53 Presumably at least one of the forecasters in the tournament was using the outside view a lot; after all, half of them were trained in reference-class forecasting.  So I think we can conclude that straightforwardly using the outside view as often as possible wouldn’t get you better scores than the GJP, though it might get you close for all we know. Anecdotally, it seems that when the superforecasters use the outside view they often aggregate between different reference-class forecasts.54 The wisdom of the crowds is powerful; this is consistent with the wider literature on the cognitive superiority of groups, and the literature on ensemble methods in AI.55\nTetlock describes how superforecasters go about making their predictions.56 Here is an attempt at a summary:\n\nSometimes a question can be answered more rigorously if it is first “Fermi-ized,” i.e. 
broken down into sub-questions for which more rigorous methods can be applied. \nNext, use the outside view on the sub-questions (and/or the main question, if possible). You may then adjust your estimates using other considerations (‘the inside view’), but do this cautiously.\nSeek out other perspectives, both on the sub-questions and on how to Fermi-ize the main question. You can also generate other perspectives yourself.\nRepeat steps 1 – 3 until you hit diminishing returns.\nYour final prediction should be based on an aggregation of various models, reference classes, other experts, etc.\n\nFootnotes", "url": "https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/", "title": "Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-02-07T22:25:29+00:00", "paged_url": "https://aiimpacts.org/feed?paged=12", "authors": ["Daniel Kokotajlo"], "id": "c2479c8ee30b18dc59cfc739a1b3038e", "summary": []} {"text": "Evidence on good forecasting practices from the Good Judgment Project\n\nAccording to experience and data from the Good Judgment Project, the following are associated with successful forecasting, in rough decreasing order of combined importance and confidence:\n\nPast performance in the same broad domain\nMaking more predictions on the same question\nDeliberation time\nCollaboration on teams\nIntelligence\nDomain expertise\nHaving taken a one-hour training module on these topics\n‘Cognitive reflection’ test scores\n‘Active open-mindedness’ \nAggregation of individual judgments\nUse of precise probabilistic predictions\nUse of ‘the outside view’\n‘Fermi-izing’\n‘Bayesian reasoning’\nPractice\n\n\n\nDetails\n1. 1. Process\nThe Good Judgment Project (GJP) was the winning team in IARPA’s 2011-2015 forecasting tournament. In the tournament, six teams assigned probabilistic answers to hundreds of questions about geopolitical events months to a year in the future. Each competing team used a different method for coming up with their guesses, so the tournament helps us to evaluate different forecasting methods.\nThe GJP team, led by Philip Tetlock and Barbara Mellers, gathered thousands of online volunteers and had them answer the tournament questions. They then made their official forecasts by aggregating these answers. In the process, the team collected data about the patterns of performance in their volunteers, and experimented with aggregation methods and improvement interventions. For example, they ran an RCT to test the effect of a short training program on forecasting accuracy. They especially focused on identifying and making use of the most successful two percent of forecasters, dubbed ‘superforecasters’.\nTetlock’s book Superforecasting describes this process and Tetlock’s resulting understanding of how to forecast well.\n \n1.2. Correlates of successful forecasting\n1.2.1. Past performance\nRoughly 70% of the superforecasters maintained their status from one year to the next 1. Across all the forecasters, the correlation between performance in one year and performance in the next year was 0.65 2. These high correlations are particularly impressive because the forecasters were online volunteers; presumably substantial variance year-to-year came from forecasters throttling down their engagement due to fatigue or changing life circumstances 3.\n \n1.2.2. 
Behavioral and dispositional variables\nTable 2  depicts the correlations between measured variables amongst GJP’s volunteers in the first two years of the tournament 4.  Each is described in more detail below.\n\nThe first column shows the relationship between each variable and standardized Brier score, which is a measure of inaccuracy: higher Brier scores mean less accuracy, so negative correlations are good. “Ravens” is an IQ test; “Del time” is deliberation time, and “teams” is whether or not the forecaster was assigned to a team. “Actively open-minded thinking” is an attempt to measure “the tendency to evaluate arguments and evidence without undue bias from one’s own prior beliefs—and with recognition of the fallibility of one’s judgment.” 5\nThe authors conducted various statistical analyses to explore the relationships between these variables. They computed a structural equation model to predict a forecaster’s accuracy:\n\nYellow ovals are latent dispositional variables, yellow rectangles are observed dispositional variables, pink rectangles are experimentally manipulated situational variables, and green rectangles are observed behavioral variables. This model has a multiple correlation of 0.64.6\nAs these data indicate, domain knowledge, intelligence, active open-mindedness, and working in teams each contribute substantially to accuracy. We can also conclude that effort helps, because deliberation time and number of predictions made per question (“belief updating”) both improved accuracy. Finally, training also helps. This is especially surprising because the training module lasted only an hour and its effects persisted for at least a year. The module included content about probabilistic reasoning, using the outside view, avoiding biases, and more.\n \n1.3. Aggregation algorithms\nGJP made their official predictions by aggregating and extremizing the predictions of their volunteers. The aggregation algorithm was elitist, meaning that it weighted more heavily people who were better on various metrics. 7 The extremizing step pushes the aggregated judgment closer to 1 or 0, to make it more confident. The degree to which they extremize depends on how diverse and sophisticated the pool of forecasters is. 8 Whether extremizing is a good idea is still controversial.  9 \nGJP beat all of the other teams. They consistently beat the control group—which was a forecast made by averaging ordinary forecasters—by more than 60%. 10 They  also beat a prediction market inside the intelligence community—populated by professional analysts with access to classified information—by 25-30%. 11\nThat said, individual superforecasters did almost as well, so the elitism of the algorithm may account for a lot of its success.12\n \n1.4. Outside View\nThe forecasters who received training were asked to record, for each prediction, which parts of the training they used to make it. Some parts of the training—e.g. “Post-mortem analysis”—were correlated with inaccuracy, but others—most notably “Comparison classes”—were correlated with accuracy. 13  ‘Comparison classes’ is another term for reference-class forecasting, also known as ‘the outside view’. It is the method of assigning a probability by straightforward extrapolation from similar past situations and their outcomes. \n \n1.5. Tetlock’s “Portrait of the modal superforecaster”\nThis subsection and those that follow will lay out some more qualitative results, things that Tetlock recommends on the basis of his research and interviews with superforecasters. 
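As an aside before the more qualitative material below, the aggregate-then-extremize step described in section 1.3 can be sketched in a few lines. The weights and the extremizing transform here are illustrative assumptions rather than GJP's published algorithm, which weighted forecasters by track record and tuned the degree of extremizing to the diversity of the pool.

```python
# Minimal sketch of "aggregate, then extremize" (illustrative, not GJP's actual method).
def aggregate_and_extremize(probs, weights, a=2.0):
    """Weighted pool of probability forecasts, pushed toward 0 or 1."""
    pooled = sum(p * w for p, w in zip(probs, weights)) / sum(weights)
    odds = (pooled / (1 - pooled)) ** a  # one common transform: raise the odds to a power a > 1
    return odds / (1 + odds)

# Three forecasters say 60%, 70%, 65%; the strongest track record gets the largest weight.
print(round(aggregate_and_extremize([0.60, 0.70, 0.65], weights=[1, 3, 1]), 2))  # ~0.80
```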
Here is Tetlock’s “portrait of the modal superforecaster:” 14\nPhilosophic outlook:\n\nCautious: Nothing is certain.\nHumble: Reality is infinitely complex.\nNondeterministic: Whatever happens is not meant to be and does not have to happen.\n\nAbilities & thinking styles:\n\nActively open-minded: Beliefs are hypotheses to be tested, not treasures to be protected.\nIntelligent and knowledgeable, with a “Need for Cognition”: Intellectually curious, enjoy puzzles and mental challenges.\nReflective: Introspective and self-critical\nNumerate: Comfortable with numbers\n\nMethods of forecasting:\n\nPragmatic: Not wedded to any idea or agenda\nAnalytical: Capable of stepping back from the tip-of-your-nose perspective and considering other views\nDragonfly-eyed: Value diverse views and synthesize them into their own\nProbabilistic: Judge using many grades of maybe\nThoughtful updaters: When facts change, they change their minds\nGood intuitive psychologists: Aware of the value of checking thinking for cognitive and emotional biases 15\n\nWork ethic:\n\nGrowth mindset: Believe it’s possible to get better\nGrit: Determined to keep at it however long it takes\n\n \n1.6. Tetlock’s “Ten Commandments for Aspiring Superforecasters:”\nThis advice is given at the end of the book, and may make less sense to someone who hasn’t read the book. A full transcript of these commandments can be found here; this is a summary:\n(1) Triage: Don’t waste time on questions that are “clocklike” where a rule of thumb can get you pretty close to the correct answer, or “cloudlike” where even fancy models can’t beat a dart-throwing chimp. \n(2) Break seemingly intractable problems into tractable sub-problems: This is how Fermi estimation works. One related piece of advice is “be wary of accidentally substituting an easy question for a hard one,” e.g. substituting “Would Israel be willing to assassinate Yasser Arafat?” for “Will at least one of the tests for polonium in Arafat’s body turn up positive?” \n(3) Strike the right balance between inside and outside views: In particular, first anchor with the outside view and then adjust using the inside view.\n(4) Strike the right balance between under- and overreacting to evidence: Usually do many small updates, but occasionally do big updates when the situation calls for it. Remember to think about P(E|H)/P(E|~H); remember to avoid the base-rate fallacy. “Superforecasters aren’t perfect Bayesian predictors but they are much better than most of us.” 16\n(5) Look for the clashing causal forces at work in each problem: This is the “dragonfly eye perspective,” which is where you attempt to do a sort of mental wisdom of the crowds: Have tons of different causal models and aggregate their judgments. Use “Devil’s advocate” reasoning. If you think that P, try hard to convince yourself that not-P. You should find yourself saying “On the one hand… on the other hand… on the third hand…” a lot.\n(6) Strive to distinguish as many degrees of doubt as the problem permits but no more.\n(7) Strike the right balance between under- and overconfidence, between prudence and decisiveness.\n(8) Look for the errors behind your mistakes but beware of rearview-mirror hindsight biases.\n(9) Bring out the best in others and let others bring out the best in you. The book spent a whole chapter on this, using the Wehrmacht as an extended case study on good team organization. 
One pervasive guiding principle is “Don’t tell people how to do things; tell them what you want accomplished, and they’ll surprise you with their ingenuity in doing it.” The other pervasive guiding principle is “Cultivate a culture in which people—even subordinates—are encouraged to dissent and give counterarguments.” 17\n(10) Master the error-balancing bicycle: This one should have been called practice, practice, practice. Tetlock says that reading the news and generating probabilities isn’t enough; you need to actually score your predictions so that you know how wrong you were.\n(11) Don’t treat commandments as commandments: Tetlock’s point here is simply that you should use your judgment about whether to follow a commandment or not; sometimes they should be overridden.\n \n1.7. Recipe for Making Predictions\nTetlock describes how superforecasters go about making their predictions. 18 Here is an attempt at a summary:\n\nSometimes a question can be answered more rigorously if it is first “Fermi-ized,” i.e. broken down into sub-questions for which more rigorous methods can be applied. \nNext, use the outside view on the sub-questions (and/or the main question, if possible). You may then adjust your estimates using other considerations (‘the inside view’), but do this cautiously.\nSeek out other perspectives, both on the sub-questions and on how to Fermi-ize the main question. You can also generate other perspectives yourself.\nRepeat steps 1 – 3 until you hit diminishing returns.\nYour final prediction should be based on an aggregation of various models, reference classes, other experts, etc.\n\n \n1.8. Bayesian reasoning & precise probabilistic forecasts\nHumans normally express uncertainty with terms like “maybe” and “almost certainly” and “a significant chance.” Tetlock advocates for thinking and speaking in probabilities instead. He recounts many anecdotes of misunderstandings that might have been avoided this way. For example:\nIn 1961, when the CIA was planning to topple the Castro government by landing a small army of Cuban expatriates at the Bay of Pigs, President John F. Kennedy turned to the military for an unbiased assessment. The Joint Chiefs of Staff concluded that the plan had a “fair chance” of success. The man who wrote the words “fair chance” later said he had in mind odds of 3 to 1 against success. But Kennedy was never told precisely what “fair chance” meant and, not unreasonably, he took it to be a much more positive assessment. 19\nThis example hints at another advantage of probabilistic judgments: It’s harder to weasel out of them afterwards, and therefore easier to keep score. Keeping score is crucial for getting feedback from reality, which is crucial for building up expertise.\nA standard criticism of using probabilities is that they merely conceal uncertainty rather than quantify it—after all, the numbers you pick are themselves guesses. This may be true for people who haven’t practiced much, but it isn’t true for superforecasters, who are impressively well-calibrated and whose accuracy scores decrease when you round their predictions to the nearest 0.05. (EDIT: This should be 0.1)20\nBayesian reasoning is a natural next step once you are thinking and talking probabilities—it is the theoretical ideal in several important ways 21 —and Tetlock’s experience and interviews with superforecasters seems to bear this out. Superforecasters seem to do many small updates, with occasional big updates, just as Bayesianism would predict. 
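To make the arithmetic behind such updates concrete, here is a minimal sketch of the odds-form update referenced in commandment (4); the numbers are invented for illustration.

```python
# Multiply prior odds by the likelihood ratio P(E|H)/P(E|~H), then convert back to a probability.
def bayes_update(prior, likelihood_ratio):
    posterior_odds = (prior / (1 - prior)) * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = bayes_update(0.20, 3.0)   # strong evidence: 20% -> ~43%, a relatively big single update
p = bayes_update(p, 1.2)      # weak evidence: a small nudge to ~47%
print(round(p, 2))
```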
They recommend thinking in the Bayesian way, and often explicitly make Bayesian calculations. They are good at breaking down difficult questions into more manageable parts and chaining the probabilities together properly.\n \n2. Discussion: Relevance to AI Forecasting\n2.1. Limitations\nA major limitation is that the forecasts were mainly on geopolitical events only a few years in the future at most. (Uncertain geopolitical events seem to be somewhat predictable up to two years out but much more difficult to predict five years out.) 22 So evidence from the GJP may not generalize to forecasting other types of events (e.g. technological progress and social  consequences) or events further in the future. \nThat said, the forecasting best practices discovered by this research are not overtly specific to geopolitics or near-term events.  Also, geopolitical questions are diverse and accuracy on some was highly correlated with accuracy on others. 23\nTetlock has ideas for how to handle longer-term, nebulous questions. He calls it “Bayesian Question Clustering.” (Superforecasting 263) The idea is to take the question you really want to answer and look for more precise questions that are evidentially relevant to the question you care about. Tetlock intends to test the effectiveness of this idea in future research.\n \n2.2 Value\nThe benefits of following these best practices (including identifying and aggregating the best forecasters) appear to be substantial: Superforecasters predicting events 300 days in the future were more accurate than regular forecasters predicting events 100 days in the future, and the GJP did even better. 24 If these benefits generalize beyond the short-term and beyond geopolitics—e.g. to long-term technological and societal development—then this research is highly useful to almost everyone. Even if the benefits do not generalize beyond the near-term, these best practices may still be well worth adopting. For example, it would be extremely useful to have 300 days of warning before strategically important AI milestones are reached, rather than 100.\n \n3. Contributions\nResearch, analysis, and writing were done by Daniel Kokotajlo. Katja Grace and Justis Mills contributed feedback and editing. Tegan McCaslin, Carl Shulman, and Jacob Lagerros contributed feedback.\n \n4. Footnotes\n ", "url": "https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project/", "title": "Evidence on good forecasting practices from the Good Judgment Project", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2019-02-07T22:25:19+00:00", "paged_url": "https://aiimpacts.org/feed?paged=12", "authors": ["Daniel Kokotajlo"], "id": "196e91eaae4bc53ff5b73ee050c43135", "summary": []} {"text": "Reinterpreting “AI and Compute”\n\nThis is a guest post by Ben Garfinkel. We revised it slightly, at his request, on February 9, 2019.\nA recent OpenAI blog post, “AI and Compute,” showed that the amount of computing power consumed by the most computationally intensive machine learning projects has been doubling every three months. The post presents this trend as a reason to better prepare for “systems far outside today’s capabilities.” Greg Brockman, the CTO of OpenAI, has also used the trend to argue for the plausibility of “near-term AGI.” Overall, it seems pretty common to interpret the OpenAI data as evidence that we should expect extremely capable systems sooner than we otherwise would. 
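As a quick sanity check on those figures, using only the numbers quoted above: a 300,000-fold increase corresponds to about 18 doublings, and a round six-year window then implies roughly four months per doubling, close to the quoted 3.5-month figure (which was fitted to the full dataset rather than to the two endpoints).

```python
import math

# Rough consistency check of the quoted numbers (illustrative only).
growth = 300_000              # AlphaGo Zero vs. AlexNet, per the OpenAI post
months = 6 * 12               # "six years ago", taken as a round window
doublings = math.log2(growth)
print(round(doublings, 1), round(months / doublings, 1))  # 18.2 doublings, ~4.0 months each
```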
\nHowever, I think it’s important to note that the data can also easily be interpreted in the opposite direction. A more pessimistic interpretation goes like this:\n\nIf we were previously underestimating the rate at which computing power was increasing, this means we were overestimating the returns on it. \nIn addition, if we were previously underestimating the rate at which computing power was increasing, this means that we were overestimating how sustainable its growth is.1\nLet’s suppose, as the original post does, that increasing computing power is currently one of the main drivers of progress in creating more capable systems. Then — barring any major changes to the status quo — it seems like we should expect progress to slow down pretty soon and we should expect to be underwhelmed by how far along we are when the slowdown hits.\n\nI actually think of this more pessimistic interpretation as something like the default one. There are many other scientific fields where R&D spending and other inputs are increasing rapidly, and, so far as I’m aware, these trends are nearly always interpreted as reasons for pessimism and concern about future research progress.2 If we are going to treat the field of artificial intelligence differently, then we should want clearly articulated and compelling reasons for doing so.\n\nThese reasons certainly might exist.3 Still, whatever the case may be, I think we should not be too quick to interpret the OpenAI data as evidence for dramatically more capable systems coming soon.4\nThank you to Danny Hernandez and Ryan Carey for comments on a draft of this post.", "url": "https://aiimpacts.org/reinterpreting-ai-and-compute/", "title": "Reinterpreting “AI and Compute”", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-12-18T22:51:50+00:00", "paged_url": "https://aiimpacts.org/feed?paged=12", "authors": ["Justis Mills"], "id": "d225246e1a990bde27f712b4a84f4528", "summary": []} {"text": "Time for AI to cross the human performance range in diabetic retinopathy\n\nIn diabetic retinopathy, automated systems started out just below expert human level performance, and took around ten years to reach expert human level performance.\nDetails\nDiabetic retinopathy is a complication of diabetes in which the back of the eye is damaged by high blood sugar levels.1 It is the most common cause of blindness among working-age adults.2 The disease is diagnosed by examining images of the back of the eye. The gold standard used for diabetic retinopathy diagnosis is typically some sort of pooling mechanism over several expert opinions. Thus, in the papers below, each time expert sensitivity/specificity (Se/Sp) is considered, it is the Se/Sp of individual experts graded against aggregate expert agreement. \nAs a rough benchmark for expert-level performance we’ll take the average Se/Sp of ophthalmologists from a few studies. Based on Google Brain’s work (detailed below), this paper 3, and this paper 4 , the average specificity of 14 opthamologists, which indicates expert human-level performance, is 95% and the average sensitivity is 82%.\nAs far as we can tell, 1996 is when the first algorithm automatically detecting diabetic retinopathy was developed. When compared to opthamologists’ ratings, the algorithm achieved 88.4% sensitivity and 83.5% specificity. \nIn late 2016 Google algorithms were on par with eight opthamologist diagnoses of diabetic retinopathy. See Figure 1.5 The high-sensitivity operating point (labelled on the graph) achieved 97.5/93.4 Se/Sp.   
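For readers less familiar with these metrics, here is how sensitivity and specificity are computed from a confusion matrix; the counts below are invented purely to match the operating point just quoted.

```python
# Sensitivity: share of diseased eyes the system flags; specificity: share of healthy eyes it clears.
def sensitivity_specificity(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 195 of 200 diseased images flagged, 934 of 1000 healthy images cleared.
se, sp = sensitivity_specificity(tp=195, fn=5, tn=934, fp=66)
print(round(se, 3), round(sp, 3))  # 0.975 0.934
```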
\nFigure 1: Performance comparison of a late 2016 Google algorithm, and eight opthalmologists, from here. The black curve represents the algorithm and the eight colored dots are opthamologists.\nMany other papers were published in between 1996 and 2016. However, none of them achieved better than expert human-level performance on both specificity and sensitivity. For instance 86/77 Se/Sp was achieved in 2007, 97/59 in 2013, and 94/72 by another team in 2016. 6\nThus it took about ten years to go from just below expert human level performance to slightly superhuman performance.\nContributions\nAysja Johnson researched and wrote this page. Justis Mills and Katja Grace contributed feedback.\nFootnotes", "url": "https://aiimpacts.org/diabetic-retinopathy-as-a-case-study-in-time-for-ai-to-cross-the-range-of-human-performance/", "title": "Time for AI to cross the human performance range in diabetic retinopathy", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-11-21T22:34:37+00:00", "paged_url": "https://aiimpacts.org/feed?paged=12", "authors": ["Aysja Johnson"], "id": "f9e80c04fe898eeae1738ec04936e402", "summary": []} {"text": "AGI-11 survey\n\nThe AGI-11 survey was a survey of 60 participants at the AGI-11 conference. In it:\n\nNearly half of respondents believed that AGI would appear before 2030.\nNearly 90% of respondents believed that AGI would appear before 2100.\nAbout 85% of respondents believed that AGI would be beneficial for humankind.\n\nDetails\nJames Barrat and Ben Goertzel surveyed participants at the AGI-11 conference on AGI timelines. The survey had two questions, administered over email after the conference. The results were fairly similar to those from the more complex AGI-09 survey, for which Ben Goertzel was also an author.\nSixty people total responded to the survey, out of over 200 conference registrations. Nobody skipped either of the two questions.\nThe data in this post, and the results tables, are taken from the write up on this survey in h+ magazine.\nQuestion One\nQuestion one was: “I believe that AGI (however I define it) will be effectively implemented in the following timeframe:”\nThe answer choices were:\n\nBefore 2030\n2030-2049\n2050-2099\nAfter 2100\nNever\n\nThe results were:\n\nQuestion Two\nQuestion two was: “I believe that AGI (however I define it) will be a net positive event for humankind:”\nThe answer choices were:\n\nTrue\nFalse\n\nThe results were:\n\nComments\nAs well as the two survey questions, there was also a form where respondents could submit comments. These are recorded in the h+ magazine write up. Many of the comments expressed concern with the survey structure, suggesting that there could have been more or different options.\nContributions\nJustis Mills researched for and wrote this post. Katja Grace provided feedback on style and content. 
Thanks to James Barrat for linking us to the h+ magazine write up when we reached out to him about his survey.", "url": "https://aiimpacts.org/agi-11-survey/", "title": "AGI-11 survey", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-11-10T23:28:58+00:00", "paged_url": "https://aiimpacts.org/feed?paged=12", "authors": ["Justis Mills"], "id": "4077ed4ea25b39623a585c27cc92e1c2", "summary": []} {"text": "On the (in)applicability of corporate rights cases to digital minds\n\nThis is a guest cross-post by Cullen O’Keefe, 28 September 2018\nHigh-Level Takeaway\nThe extension of rights to corporations likely does not provide useful analogy to potential extension of rights to digital minds.\nIntroduction\nExamining how law can protect the welfare of possible future digital minds is part of my research agenda. I expect that study of historical efforts to secure legal protections (“rights”) for previously unprotected classes (e.g., formerly enslaved persons, nonhuman animals, young children) will be crucial to this line of research.\nI recently read We the Corporations: How American Businesses Won Their Civil Rights by UCLA constitutional law professor Adam Winkler. The book chronicles how business corporations gradually won various constitutional and statutory civil rights, culminating in the (in)famous recent Citizens United and Hobby Lobby cases.\nA key insight from Winkler’s book is that, contrary to some popular portrayals of corporate rights cases, these cases usually do not rely primarily on corporate personhood: “While the Supreme Court has on occasion said that corporations are people, the justices have more often relied upon a very different conception of the corporation, one that views it as an association capable of asserting the rights of its members.” Id. at xx. The Court, in other words, “pierced the corporate veil” to give corporations rights properly belonging to its members. See id. at 54–55.\nThe Supreme Court’s opinion in Citizens United is illustrative. In determining that the First Amendment’s free speech protections applied to corporations, the Court wrote: “[Under the challenged campaign finance statute,] certain disfavored associations of citizens—those that have taken on the corporate form—are penalized for engaging in [otherwise-protected] political speech.” 558 U.S. at 356. The Court held that this was impermissible: the shareholders’ right to free speech imbued the corporation—which it viewed as merely an association of rights-bearing shareholders—with those same rights. See id. at 365.\nEarlier cases that, Winkler argues, exhibit this same pattern include:\n\nBank of U.S. v. Deveaux, holding that federal jurisdiction over corporations depends on jurisdiction over the individuals comprising the corporation;\nTrustees of Dartmouth Coll. v. Woodward, holding that corporate charters gave trustees private rights therein, which were protected against state alteration by the Constitution;\nNAACP v. Alabama ex rel. Patterson, holding that non-profit corporation could assert First Amendment rights of its members;\nBains LLC v. Arco Prod. Co., Div. of Atl. Richfield Co., holding that a corporation had standing to bring racial discrimination claim for racial discrimination against its employees.\n\nImplications\nI believe that this understanding of the corporate civil rights “struggle” has small-but-nontrivial implications for a potential future strategy to secure legal protections for digital minds. 
Specifically, I think Winkler’s thesis suggests that the extension of rights to corporations is not a useful historical or legal analogy for the potential extension of rights to digital minds. This is because Winkler’s book demonstrates that corporations gained rights primarily because their constitutive members (i.e., shareholders) already had rights. In the case of digital minds generally, I see no obvious analogy to shareholders: digital minds as such are not mere extensions or associations of entities already bearing rights.\nMore concretely, this suggests that securing legal personhood for digital minds for instrumental reasons is not likely, on its own, to increase the likelihood of legal protections for them.\nCarrick Flynn suggested to me (and I now agree) that nonhuman animal protections probably provide the best analog for future digital mind protections. To the extent that it rules out another possible method of approaching the question, this post supports that thesis.\nThis work was financially supported by the Berkeley Existential Risk Initiative.", "url": "https://aiimpacts.org/on-the-inapplicability-of-corporate-rights-cases-to-digital-minds/", "title": "On the (in)applicability of corporate rights cases to digital minds", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-09-28T22:27:26+00:00", "paged_url": "https://aiimpacts.org/feed?paged=12", "authors": ["Katja Grace"], "id": "1c2e58e0454575151cc40a1a29ceedaa", "summary": []} {"text": "Hardware overhang\n\nHardware overhang refers to a situation where large quantities of computing hardware can be diverted to running powerful AI systems as soon as the software is developed.\nDetails\nDefinition\nIn the context of AI forecasting, hardware overhang refers to a situation where enough computing hardware to run many powerful AI systems already exists by the time the software to run such systems is developed. If such hardware is repurposed to AI, this would mean that as soon as one powerful AI system exists, probably a large number of them do. This might amplify the impact of the arrival of human-level AI.\nThe alternative to a hardware overhang is for software sufficient for powerful AI to be developed before the hardware to run it has become cheap. In that case, we might instead expect to see the first powerful AI be built when it is very expensive, and therefore continues to be rare.", "url": "https://aiimpacts.org/hardware-overhang/", "title": "Hardware overhang", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-07-16T16:37:30+00:00", "paged_url": "https://aiimpacts.org/feed?paged=13", "authors": ["Katja Grace"], "id": "9032c68eb3246b3680058ff527b9b559", "summary": []} {"text": "Historic trends in structure heights\n\nTrends for tallest ever structure heights, tallest ever freestanding structure heights, tallest existing freestanding structure heights, and tallest ever building heights have each seen 5-8 discontinuities of more than ten years. 
These are:\nDjoser and Meidum pyramids (~2600BC, >1000 year discontinuities in all structure trends)\nThree cathedrals that were shorter than the all-time record (Beauvais Cathedral in 1569, St Nikolai in 1874, and Rouen Cathedral in 1876, all >100 year discontinuities in current freestanding structure trend)\nWashington Monument (1884, >100 year discontinuity in both tallest ever structure trends, but not a notable discontinuity in existing structure trend)\nEiffel Tower (1889, ~10,000 year discontinuity in both tallest ever structure trends, 54 year discontinuity in existing structure trend)\nTwo early skyscrapers: the Singer Building and the Metropolitan Life Tower (1908 and 1909, each >300 year discontinuities in building height only)\nEmpire State Building (1931, 19 years in all structure trends, 10 years in buildings trend)\nKVLY-TV mast (1963, 20 year discontinuity in tallest ever structure trend)\nTaipei 101 (2004, 13 year discontinuity in building height only)\nBurj Khalifa (2009, ~30 year discontinuity in both freestanding structure trends, 90 year discontinuity in building height trend)\nDetails\nBackground\nOver human history, the tallest man-made structures have included mounds, pyramids, churches, towers, a monument, skyscrapers, and radio and TV masts.\nHeight records often distinguish between structures and buildings, where a building is ‘regularly inhabited or occupied’ according to Wikipedia, or is ‘designed for residential, business or manufacturing purposes’ and ‘has floors’ according to the Council on Tall Buildings and Urban Habitat.1 Figure 1a is an illustration from Wikipedia showing the historic relationship between the heights of buildings and structures.2\n\nFigure 1a: Recent history of tall structures by type.3\nHeight records also distinguish ‘freestanding’ structures from other structures. According to Wikipedia, “To be freestanding a structure must not be supported by guy wires, the sea or other types of support. It therefore does not include guyed masts, partially guyed towers and drilling platforms but does include towers, skyscrapers (pinnacle height) and chimneys.”4 Definitions vary; for instance, Guinness World Records apparently treats underwater structures as ‘freestanding’.5 We ignore underwater height in general, excluding underwater structures from ‘freestanding’ records and ‘all structures’ records. The heights of buildings in particular are commonly measured in terms of ‘architectural height’ or ‘height to tip’, which both start at the lowest, significant, open-air, pedestrian entrance, but differ in that ‘to tip’ includes ‘functional-technical equipment’ like antennae, signage or flag poles, while architectural height does not.6 Our understanding is that ‘pinnacle height’ is the same as ‘height to tip’. There are also several less common measures in use.\nHeight records must also distinguish between the tallest structure standing at a given time, and the tallest structure to have ever existed, at that time. The tallest building or structure at a particular time is sometimes not the tallest ever, when the tallest is damaged without anything taller being built. For instance, the tallest structures in the 1700s were shorter than earlier records, because those were church spires which became damaged without replacement (see Figure 1b).\n\nFigure 1b: An illustration of structure heights over time by location from Wikipedia. 
7\nTrends\nWe collected data for several combinations of measurement possibilities mentioned:\nTallest ever structures on land (i.e. freestanding or not, but not underwater), measured to tip\nTallest ever freestanding structures, measured to tip\nTallest existing freestanding structures, measured to tip\nTallest ever buildings, measured to architectural height\nTallest ever structure heights\nData\nWe collected height records from numerous Wikipedia lists of tall buildings and structures.8 We have not extensively verified these sources, though we made minor adjustments and additions from elsewhere online where sources were inconsistent or records incomplete. Our data is in this spreadsheet, sheet ‘Structures collection’. Figure 2 shows this data.\nFigure 2: Our collection of height records for man-made above ground structures, from a variety of online sources (excluding two earlier records). Note that some records are repeated in slightly different versions or are for the same structure being extended, or becoming a record again after the destruction of another structure. The collection is constructed to contain the tallest structures, but the subset of non-tallest structures included is arbitrary.\nWe constructed a timeline of tallest ever structures by pinnacle height from the tallest ever records in this dataset (see sheet ‘Structures (all time, pinnacle)’ in the spreadsheet). This is shown in figures 3a and 3b below.\nFigure 3a: Recent history of tallest structures ever built on land (not necessarily freestanding). The record may be taller than any structure standing at a given time.\nFigure 3b: Longer term history of tallest structures ever built on land (not necessarily freestanding), on a log scale. The record may be taller than any structure standing at a given time.\nDiscontinuity measurement\nWe treat this data as exponential initially followed by three linear trends. Using these trends as the previous rate to compare to, we calculated for each record how many years ahead of the trend it was.9 The series contained six unambiguous greater-than-ten-year discontinuities, shown in Table 1 below.\nThe Bent Pyramid appears to represent a 12 year discontinuity, but we ignore this because its date of construction seems uncertain relative to the small discontinuity.10\nWhile our early records are presumably incomplete, we do not avoid measuring early discontinuities for this reason, because the large discontinuities that we find before the 19th Century seem unlikely to depend substantially on the exact set of earlier records.\nTable 1: discontinuities in tallest ever structure heights\nYear | Height (m) | Discontinuity (years) | Structure\n2650 BC | 62.5 | ~9000 | Pyramid of Djoser\n2610 BC | 91.65 | ~1000 | Meidum Pyramid\n1884 | 169.3 | 106 | Washington Monument\n1889 | 300 | ~10,000 | Eiffel Tower\n1931 | 381 | 19 | Empire State Building\n1963 | 628.8 | 20 | KVLY-TV mast\nA number of other potentially relevant metrics are tabulated here.\nTallest ever freestanding structure heights\nData\nThis data is another subset of the ‘structures collection’ described above, this time including only records for ‘freestanding’ structures. This excludes structures supported by guy ropes, such as radio masts. Guyed masts were the tallest structures on land overall between 1954 and 2008, so this dataset differs from the ‘tallest ever structure heights’ dataset above between those years.\nThis dataset can be found in this spreadsheet, sheet ‘Freestanding structures (all time, pinnacle)’. 
Figures 4-5 below illustrate it.\nFigure 4: Recent history of tallest freestanding structures ever built.\nFigure 5: Longer term history of tallest freestanding structures ever built, on a log scale.\nDiscontinuity measurement\nWe treat this data as exponential initially followed by three linear trends. Using these trends as the previous rate to compare to, we calculated for each record how many years ahead of the trend it was.11 The series contained six unambiguous greater-than-ten-year discontinuities. The first five are the same as those in the previous dataset, since the series do not diverge until later (see Tallest ever structure heights section above for further details). The last discontinuity is a 32 year jump in 2009 from the Burj Khalifa.\nWe tabulated a number of other potentially relevant metrics here.\nTallest existing freestanding structure heights\nWe constructed a dataset of tallest freestanding structures over time largely from Wikipedia’s Timeline of world’s tallest freestanding structures12, with some modifications. This is available in our spreadsheet, sheet ‘Freestanding structures (current, pinnacle)’, and is shown in Figures 6-7 below.\nFigure 6: Recent history of tallest freestanding structures standing. New records are sometimes shorter than old records.\nFigure 7: Longer term history of tallest freestanding structures standing, on a log scale. New records are sometimes shorter than old records.\nDiscontinuity measurement\nWe treat this data as exponential initially, followed by four linear trends. Using these trends as the ‘previous rate’ to compare to,13 the data contained eight unambiguous greater than ten year discontinuities, shown in Table 2 below.14\nThis series differs from that of all-time tallest freestanding structures above by the insertion of a series of records between Lincoln Cathedral in 1311 and the Washington Monument in 1889. This change made the Washington Monument unexceptional rather than a 100 year discontinuity, and the Eiffel Tower a fifty-year discontinuity rather than a ten-thousand year one. Later discontinuities from the Empire State Building and Burj Khalifa are very similar.\nTable 2: discontinuities in tallest existing freestanding structures\nYear | Height (m) | Discontinuity (years) | Structure\n2650 BC | 62.5 | ~9000 | Pyramid of Djoser\n2610 BC | 91.65 | ~1000 | Meidum Pyramid\n1569 | 153 | 138 | Beauvais Cathedral\n1874 | 147.3 | 224 | St Nikolai\n1876 | 151 | 307 | Rouen Cathedral\n1889 | 300 | 54 | Eiffel Tower\n1931 | 381 | 19 | Empire State Building\n2009 | 829.8 | 35 | Burj Khalifa\nWe have tabulated a number of other potentially relevant metrics here.15\nTallest ever building heights\nData\nWe collected data on the tallest ever buildings from Wikipedia’s History of the world’s tallest buildings,16 and added it to this spreadsheet (sheet ‘Buildings (all time, architectural)’). We have not thoroughly verified it, but have made minor modifications (noted in the spreadsheet). 
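As an aside, the 'years ahead of trend' discontinuity measure used throughout this page can be sketched roughly as follows; the function and the numbers are illustrative assumptions, not the actual spreadsheet calculation, which fits the exponential and linear trends described above.

```python
# Rough sketch: how many years early did a new record arrive, relative to a linear prior trend?
def discontinuity_years(record_year, record_height, trend_year, trend_height, metres_per_year):
    predicted_year = trend_year + (record_height - trend_height) / metres_per_year
    return predicted_year - record_year

# Hypothetical prior trend: 170 m in 1884, growing 0.1 m/year. A 300 m structure in 1889
# would then be about 1,295 years ahead of that trend.
print(round(discontinuity_years(1889, 300, 1884, 170, 0.1)))
```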
Figure 8 shows this data.\nFigure 8: Height of tallest buildings ever built, measured using ‘architectural height’, which excludes some additions such as antennae.\nFigure 9: Close up of Figure 8\nDiscontinuity measurement\nWe treated this data as an exponential trend followed by a linear trend.17 Compared to previous rates within these trends, tallest buildings over time contained five greater than ten year discontinuities, shown in Table 3 below.18\nTable 3: discontinuities in tallest ever building heights\nYear | Height (m) | Discontinuity (years) | Building\n1908 | 186.57 | 383 | Singer Building\n1909 | 213.36 | 320 | Metropolitan Life Tower\n1931 | 381 | 10 | Empire State Building\n2004 | 509.2 | 13 | Taipei 101\n2010 | 828 | 90 | Burj Khalifa\nWe have tabulated a number of other potentially relevant metrics here.19\nFigure 10: Burj Khalifa, current record holder for every listed metric, and discontinuously tall freestanding structure and building.\nNotes", "url": "https://aiimpacts.org/discontinuity-from-the-burj-khalifa/", "title": "Historic trends in structure heights", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-07-12T17:15:02+00:00", "paged_url": "https://aiimpacts.org/feed?paged=13", "authors": ["Katja Grace"], "id": "a4e05382ef0931c2332e09646aad271c", "summary": []} {"text": "Interpreting AI compute trends\n\nThis is a guest post by Ryan Carey, 10 July 2018.\nOver the last few years, AI experiments have used much more computation than previously. But just last month, an investigation by OpenAI made some initial estimates of just how fast this growth has been. Comparing AlphaGo Zero to AlexNet, they found that the largest experiment now is 300,000-fold larger than the largest experiment six years ago. In the intervening time, the largest experiment in each year has been growing exponentially, with a doubling time of 3.5 months.\nThe rate of growth of experiments according to this AI-Compute trend is astoundingly fast, and this deserves some analysis. In this piece, I explore two issues. The first is that if experiments keep growing so fast, they will quickly become unaffordable, and so the trend will have to draw to a close. Unless the economy is drastically reshaped, this trend can be sustained for at most 3.5-10 years, depending on spending levels and how the cost of compute evolves over time. The second issue is that if this trend is sustained for even 3.5 more years, the amount of compute used in an AI experiment will have passed some interesting milestones. Specifically, the compute used by an experiment will have passed the amount required to simulate, using spiking neurons, a human mind thinking for eighteen years. Very roughly speaking, we could say that the trend would surpass the level required to match the intelligence of an adult human, given an equally efficient algorithm. In sections (1) and (2), I will explore these issues in turn, and then in section (3), I will discuss the limitations of this analysis and weigh how this work might bear on AGI forecasts.\n1. 
How long can the AI-Compute trend be sustained?\nTo figure out how long the AI-Compute trend can be economically sustained, we need to know three things: the rate of growth of the cost of experiments, the cost of current experiments, and the maximum amount that can be spent on an experiment in the future.\nThe size of the largest experiments is increasing with a doubling time of 3.5 months, (about an order of magnitude per year)1, while the cost per unit of computation is decreasing by an order of magnitude every 4-12 years (the long-run trend has improved costs by 10x every 4 years, whereas recent trends have improved costs by 10x every 12 years)2. So the cost of the largest experiments is increasing by an order of magnitude every 1.1 – 1.4 years.3\nThe largest current experiment, AlphaGo Zero, probably cost about $10M.4\nThe largest that experiments can get depends who is performing them. The richest actor is probably the US government. Previously, the US spent 1% of annual GDP5 on the Manhattan Project, and ~0.5% of annual GDP on NASA during the Apollo program.6 So let’s suppose they could similarly spend at most 1% of GDP, or $200B, on one AI experiment. Given the growth of one order of magnitude per 1.1-1.4 years, and the initial experiment size of $10M, the AI-Compute trend predicts that we would see a $200B experiment in 5-6 years.7 So given a broadly similar economic situation to the present one, that would have to mark an end to the AI-Compute trend.\nWe can also consider how long the trend can last if government is not involved. Due to their smaller size, economic barriers hit a little sooner for private actors. The largest among these are tech companies: Amazon and Google have current research and development budgets of about ~20B/yr each8, so we can suppose that the largest individual experiment outside of government is $20B. Then the private sector can keep pace with the AI-Compute trend for around ¾ as long as government, or ~3.5-4.5 years.910\nOn the other hand, the development of specialized hardware could cheapen computation, and thereby cause the trend to be sustainable for a longer period. If some new hardware cheapened compute by 1000x over and above price-performance Moore’s Law, then the economic barriers bite a little later– after an extra 3-4 years.11\nIn order for the AI-Compute trend to be maintained for a really long time (more than about a decade), economic output would have to start growing by an order of magnitude or more per year. This is a really extreme scenario, but the main thing that would make it possible would presumably be some massive economic gains from some extremely powerful AI technology, that would also serve to justify the massive ongoing AI investment.\nOf course, it’s important to be clear that these figures are upper bounds, and they do not preclude the possibility that the AI-Compute trend may halt sooner (e.g. if AI research proves less economically useful than expected) either in a sudden or more gradual fashion.\nSo we have shown one kind of conclusion from a rapid trend — that it cannot continue for very long, specifically, beyond 3.5-10 years.\n2. When will the AI-Compute trend pass potentially AGI-relevant milestones?\nThe second conclusion that we can draw is that if the AI-Compute trend continues at its current rapid pace, it will pass some interesting milestones. 
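(As an aside, the affordability arithmetic from section 1 can be reproduced in a few lines using only the figures quoted there; the post's own footnoted calculations may round slightly differently.)

```python
import math

# Experiment cost grows ~10x every 1.1-1.4 years, starting from roughly $10M (AlphaGo Zero).
def years_until(budget, start_cost=10e6, years_per_10x=(1.1, 1.4)):
    return [round(y * math.log10(budget / start_cost), 1) for y in years_per_10x]

print(years_until(200e9))  # ~1% of US GDP: roughly [4.7, 6.0] years
print(years_until(20e9))   # a large tech R&D budget: roughly [3.6, 4.6] years
```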
If the AI-Compute trend continues for 3.5-10 more years, then the size of the largest experiment is projected to reach 10^7 – 5×10^13 Petaflop/s-days, and so the question is which milestones arrive below that level.12 Which milestones might allow the development of AGI is a controversial topic, but three candidates are:\n\nThe amount of compute required to simulate a human brain for the duration of a human childhood\nThe amount of compute required to simulate a human brain to play the number of Go games AlphaGo Zero required to become superhuman\nThe amount of compute required to simulate the evolution of the human brain\n\nHuman-childhood milestone\nOne natural guess for the amount of computation required to create artificial intelligence is the amount of computation used by the human brain. Suppose an AI had (compared to a human):\n\na similarly efficient algorithm for learning to perform diverse tasks (with respect to both compute and data),\nsimilar knowledge built in to its architecture,\nsimilar data, and\nenough computation to simulate a human brain running for eighteen years, at sufficient resolution to capture the intellectual performance of that brain.\n\nThen, this AI should be able to learn to solve a similarly wide range of problems as an eighteen year-old can solve.13\nThere is a range of estimates of how many floating point operations per second are required to simulate a human brain in real time. Those collected by AI Impacts have a median of 10^18 FLOPS (corresponding roughly to a whole-brain simulation using Hodgkin-Huxley neurons14), and range from 3×10^13 FLOPS (Moravec’s estimate) to 1×10^25 FLOPS (simulating the metabolome). Running such simulations for eighteen years would correspond to a median of 7 million Petaflop/s-days (range 200 – 7×10^13 Petaflop/s-days).15\nSo for the lowest estimates, such as the Moravec estimate, we have already reached enough compute to pass the human-childhood milestone. For the median estimate, and the Hodgkin-Huxley estimates, we will have reached the milestone within 3.5 years. For the metabolome estimates, the required amount of compute cannot be reached within the coming ten year window before the AI-Compute trend is halted by economic barriers. After the AI-Compute trend is halted, it’s worth noting that Moore’s Law could come back to the fore, and cause the size of experiments to continue to slowly grow. But on Moore’s Law, milestones like the metabolome estimate are still likely decades away.\nAlphaGo Zero-games milestone\nOne objection to the human-childhood milestone is that AI systems presently are “slower learners” than humans. AlphaGo Zero used 2.5 million Go games to become superhuman16, which, if each game took an hour, would correspond to 300 years of Go games17. We might ask how long it would take to run something as complex as the human brain for 300 years, rather than just eighteen. In order for this milestone to be reached, the trend would have to continue for 14 months longer than the human-childhood milestone18.\nBrain-evolution milestone\nA more conservative milestone is the amount of compute required to simulate all neural evolution. One approach, described by Shulman and Bostrom 2012, is to look at the cost of simulating the evolution of nervous systems. 
This entails simulating 10^25 neurons for one billion years.19 Shulman and Bostrom estimate the cost of simulating a neuron for one second at 1–10^10 floating point operations,20 and so the total cost of simulating evolution is 3×10^21 – 3×10^31 Petaflop/s-days21. This figure would not be reached until far beyond the time when the current AI-Compute trend must end. So the AI-Compute trend does not change the conclusion of Shulman and Bostrom that simulation of brain evolution on Earth is far away — even with a rapid increase in spending, this compute milestone would take many decades of advancement of Moore’s Law to be reached22.\nOverall, we can see that although the brain-evolution milestone is well beyond the AI-Compute trend, the others are not necessarily. For some estimates — especially metabolome estimates — the human-childhood and AlphaGo Zero-games milestones cannot be reached either. But some of the human-childhood and AlphaGo Zero-games milestones will be reached if the AI-Compute trend continues for the next few years.\n3. Discussion and Limitations\nIn light of this analysis, a reasonable question to ask is: for the purpose of predicting AGI, which milestone should we care most about? This is very uncertain, but I would guess that building AGI is easier than the brain-evolution milestone would suggest, but that AGI could arrive either before or after the AlphaGo Zero-games milestone is reached.\nThe first claim is because the brain-evolution milestone assumes that the process of algorithm discovery must be performed by the AI itself. It seems more likely to me that the appropriate algorithm is provided (or mostly provided) by the human designers at no computational cost (or at hardly any cost compared to simulating evolution).\nThe second matter — evaluating the difficulty of AGI relative to the AlphaGo Zero-games milestone — is more complex. One reason for thinking that the AlphaGo Zero-games milestone makes AGI look too easy is that more training examples ought to be required to teach general intelligence than are required to learn the game of Go.23 In order to perform a wider range of tasks, it will be necessary to consider a larger range of dependencies and to learn a more intricate mapping from actions to utilities. This matter could be explored further by comparing the sample efficiency of various solved AI problems and extrapolating the sample efficiency of AGI based on how much more complicated general intelligence seems. However, there are also reasons the AlphaGo Zero-games milestone might make things look too hard. Firstly, AlphaGo Zero does not use any pre-existing knowledge, whereas AGI systems might. If we had looked instead at the original AlphaGo, this would have required an order of magnitude fewer games relative to AlphaGo Zero24, and further efficiency gains might be possible for more general learning tasks. Secondly, there might be one or more orders of magnitude of conservatism built in to the approach of using simulations of the human brain. Simulating the human brain on current hardware may be a rather inefficient way to capture its computing function: that is, the human brain might only be using some fraction of the computation that is needed to simulate it. So it’s hard to judge whether the AlphaGo Zero-games milestone is too late or too soon for AGI.\nThere is another reason to be somewhat more confident that AGI is more than six years away. 
We can simply look at the AI-Compute trend and ask ourselves: is AGI as close to AlphaGo Zero as AlphaGo Zero is to AlexNet? If we think that the difference (in terms of some combination of capabilities, compute, or AI research) between the first pair is larger than the second, then we should think that AGI is more than six years away.\nIn conclusion, we can see that the AI-Compute trend is an extraordinarily fast trend that economic forces (absent large increases in GDP) cannot sustain beyond 3.5-10 more years. Yet the trend is also fast enough that if it is sustained for even a few years from now, it will sweep past some compute milestones that could plausibly correspond to the requirements for AGI, including the amount of compute required to simulate a human brain thinking for eighteen years, using Hodgkin Huxley neurons. However, other milestones will not be reached before economic factors halt the AI-Compute trend. For example, this analysis shows that we will not have enough compute to simulate the evolution of the human brain for (at least) decades.\nThanks Jack Gallagher, Danny Hernandez, Jan Leike, and Carl Shulman for discussions that helped with this post.\n", "url": "https://aiimpacts.org/interpreting-ai-compute-trends/", "title": "Interpreting AI compute trends", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-07-10T22:04:45+00:00", "paged_url": "https://aiimpacts.org/feed?paged=13", "authors": ["Katja Grace"], "id": "19632e255629177370528fa60bbdc6bf", "summary": []} {"text": "Occasional update July 5 2018\n\nBy Katja Grace, 5 July 2018\nBefore I get to substantive points, there has been some confusion over the distinction between blog posts and pages on AI Impacts. To make it clearer, this blog post shall proceed in a way that is silly, to distinguish it from the very serious and authoritative reference pages that comprise the bulk of AI Impacts.\nNow for a picture of a duck, to remind you that this is silly, and also that we are all fragile biological organisms that evolved because apparently that’s what happens if you just leave a bunch of wet mud on a space rock for long enough, alone and vulnerable in a hostile and uncharted world.\n\nAnd now to the exciting facts on the ground, as we try to marginally rectify that situation.\nPeople\nTegan McCaslin is now working at AI Impacts, as far as I can tell about five hundred hours a week. It’s going well, except that the rate at which she sends me extensive, carefully researched articles about neuroanatomy and genetics and such to review is in slight tension with my preferred lifestyle.\nWe also welcome Carl Shulman as occasional consultant on everything, and reviewer of things (especially articles about neuroanatomy and genetics and such…)\nJustis Mills joined us last year, to work on miscellany. He usually does one of those software related things, but in his spare time he has been making illustrative timelines of near-term AI predictions and checking whether everything on AI Impacts isn’t obviously false, and fixing bits of it, and such.\nWe mostly-farewell Michael Wulfsohn—an Australian economist, called to us from the Central Bank of Lesotho by WaitButWhy—who is winding up his assessment of how great avoiding human extinction might be (having already estimated how how much of a bother it might be). He has gone to get a PhD, the better to save the world.\nPlaces\nOur implicit office has moved from a spare room in my house to Tegan’s house. 
This is good, because she has an absurdly nice rug, and an excellent snack drawer, and it is a minor ambition of mine to head an organization which has Bay Area start-up quality snack areas.\nWe have also been trying out co-working with other save-the-world-something-something-AI related folks around Berkeley, which seems promising. We have also been trying out co-working with Oxford, which seems promising, but not as Bay Area convenient as we would like.\nThings we want\nWe want to hire more people. Relatedly, we would like money. We think these things would nicely complement our many brilliant and tractable research ideas and our ambition. We also want to have our own T-shirts, but that is on the back-burner.\nThings we got\n$100,000 from The Open Philanthropy Project, for the next two years.\n$39,000 from another donor to support several specific research projects from our list of promising research projects.\nProjects\nYou can mostly see what we are up to by watching various parts of our front page, so I shan’t go into it all, except to say that I for one am especially enjoying my investigation into reasons to (or not to) expect AI discontinuities. If you too are fascinated by this topic, and want to give especially many pointed comments on it, you can do so on this doc version.\nOur survey became the 16th most discussed journal article of 2017, so that was neat. If I recall, I was at least relatively in favor of not writing a paper about it, so I was probably wrong there. (Probably good job, everyone else!) I suspect this success is related to the journalists who have been writing to me endlessly, and me being invited to give talks, and go to Chile and be on the radio and that kind of thing. Which has all been an unusual experience.\nHow you can get involved\nIf you want to do this kind of work, consider applying for a job with us, or just doing one of these projects anyway, and sending it to us. If you want to chat about this kind of research, or spy on it, or help a tiny bit noncommittally, ask us nicely and we might add you to our fairly open Slack. If you want to help in some other way, we especially welcome money and any good researchers you have hanging around, but are open to other ideas.", "url": "https://aiimpacts.org/occasional-update-july-5-2018/", "title": "Occasional update July 5 2018", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-07-05T15:36:59+00:00", "paged_url": "https://aiimpacts.org/feed?paged=13", "authors": ["Katja Grace"], "id": "b63d4e3a144249eac91f76f53b92f04a", "summary": []} {"text": "Trend in compute used in training for headline AI results\n\nCompute used in the largest AI training runs appears to have roughly doubled every 3.5 months between 2012 and 2018.\nDetails\nAccording to Amodei and Hernandez, on the OpenAI Blog:\n…since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5 month-doubling time (by comparison, Moore’s Law had an 18-month doubling period). Since 2012, this metric has grown by more than 300,000x (an 18-month doubling period would yield only a 12x increase)…\nThey give the following figure, and some of their calculations. 
We have not verified their calculations, or looked for other reports on this issue.\nFigure 1: Originally captioned: The chart shows the total amount of compute, in petaflop/s-days, that was used to train selected results that are relatively well known, used a lot of compute for their time, and gave enough information to estimate the compute used. A petaflop/s-day (pfs-day) consists of performing 10^15 neural net operations per second for one day, or a total of about 10^20 operations. The compute-time product serves as a mental convenience, similar to kW-hr for energy. We don’t measure peak theoretical FLOPS of the hardware but instead try to estimate the number of actual operations performed. We count adds and multiplies as separate operations, we count any add or multiply as a single operation regardless of numerical precision (making “FLOP” a slight misnomer), and we ignore ensemble models. Example calculations that went into this graph are provided in this appendix. Doubling time for line of best fit shown is 3.43 months.", "url": "https://aiimpacts.org/trend-in-compute-used-in-training-for-headline-ai-results/", "title": "Trend in compute used in training for headline AI results", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-05-17T21:33:13+00:00", "paged_url": "https://aiimpacts.org/feed?paged=13", "authors": ["Katja Grace"], "id": "fcb4077412fbf81f4ba82082598e3349", "summary": []} {"text": "The tyranny of the god scenario\n\nBy Michael Wulfsohn, 6 April 2018\nI was convinced. An intelligence explosion would result in the sudden arrival of a superintelligent machine. Its abilities would far exceed those of humans in ways we can’t imagine or counter. It would likely arrive within a few decades, and would wield complete power over humanity. Our species’ most important challenge would be to solve the value alignment problem. The impending singularity would lead either to our salvation, our extinction, or worse.\nIntellectually, I knew that it was not certain that this “god scenario” would come to pass. If asked, I would even have assigned it a relatively low probability, certainly much less than 50%. Nevertheless, it dominated my thinking. Other possibilities felt much less real: that humans might achieve direct control over their superintelligent invention, that reaching human-level intelligence might take hundreds of years, that there might be a slow progression from human-level intelligence to superintelligence, and many others. I paid lip service to these alternatives, but I didn’t want them to be valid, and I didn’t think about them much. My mind would always drift back to the god scenario.\nI don’t know how likely the god scenario really is. With currently available information, nobody can know for sure. But whether or not it’s likely, the idea definitely has powerful intuitive appeal. For example, it led me to change my beliefs about the world more quickly and radically than I ever had before. I doubt that I’m the only one.\nWhy did I find the god scenario so captivating? I like science fiction, and the idea of an intelligence explosion certainly has science-fictional appeal. I was able to relate to the scenario easily, and perhaps better think through the implications. But the transition from science fiction to reality in my mind wasn’t immediate. I remember repeatedly thinking “nahhh, surely this can’t be right!” My mind was trying to put the scenario in its science-fictional place. 
But each time the thought occurred, I remember being surprised at the scenario’s plausibility, and at my inability to rule out any of its key components.\nI also tend to place high value on intelligence itself. I don’t mean that I’ve assessed various qualities against some measure of value and concluded that intelligence ranks highly. I mean it in a personal-values sense. For example, the level of intelligence I have is a big factor in my level of self-esteem. This is probably more emotional than logical.\nThis emotional effect was an important part of the god scenario’s impact on me. At first, it terrified me. I felt like my whole view of the world had been upset, and almost everything people do day to day seemed to no longer matter. I would see a funny video of a dog barking at its reflection, and instead of enjoying it, I’d notice the grim analogy of the intellectual powerlessness humanity might one day experience. But apart from the fear, I was also tremendously excited by the thought of something so sublimely intelligent. Having not previously thought much about the limits of intelligence itself, the concept was both consuming and eye-opening, and the possibilities were inspiring. The notion of a superintelligent being appealed to me similarly to the way Superman’s abilities have enthralled audiences. \nOther factors included that I was influenced by highly engaging prose, since I first learned about superintelligence by reading this excellent waitbutwhy.com blog post. Another was my professional background; I was accustomed to worrying about improbable but significant threats, and to arguments based on expected value. The concern of prominent people—Bill Gates, Elon Musk, and Stephen Hawking—helped. Also, I get a lot of satisfaction from working on whatever I think is humanity’s most important problem, so I really couldn’t ignore the idea. \nBut there were also countervailing effects in my mind, leading away from the god scenario. The strongest was the outlandishness of it all. I had always been dismissive of ideas that seem like doomsday theories, so I wasn’t automatically comfortable giving the god scenario credence in my mind. I was hesitant to introduce the idea to people who I thought might draw negative conclusions about my judgement. \nI still believe the god scenario is a real possibility. We should assiduously prepare for it and proceed with caution. However, I believe I have gradually escaped its intuitive capture. I can now consider other possibilities without my mind constantly drifting back to the god scenario. \nI believe a major factor behind my shift in mindset was my research interest in analyzing AI safety as a global public good. Such research led me to think concretely about other scenarios, which increased their prominence in my mind. Relatedly, I began to think I might be better equipped to contribute to outcomes in those other scenarios. This led me to want to believe that the other scenarios were more likely, a desire compounded by the danger of the god scenario. My personal desires may or may not have influenced my objective opinion of the probabilities. But they definitely helped counteract the emotional and intuitive appeal of the god scenario. \nExposure to mainstream views on the subject also moderated my thinking. In one instance, reading an Economist special report on artificial intelligence helped counteract the effects I’ve described, despite that I actually disagreed with most of their arguments against the importance of existential risk from AI. 
\nExposure to work done by the Effective Altruism community on different future possibilities also helped, as did my discussions with Katja Grace, Robin Hanson, and others during my work for AI Impacts. The exposure and discussions increased my knowledge and the sophistication of my views such that I could better imagine the range of AI scenarios. Similarly, listening to Elon Musk’s views of the importance of developing brain-computer interfaces, and seeing OpenAI pursue goals that may not squarely confront the god scenario, also helped. They gave me a choice: decide without further ado that Elon Musk and OpenAI are misguided, or think more carefully about other potential scenarios.\nRelevance to the cause of AI safety\nI believe the AI safety community probably includes many people who experience the god scenario’s strong intuitive appeal, or have previously experienced it. This tendency may be having some effects on the field.\nStarting with the obvious, such a systemic effect could cause pervasive errors in decision-making. However, I want to make clear that I have no basis to conclude that it has done so among the Effective Altruism community. For me, the influence of the god scenario was subtle, and driven by its emotional facet. I could override it when asked for a rational assessment of probabilities. But its influence was pervasive, affecting the thoughts to which my mind would gravitate, the topics on which I would tend to generate ideas, and what I would feel like doing with my time. It shaped my thought processes when I wasn’t looking. \nPreoccupation with the god scenario may also entail a public relations risk. Since the god scenario’s strong appeal is not universal, it may polarize public opinion, as it can seem bizarre or off-putting to many. At worst, a rift may develop between the AI safety community and the rest of society. This matters. For example, policymakers throughout the world have the ability to promote the cause of AI safety through funding and regulation. Their involvement is probably an essential component of efforts to prevent an AI arms race through international coordination. But it is easier for them to support a cause that resonates with the public.\nConversely, the enthusiasm created by the intuitive appeal of the god scenario can be quite positive, since it attracts attention to related issues in AI safety and existential risk. For example, others’ enthusiasm and work in these areas led me to get involved. \nI hope readers will share their own experience of the intuitive appeal of the god scenario, or lack thereof, in the comments. A few more data points and insights might help to shed light.", "url": "https://aiimpacts.org/the-tyranny-of-the-god-scenario/", "title": "The tyranny of the god scenario", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-04-06T15:00:00+00:00", "paged_url": "https://aiimpacts.org/feed?paged=13", "authors": ["Michael Wulfsohn"], "id": "8fd2a46edf706ad7735563be648f3aa1", "summary": []} {"text": "Promising research projects\n\nThis is an incomplete list of concrete projects that we think are tractable and important. We may do any of them ourselves, but many also seem feasible to work on independently. Those we consider especially well suited to this are marked Ψ. More potential projects are listed here.\nProject\nReview the literature on forecasting (in progress) Ψ\nSummarize what is known about procedures that produce good forecasts, and measures that are relatively easier to forecast. 
This may involve reading secondary sources, or collecting past forecasts and investigating what made some of them successful.\nThis would be an input to improving our own forecasting practices, and to knowing which other forecasting efforts to trust.\nWe have reviewed some literature associated with the Good Judgment Project in particular.\nReview considerations regarding the chance of local, fast takeoff Ψ\nWe have a list of considerations here. If you find local, fast take-off likely, check if the considerations that lead you to this view are represented. Alternately, interview someone else with a strong position about the considerations they find important. If there are any arguments or counterarguments that you think are missing, write a short page explaining the case.\nCollecting arguments on this topic is helpful because opinion among well-informed thinkers on the topic seems to diverge from what would be expected given the arguments that we know about. This suggests that we are missing some important considerations, that we would need to well assess the chance of local, fast takeoff.\nQuantitatively model an intelligence explosion Ψ\nAn intelligence explosion (or ‘recursive self-improvement’) consists of a feedback loop where researcher efforts produce scientific progress, which produces improved AI performance, which produces more efficient researcher efforts. This forms a loop, because the researchers involved are artificial themselves.\nThough this loop does not yet exist, relatively close analogues to all of the parts of it already occur: for instance, researcher efforts do lead to scientific progress; scientific progress does lead to better AI; better AI does lead to more capacity at the kinds of tasks that AI can do.\nCollect empirical measurements of proxies like these, for different parts of the hypothesized loop (each part of this could be a stand-alone project). Model the speed of the resulting loop if they were put together, under different background conditions.\nThis would give us a very rough estimate of the contribution of intelligence explosion dynamics to the speed of intelligence growth in a transition to an AI-based economy. Also, a more detailed model may inform our understanding of available strategies to improve outcomes.\nInterview AI researchers on topics of interest Ψ\nFind an AI researcher with views on matters of interest (e.g. AI risk, timelines, the relevance of neuroscience to AI progress) and interview them. Write a summary, or transcript (with their permission). Some examples here, here, here. (If you do not expect to run an interview well enough to make a good impression on the interviewee, consider practicing elsewhere first, so as not to discourage interacting with similar researchers in the future.)\nTalking to AI researchers about their views can be informative about the nature of AI research (e.g. What problems are people trying to solve? How much does it seem like hardware matters?), and provide an empirically informed take on questions and considerations of interest to us (e.g. Current techniques seem really far from general). 
They also tell us about state of opinion within the AI research community, which may be relevant in itself.\nReview what is known about the relative intelligence of humans, chimps, and other animals (in progress)\nReview efforts to measure animal and human intelligence on a single scale, and efforts to quantify narrower cognitive skills across a range of animals.\nHumans are radically more successful than other animals, in some sense. This is taken as reason to expect that small modifications to brain design (for instance whatever evolution did between the similar brains of chimps and humans) can produce outsized gains in some form of mental performance, and thus that AI researchers may see similar astonishing progress near human-level AI.\nHowever without defining or quantifying the mental skills of any relevant animals, it is unclear a) whether individual intelligence in particular accounts for humans’ success (rather than e.g. ability to accrue culture and technology), b) whether the gap in capabilities between chimps and humans is larger than expected (maybe chimps are also astonishingly smarter than smaller mammals), or c) whether the success stems from something that evolution was ‘intentionally’ progressing on. These things are all relevant to the strength of an argument for AI ‘fast take-off’ based on human success over chimps (see here).\nReview explanations for humans’ radical success over apes (in progress)\nInvestigate what is known about the likely causes of human success, relative to that of other similar animals. In particular, we are interested in how likely improvement in individual cognitive ability is to account for this (as opposed to say communication and group memory abilities).\nThis would help resolve the same issues described in the last section (‘Review what is known about the relative intelligence of humans, chimps, and other animals’).\nCollect data on time to cross the human range on intellectual skills where machines have surpassed us (in progress) Ψ\nFor intellectual skills where machines have surpassed humans, find out how long it took to go from the worst performance to average human skill, and from average human skill to superhuman skill.\nThis would contribute to this project.\nMeasure the importance of hardware progress in a specific narrow AI trajectory Ψ\nTake an area of AI progress, and assess how much of annual improvement can be attributed to hardware improvements vs. software improvements, or what the more detailed relationship between the two is.\nUnderstanding the overall importance of hardware progress and software progress (and other factors) in overall AI progress lets us know to what extent our future expectations should be a function of expected hardware developments, versus software developments. This both alters what our timelines look like (e.g. 
see here), and tells us what we should be researching to better understand AI timelines.", "url": "https://aiimpacts.org/promising-research-projects/", "title": "Promising research projects", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-04-06T06:00:47+00:00", "paged_url": "https://aiimpacts.org/feed?paged=13", "authors": ["Katja Grace"], "id": "8b624734bdff612f0540d2a51e1ca319", "summary": []} {"text": "Brain wiring: The long and short of it\n\nBy Tegan McCaslin, 30 March 2018\nWhen I took on the task of counting up all the brain’s fibers and figuratively laying them end-to-end, I had a sense that it would be relatively easy–do a bit of strategic Googling, check out a few neuroscience references, and you’ve got yourself a relaxing Sunday afternoon project. By that afternoon project’s 40th hour, I had begun to question my faith in Paul Christiano’s project length estimates.\nIt was actually pretty surprising how thin on the ground numbers and quantities about the metrics I was after seemed to be. Even somewhat simple questions, like “how many of these neuron things does the brain even have”, proved not to have the most straightforward answer. According to one author, the widely-cited, rarely-caveated figure in textbooks of 100 billion neurons couldn’t be sourced in any primary literature published before the late 2000s, and this echo chamber-derived estimate was subsequently denounced for being off by tens of billions in either direction (depending on who you ask). But hey, what’s a few tens of billions between friends?\nThe question of why these numbers are so hard to find is an interesting one. One answer is that it’s genuinely difficult to study populations of cells at the required level of detail. Another is that perhaps neuroscientists are too busy studying silly topics like “how the brain works” or “clinically relevant things” to get down to the real meat of science, which is anal-retentively cataloging every quantity that could plausibly be measured. Perhaps the simplest explanation is just that questions like “how long is the entire dendritic arbor of a Purkinje cell” didn’t have a great argument for why they might be useful, prior to now.\nOr maybe its “fuck it”-quotient was too high.\nWhich brings us rather neatly to the point of why an AI forecasting organization might care about the length of all the wires in the brain, even when the field of neuroscience seems not to. At a broad level, it’s probably the case that neuroscientists care about very different aspects of the brain than AI folks do, because neuroscientists mostly aren’t trying to solve an engineering problem (at least, not the engineering problem of “build a brain out of bits of metal and plastic”). The particular facet of that engineering problem we were interested in here was: how much of a hurdle is hauling information around going to be, once computation is taken care of?\nOur length estimates don’t provide an exhaustive answer to that question, and to be honest they can’t really tell you anything on their own. But, as is the case with AI Impacts’ 2015 article on brain performance in TEPS, learning these facts about the brain moves us incrementally closer to understanding how promising our current models of hardware architectures are, and where we should expect to encounter trouble.  \nSome interesting takeaways: long-range fibers–that is, myelinated ones–probably account for about 20% of the total length of brain wires. 
Also, the neocortex is huge, but not because it has lots of neurons. \nScroll to the bottom of our article if you’re a cheater who just wants to see a table full of summary statistics, but read the whole thing if you want those numbers to have some context. And please contact me if you spot anything wrong, or think I missed something, or if you’re Wen or Chklovskii of Wen and Chklovskii 2004 and you want to explain your use of tildes in full detail.", "url": "https://aiimpacts.org/brain-wiring-the-long-and-short-of-it/", "title": "Brain wiring: The long and short of it", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-03-30T07:02:35+00:00", "paged_url": "https://aiimpacts.org/feed?paged=13", "authors": ["Tegan McCaslin"], "id": "dbca66839fe3d9f93b34a27d7e435de7", "summary": []} {"text": "Transmitting fibers in the brain: Total length and distribution of lengths\n\nThe human brain’s approximately 86 billion neurons are probably connected by something like 850,000 km of axons and dendrites. Of this total, roughly 80% is short-range, local connections (averaging 680 microns in length), and approximately 20% is long-range, global connections in the form of myelinated fibers (likely averaging several centimeters in length).\nBackground\nThe brain’s precisely coordinated action relies on a dense network of fibers capable of rapidly transmitting information, both locally (to adjacent neurons whose separation can be measured in microns) and to distant locations removed by many centimeters from one another. And while manipulation of that information–”computation”–is an important component of what the brain does, it would be hard-pressed to make any use of that computational power without the ability to communicate within itself. \nSo how much of a problem does the need for moving around information pose for brains and, by extension, brain-like computers? It’s clear from a cursory physical examination of the brain that evolution has prioritized information transfer, since the vast majority of brain tissue is taken up by the tendrils of axons and dendrites snaking through a convoluted maze of cables. Some proportion of these form short-range connections with neurons that are both nearby in physical space, and are probably also close in “functional space” as well. The rest are long-range fibers, which move information from these local, functionally similar regions to areas separated by significant physical, and likely also functional, distance. Whether we can expect one type of connection or the other to impose a larger cost on hardware, as well as the transferability of total brain fiber length to communication requirements in hardware, depends largely on the kind of hardware in question. One can imagine both types of brain-mimicking computer architecture that might make long-distance communication the main limiting factor, as well as architectures where long-distance communication was trivial compared to short-distance communication. \nAI Impacts previously estimated brain communication costs in terms of the benchmark TEPS, or “traversed edges per second”, where “edges” corresponded roughly to synaptic connections between neurons. However, this benchmark measures performance in a certain family of graphs that may not be very representative of connectivity patterns in the brain. 
Characterizing the actual topology of connections in the brain, especially the proportions contributed by long and short fibers, may give us a more informative picture of the capacities hardware will need in order to mimic wetware.\nShort fibers\nOur estimates for length and length distribution of short fibers were found by comparing the results of what might be called “top-down” and “bottom-up” approaches. Directly measuring any cell-level metric for the entire brain is challenging, but two substantially different methodologies converging on similar answers is probably a reasonable substitute for direct measurement. The first of these relied on observations of fiber density in the neocortex of rats, which there is reason to believe translates fairly well to the human brain as a whole. The second required gathering morphology data on various types of human neurons, then adjusting for the proportion of each cell type in the brain.\nSome important notes on brain structure and animal models\nIn this section, our estimates were drawn from only two brain regions: the cerebral cortex alone in the first case, and the cerebral cortex and cerebellar cortex in the second. However, taken together, these regions account for roughly 85% of total brain volume and as many as 99% of all brain neurons in humans, making this a safe approximation for all gray matter (which represents short connections–see here) in the brain.\nSince the first case considers the brains of rats rather than humans, it may seem to have little utility, but in fact the composition of tissue in rats’ neocortex differs from ours in only a few predictable ways. There are more neurons per cubic millimeter in the cortex of small animals (3-10x), meaning that somewhat more of the brain’s volume is taken up by cell bodies, slightly decreasing the density of fibers compared to larger brains. However, cell bodies are measured in the tens of microns, so this is unlikely to bear on our conclusions.\nTotal length from neocortical fiber density\nWhile the cerebral cortex comprises 82% of the volume of the human brain, only 19% of the brain’s 86 billion neurons reside here, cushioned in the dense web of axons and dendrites known as “neuropil”. The amount of neuropil packed into any given tissue sample can give us a sense of the lengths of these fibers per unit volume, as long as we also know their diameters.\nAfter determining rat neocortical fiber density, Braitenberg and Schüz (1998) concluded that the total length of the average neuron’s axonal tree was between 10 and 40 mm, and that the average dendritic tree came out to 4 mm. These numbers were derived from examining electron micrographs of tissue samples to find the proportion of area taken up by axons and dendrites, then measuring the average diameter of these fibers to find an axonal density of 4 km per mm^3, and a dendritic density of 456 m per mm^3. 
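One plausible route from these volume densities to per-neuron figures is simply dividing by the number of neurons in the same volume. The sketch below does that arithmetic, using an assumed cortical neuron density on the order of 10^5 neurons per mm^3; the density is our illustrative assumption, not a figure reported here.

```python
# Back-of-the-envelope conversion from fiber density per mm^3 of cortex to
# average fiber length per neuron. The neuron density is an assumed
# order-of-magnitude figure for rodent cortex, not a number reported above.

axon_density_mm_per_mm3 = 4e6          # 4 km of axon per mm^3
dendrite_density_mm_per_mm3 = 4.56e5   # 456 m of dendrite per mm^3
neurons_per_mm3 = 1e5                  # assumed

axon_mm_per_neuron = axon_density_mm_per_mm3 / neurons_per_mm3          # ~40 mm
dendrite_mm_per_neuron = dendrite_density_mm_per_mm3 / neurons_per_mm3  # ~4.6 mm

print(f"axon per neuron: ~{axon_mm_per_neuron:.0f} mm")
print(f"dendrite per neuron: ~{dendrite_mm_per_neuron:.1f} mm")
```

With that assumed density, the per-neuron lengths land near the reported ranges, which at least makes the averages above look plausible.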
It’s not quite clear to us how the authors got from these numbers to average fiber length per neuron, but since their average values agreed with values we obtained by other methods (see below), we were inclined to assume their process was reasonable.\nAssuming mouse neocortical neurons are comparable to human neurons, their average fiber length suggests that the neocortex alone contains at least 220,000 km of short range connections between dendrites and axons.1\nTotal length and distribution of lengths from morphological data\nIn principle, obtaining estimates of average fiber lengths from a representative sample of different varieties of neuron should yield something close to the sum total fiber length for all brain neurons, when combined with information about the neuronal composition of the brain.\nGranule cells\nThe most numerous neuron type in the brain is the cerebellar granule cell, at around 50 billion (58% of the brain’s total neurons). These small cells have three to five unbranched dendrites, each around 15 microns long and appended by a “claw”. They’re primarily distinguished by their unusual axonal morphology, which extends from the lowest of the three cerebellar cortex layers to the outermost layer, then splits perpendicularly into two fibers, forming a “T”. The fibers forming the top of this “T” run an average of 6 mm total, and while it was difficult to find a direct measurement for the other axonal component, the number is bounded by the thickness of the cerebellar cortex at 1176 microns, and is probably much shorter on average. Overall, the fiber length of the average cerebellar granule cell is probably in the neighborhood of 6.6 mm, giving us around 330,000 km in total.2\nPyramidal cells\nThe next most numerous neuron type, at around 2/3rds to 85% of the cerebral cortex (or 10.5-13.6 billion cells in humans), is the well-studied pyramidal cell. Pyramidal cells are in close contact with their neighbors in the vertical direction, forming tiny “columns” along the cerebral cortex that are thought to have functional relevance, with relatively less connectivity between columns. This is reflected in the structure of the pyramidal cell’s dendritic tree, with a long fiber extending vertically from the cell body (the apical dendrite) and several relatively short fibers branching laterally (the basal dendrites). Some pyramidal cells have long, myelinated axons that connect the two hemispheres or different functional areas of the same hemisphere, and these axons will be considered in the next section on long fibers, but for now we will focus exclusively on more local connections. \nQuantitative descriptions of pyramidal cell morphology were lacking, so we collected data on 2130 human pyramidal neurons from the NeuroMorpho.org database, computing various metrics for each neuron using L-Measure, and then performed our analysis with R (data here). \nDendritic trees had an average total length of 3.4 mm per cell, with a standard deviation of 1.8 mm. We also analyzed path distance, or the length between the terminal point of one branch and the soma. The average path distance of pyramidal dendrites’ longest branch was 340 microns, which likely corresponds to the apical dendrites, while a typical branch was 180 microns. 
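For readers curious what this kind of morphometry involves, the sketch below shows roughly how a total dendritic length can be computed from a reconstruction in the SWC format that NeuroMorpho.org distributes. It is a minimal illustration of the sort of measurement L-Measure automates, not the pipeline we actually ran.

```python
import math

# Minimal reader for the SWC morphology format: each data line is
# "id type x y z radius parent_id", with parent_id == -1 at the root.
# In the standard convention, types 3 and 4 are basal and apical dendrites.

def read_swc(path):
    nodes = {}
    with open(path) as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue
            node_id, node_type, x, y, z, _radius, parent = line.split()[:7]
            nodes[int(node_id)] = (int(node_type), float(x), float(y), float(z), int(parent))
    return nodes

def total_dendritic_length(nodes):
    """Sum of dendritic segment lengths, in the file's units (usually microns)."""
    total = 0.0
    for node_type, x, y, z, parent in nodes.values():
        if node_type in (3, 4) and parent in nodes:
            _, px, py, pz, _ = nodes[parent]
            total += math.dist((x, y, z), (px, py, pz))
    return total
```

Path distance is the analogous quantity summed along the chain of parent nodes from a branch terminal back to the soma, rather than over the whole tree.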
Axons were vastly less well represented in our dataset–only 243 had nonzero values, and while the mean length for these axons was in the same ballpark as the estimate found by Braitenberg and Schüz, at 11.5 mm, it’s probable that not all axons in the dataset were complete.3 In particular, the length distribution was bimodal, with maxima around 100 microns and 20 mm, and this latter number may be the more accurate. This bimodal distribution was also reflected in the path distance of axonal branches, with (what we suspect to be) the more realistic values around 2.2 mm average for each neuron, and 4 mm for the longest branches. In all, pyramidal cells as measured here probably contribute roughly 23 mm each to the brain’s fiber network, or ~240,000 to ~310,000 km, in basic agreement with the numbers obtained from neocortical fiber density.4\nOther cell types\nThe remaining cell types made up a much smaller proportion of total fiber length. Besides pyramidal neurons, stellate cells are the other primary residents of the cerebrum, and are known to be substantially smaller than their cortical comrades, with axonal projections no longer than their dendrites. They could therefore add no more than 9,600-22,000 km to the total.5 After the 50 billion granule cells, the cerebellum still has 13-20 billion neurons to account for, over half of which are small stellate cells, a quarter basket cells, and the remainder split evenly between Purkinje cells and Golgi cells. Between them, these cerebral and cerebellar cells probably contribute 65,000 to 110,000 km.6\nLong fibers\nMyelination as indicative of fiber length\nThe most natural point of transition from “short range” to “long range” is the length of fiber for which conduction velocity of action potentials in a bare axon becomes unacceptably slow. Rather conveniently, this demarcation is evident from a glance at a cross section of the brain, where the white of myelinated fibers stands in stark contrast to the gray matter.\nFatty insulating sheaths of myelin are used by the brain’s longest fibers to increase conduction velocity at the cost of taking up more volume in the brain, as well as rendering myelinated segments of axons unable to synapse onto nearby neurons. Frequently, axons running through white matter tracts bundle with others inside a single myelin sheath, a frugal move for a brain with space and energy constraints. It’s unlikely that in these circumstances a brain would expend resources to myelinate short connections with no great need for it, so it’s reasonable to assume that all myelinated fibers are long. Furthermore, gray and white matter are highly segregated, and myelin is rarely found in cortical tissue.\nLength of myelinated fibers from white matter volume\nThis protective myelin coating not only insulates axons from lost transmission, but also, unfortunately, from the prying eyes of scientists. This means that long distance connections are difficult to study, and there have been few attempts to characterize white matter fibers at an appropriate level of detail for our purposes.\nOne frequently cited figure comes from Marner et al 2003, where the method called for divvying up preserved brains into slabs and take needle biopsies from random points on the slabs, then slicing these biopsies into fine sections and staining them. 
These could then be inspected for dark colored rings corresponding to myelin sheaths, and the total length of fibers could be approximated by multiplying “length density”, or total length of fibers per volume of white matter, with white matter volume. This method yielded a total of 149,000 km of myelinated fibers in female brains, and 176,000 km in males.\nAs for the distribution of these fiber lengths in the brain, we’re left somewhat in the dark. A very imprecise estimate for a portion of them can be gotten from a few key facts about the cerebrum. The largest and most famous white matter tract in the human brain is the corpus callosum, which connects the two hemispheres and contains 200-250 million fibers, about as many as one can find in tracts connecting areas within hemispheres. Given the width of the corpus callosum (~100 mm, or two thirds of the brain’s total width), a reasonable value for average fiber length in this tract is 10 cm, suggesting that perhaps 50,000 km or less of long-range fiber connects the cerebrum with itself.7 Clearly, this leaves much white matter to be accounted for, which can presumably be attributed to connections within and between the cerebellum and subcortical structures, as well as the occasional cerebral white matter found outside the white matter tracts. \nThis vague picture can be supplemented by the relationship between long-range connection length and brain volume alluded to in Wen and Chklovskii 2004. These authors estimate that average global connection length should be roughly similar to the cube root of brain volume, or 10.6 cm – 11.4 cm, much like the figure we approximated above for intracortical connections.\nDiscussion\nSummary of conclusions\nOur estimates are aggregated in the table below:\n\nConnection type | Total length (km) | Average length per neuron (mm) | Contributing neuron types | Sources of evidence\nCerebral, short-range | 220,000 - 320,000 | 14 - 20 | Pyramidal (2/3rds to 85%), stellate | Fiber density in rats, morphometry\nCerebellar, short-range | 390,000 - 420,000 | 5.7 - 6.1 | Granule (~70%), stellate, basket, Purkinje, Golgi | Morphometry\nTotal, short-range | 610,000 - 740,000 | - | - | -\nCerebral, long-range | ~50,000 | 100 | Pyramidal | Width of corpus callosum, relationship between brain volume and global connection length\nTotal, long-range | 150,000 - 180,000 | ? | ? | Length density per white matter volume\nTotal, all fibers | 760,000 - 920,000 | - | - | -\n\nOverall, we’re somewhat less confident in our total for long-range fiber length than our other estimates, since this was obtained using a methodology whose reliability we’re not able to judge, and its findings couldn’t be directly corroborated with other methods. However, there is indirect evidence that these numbers will hold up reasonably well: the proportion of total cerebral wiring that cerebral long-distance connections account for (14%) is quite similar to the proportion that long-distance connections purportedly account for overall (20%), despite the former number coming from independent lines of evidence.\nImplications and future directions\nThe brain is the most metabolically expensive organ in the human body by volume, and has pushed the limits of natural birth by increasing the pelvic width of human females, via enlarged infant head sizes, to the edge of feasibility for walking. 
The massive resource requirements of the brain are clear, but the proportion demanded by communication (versus computation) is less clear.\nCosts to the brain can be expressed in terms of space, energy (for development, maintenance and operation), and the difficulty or error-proneness of orchestrating complex activities. Space may be the cost most strongly influenced by brain wiring, and can easily be predicted to translate to computers, but the amount brain wiring also contributes to energy costs. In computers, this will take the form of operation energy, or the power needed to send “action potentials” along connections.\nBy themselves, our estimates of fiber lengths in the brain won’t answer any questions about the difficulty of communication in computers broadly. However, they can be informative when considering a specific hardware architecture, and are likely to be especially so in the case of massively parallel architectures. Combining our estimates with other estimates relating to information transfer in the brain, like information density, may also yield insights relevant to AI hardware.\nContributions\nResearch, analysis and writing were done by Tegan McCaslin. Katja Grace contributed feedback and editing. Paul Christiano proposed the question and provided guidance on hardware-related matters.\nFootnotes", "url": "https://aiimpacts.org/transmitting-fibers-in-the-brain-total-length-and-distribution-of-lengths/", "title": "Transmitting fibers in the brain: Total length and distribution of lengths", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-03-30T06:51:49+00:00", "paged_url": "https://aiimpacts.org/feed?paged=13", "authors": ["Tegan McCaslin"], "id": "7f04998190220a44928c314aa6c3c885", "summary": []} {"text": "Will AI see sudden progress?\n\nBy Katja Grace, 24 February 2018\nWill advanced AI let some small group of people or AI systems take over the world?\nAI X-risk folks and others have accrued lots of arguments about this over the years, but I think this debate has been disappointing in terms of anyone changing anyone else’s mind, or much being resolved. I still have hopes for sorting this out though, and I thought a written summary of the evidence we have so far (which often seems to live in personal conversations) would be a good start, for me at least.\nTo that end, I started a collection of reasons to expect discontinuous progress near the development of AGI.\nI do think the world could be taken over without a step change in anything, but it seems less likely, and we can talk about the arguments around that another time.\nPaul Christiano had basically the same idea at the same time, so for a slightly different take, here is his account of reasons to expect slow or fast take-off.\nPlease tell us in the comments or feedback box if your favorite argument for AI Foom is missing, or isn’t represented well. Or if you want to represent it well yourself in the form of a short essay, and send it to me here, and we will gladly consider posting it as a guest blog post.\nI’m also pretty curious to hear which arguments people actually find compelling, even if they are already listed. 
I don’t actually find any of the ones I have that compelling yet, and I think a lot of people who have thought about it do expect ‘local takeoff’ with at least substantial probability, so I am probably missing things.", "url": "https://aiimpacts.org/will-ai-see-sudden-progress/", "title": "Will AI see sudden progress?", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-02-25T05:07:23+00:00", "paged_url": "https://aiimpacts.org/feed?paged=13", "authors": ["Katja Grace"], "id": "e9d88f9e741e24328b776a191f360599", "summary": []} {"text": "Likelihood of discontinuous progress around the development of AGI\n\nWe aren’t convinced by any of the arguments we’ve seen to expect large discontinuity in AI progress above the extremely low base rate for all technologies. However this topic is controversial, and many thinkers on the topic disagree with us, so we consider this an open question.\nDetails\nDefinitions\nWe say a technological discontinuity has occurred when a particular technological advance pushes some progress metric substantially above what would be expected based on extrapolating past progress. We measure the size of a discontinuity in terms of how many years of past progress would have been needed to produce the same improvement. We use judgment to decide how to extrapolate past progress.\nFor instance, in the following trend of progress in chess AI performance, we would say that there was a discontinuity in 2007, and it represented a bit over five years of progress at previous rates.\nFigure 1: Machine chess progress, in particular SSDF records.\nRelevance\nDiscontinuity by some measure, on the path to AGI, lends itself to:\n\nA party gaining decisive strategic advantage\nA single important ‘deployment’ event\nOther very sudden and surprising events\n\nArguably, the first two require some large discontinuity. Thus the importance of planning for those outcomes rests on the likelihood of a discontinuity.\nOutline\nWe investigate this topic in two parts. First, with no particular knowledge of AGI as a technology, how likely should we expect a particular discontinuity to be? We take the answer to be quite low. Second, we review arguments that AGI is different from other technologies, and lends itself to discontinuity. We currently find these arguments uncompelling, but not decisively so.\nDefault chance of large technological discontinuity\nDiscontinuities larger than around ten years of past progress in one advance seem to be rare in technological progress on natural and desirable metrics.1 We have verified around five examples, and know of several other likely cases, though have not completed this investigation.\nThis does not include the discontinuities when metrics initially go from zero to a positive number. For instance, the metric ‘number of Xootr scooters in the world’ presumably went from zero to one on the first day of production, though this metric had seen no progress before. So on our measure of discontinuity size, this was infinitely many years of progress in one step. It is rarer for a broader metric (e.g. ‘scooters’) to go from zero to one, but it must still happen occasionally. 
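To make this measure concrete, here is a small sketch (our own illustration, with invented numbers rather than the real chess data) of how a discontinuity's size in 'years of past progress' might be computed from dated records, with a jump from a flat history coming out as effectively infinite:

```python
def discontinuity_size_years(history, new_year, new_value):
    """Size of the jump to (new_year, new_value), measured in years of progress
    at the average rate seen across the given (year, value) records."""
    (y0, v0), (y1, v1) = history[0], history[-1]
    past_rate = (v1 - v0) / (y1 - y0)            # average past progress per year
    if past_rate <= 0:                            # flat history: any jump is 'infinite'
        return float("inf") if new_value > v1 else 0.0
    expected = v1 + past_rate * (new_year - y1)   # extrapolation of the past trend
    return (new_value - expected) / past_rate

# Invented numbers, loosely shaped like the chess example above:
records = [(1985, 2150), (2006, 2750)]            # ~29 Elo points per year
print(discontinuity_size_years(records, 2007, 2950))  # ~6 years of progress in one step
```

In practice we also use judgment about how to extrapolate past progress, so a simple average rate like this is only a first pass.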
We do not mean to ignore this type of phenomenon, but just to deal with it separately, since discontinuity in an ongoing progress curve seems to be quite rare, whereas the beginning of positive progress on a metric presumably occurs once for every metric that begins at zero.\nProposed reasons to expect a discontinuity near AGI\nWe have said that the base rate of large technological discontinuities appears to be low. We now review and examine arguments that AGI is unusually likely to either produce a discontinuity, or follow one.\nHominid variation\nArgument\nHumans are vastly more successful in certain ways than other hominids, yet in evolutionary time, the distance between them is small. This suggests that evolution induced discontinuous progress in returns, if not in intelligence itself, somewhere approaching human-level intelligence. If evolution experienced this, this suggests that artificial intelligence research may do also.\nCounterarguments\nEvolution does not appear to have been optimizing for the activities that humans are good at. In terms of what evolution was optimizing for—short term ability to physically and socially prosper—it is quite unclear that humans are disproportionately better than earlier hominids, compared to differences between earlier hominids and less optimized animals.\nThis counterargument implies that if one were to optimize for human-like success that it would be possible (and likely at some point) to make agents with brains the size of chimps, that were humanlike but somewhat worse in their capabilities.\nIt is also unclear to us that humans succeed so well on something like ‘intelligence’ from vastly superior individual intelligence, rather than vastly superior group intelligence, via innovations such as language.\nStatus\nSee status section of ‘brain scaling’ argument—they are sufficiently related to be combined.\nBrain scaling\nArgument\nIf we speculate that the main relevant difference between us and our hominid relatives is brain size, then we can argue that brain size in particular appears to produce very fast increase in impressiveness. So if we develop AI systems with certain performance, we should expect to make disproportionately better systems by using more hardware.\nVariation in brain size among humans also appears to be correlated with intelligence, and differences in intelligence in the human range anecdotally correspond to large differences in some important capabilities, such as ability to develop new physics theories. So if the relatively minor differences in human brain size cause the differences in intelligence, this would suggest that larger differences in ‘brain size’ possible with computing hardware could lead to quite large differences in some important skills, such as ability to successfully theorize about the world.\nCounterarguments\nThis is closely related to the ‘hominids saw discontinuous progress from intelligence’ argument (see above), so shares the counterargument that evolution does not appear to have been optimizing for those things that humans excel at.\nFurthermore, if using more hardware generally gets outsized gains, one might expect that by the time we have any particular level of AGI performance, we are already using extremely large amounts of hardware. If this is a cheap axis on which to do better, it would be surprising if we left that value on the table until the end. 
So the only reason to expect a discontinuity here would seem to be if the opportunity to use more hardware and get outsized results started existing suddenly. We know of no particular reason to expect this, unless there was a discontinuity in something else, for instance if there was indeed one algorithm. That type of discontinuity is discussed elsewhere.\nStatus\nThis argument seems weak to us currently, but further research could resolve these questions in directions that would make it compelling:\n\nAre individual humans radically superior to apes on particular measures of cognitive ability? What are those measures, and how plausible is it that evolution was (perhaps indirectly) optimizing for them?\nHow likely is improvement in individual cognitive ability to account for humans’ radical success over apes? (For instance, compared to new ability to share innovations across the population)\nHow does human brain size relate to intelligence or particular important skills, quantitatively?\n\nIntelligence explosion\nArgument\nIntelligence works to improve many things, but the main source of intelligence—human brains—is arguably not currently improved substantially itself. The development of artificial intelligence will change that, by allowing the most intelligent entities to directly contribute to their increased intelligence. This will create a feedback loop where each step of progress in artificial intelligence makes the next steps easier and so faster. Such a feedback loop could potentially go fast. This would not strictly be a discontinuity, in that each step would only be a little faster than the last, however if the growth rate became sufficiently fast sufficiently quickly, it would be much like a discontinuity from our perspective.\nThis feedback loop is usually imagined to kick in after AI was ‘human-level’ on at least some tasks (such as AI development), but that may be well before it is recognized as human-level in general.\nCounterarguments\nPositive feedback loops are common in the world, and very rarely move fast enough and far enough to become a dominant dynamic. So we need strong reason to expect an intelligence-of-AGI feedback loop to be exceptional, beyond the observation of a potential feedback effect. We do not know of such arguments. However, this is a widely discussed topic, so contenders probably exist.\nStatus\nThe counterargument currently seems to carry to us. However, we think the argument could be strong if:\n\nStrong reason is found to expect an unusually fast and persistent feedback loop\nQuantitative models of current relationships between the variables hypothesized to form a feedback loop suggest very fast intelligence growth.\n\nOne algorithm\nArgument\nIf a technology is simple, we might expect to discover it in a small number of steps, perhaps one. If it is also valuable, we might expect each of those steps to represent huge progress in some interesting metric. 
The argument here is that intelligence—or something close enough to what we mean by intelligence—is like this.\nSome reasons to suspect that intelligence is conceptually simple:\n\nEvolution discovered it relatively quickly (though see earlier sections on hominid evolution and brain size)\nIt seems conceivable to some that the mental moves required to play Go are basically all the ones you need for a large class of intellectual activities, at least with the addition of some way to navigate the large number of different contexts that are about that hard to think about.\nIt seems plausible that the deep learning techniques we have are already such a thing, and we just need more hardware and adjustment to make it work.\n\nWe expect that this list is missing some motivations for this view.\nThis argument can go in two ways. If we do not have the one algorithm yet, then we might expect that when we find it there will be a discontinuity. If we already have it, yet don’t have AGI, then what remains to be done is probably accruing sufficient hardware to run it. This might traditionally be expected to lead to continuous progress, but it is also sometimes argued to lead to discontinuous progress. Discontinuity in hardware is discussed in a forthcoming section, so here we focus on discontinuity from acquiring the insight.\nCounterarguments\nSociety has invented many simple things before. So even if they are unusually likely to produce discontinuities, the likelihood would still seem to be low, since discontinuities are rare. To the extent that you believe that society has not invented such simple things before, you need a stronger reason to expect that intelligence is such a thing, and we do not know of strong reasons.\nStatus\nWe currently find this argument weak, but would find it compelling if:\n\nStrong reason emerged to expect intelligence to be a particularly simple innovation, among all innovations, and\nWe learned that very simple innovations do tend to cause discontinuities, and we have merely failed to properly explore possible past discontinuities well.\n\nDeployment scaling\nArgument\nHaving put in the research effort to develop one advanced AI system, it is very little further effort to make a large number of them. Thus at the point that a single AGI system is developed, there could shortly thereafter be a huge number of AGI systems. This would seem to represent a discontinuity in something like global intelligence, and also in ‘global intelligence outside of human brains’.\nCounterarguments\nIt is the nature of almost every product that large quantities of upfront effort are required to build a single unit, at which point many units can be made more cheaply. This is especially true of other software products, where a single unit can be copied without the need to build production facilities. So this is far from a unique feature of AGI, and if it produced discontinuities, we would see them everywhere.\nThe reason that the ability to quickly scale the number of instances of a product does not generally lead to a discontinuity seems to be that before the item’s development, there was usually another item that was slightly worse, which there were also many copies of. For instance, if we write a new audio player, and it works well, it can be scaled up across many machines. However on each machine it is only replacing a slightly inferior audio player. 
Furthermore, because everyone already has a slightly inferior audio player, adoption is gradual.\nIf we developed human-level AGI software right now, it would be much better than what computing hardware is already being used for, and because of that and other considerations, it may spread to a lot of hardware fast. So we would expect a discontinuity. However if we developed human-level AGI software now, this would already be a discontinuity in other metrics.  In general, it seems to us that this scaling ability may amplify the effects of existing discontinuities, but it is hard to see how it could introduce one.\nStatus\nThis argument seems to us to rest on a confusion. We do not think it holds unless there is already some other large discontinuity.\nTrain vs. test\nArgument\nThere are at least three different processes you can think of computation being used for in the development of AGI:\n\nSearching the space of possible software for AGI designs\nTraining specific AGI systems\nRunning a specific AGI system\n\nWe tend to think of the first requiring much more computation than the second, which requires much more computation than the third. This suggests that when we succeed at the first, hardware will have to be so readily available that we can quickly do a huge amount of training, and having done the training, we can run vast numbers of AI systems. This seems to suggest a discontinuity in number of well-trained instances of AGI systems.\nCounterarguments\nThis seems to be a more specific version of the deployment scaling argument above, and so seems to fall prey to the same counterargument: that whenever you find a good AGI design and scale up the number of systems running it, you are only replacing somewhat worse software, so this is not a discontinuity in any metric of high level capability. For there to be a discontinuity, it seems we would need a case where step 1 returns an AGI design that is much better than usual, or returns a first AGI design that is already very powerful compared to other ways of solving the high level problems that it is directed at. These sources of discontinuity are both discussed elsewhere.\nStatus\nThis seems to be a variant of the deployment scaling argument, and so similarly weak. However it is more plausible to us here that we have failed to understand some stronger form of the argument, that others find compelling.\nStarting high\nArgument\nAs discussed in Default chance of technological discontinuity above, the first item in a trend always represents a ‘large’ discontinuity in the number of such items—the number has been zero forever, and now it is one, which is more progress at once than in all of history. When we said that discontinuities were rare, we explicitly excluded discontinuities such as these, which appear to be common but uninteresting. So perhaps a natural place to find an AGI-related discontinuity is here.\nDiscontinuity from zero to one in the number of AGI systems would not be interesting on its own. However perhaps there are other metrics where AGI will represent the first step of progress. For instance, the first roller coaster caused ‘number of roller coasters’ to go from zero to one, but also ‘most joy caused by a roller coaster’ to go from zero to some positive number. 
And once we admit that inventing some new technology T could cause some initial discontinuity in more interesting trends than ‘number of units of T’, we do not know very much about where or how large such first steps may be (since we were ignoring such discontinuities). So the suggestion is that AGI might see something like this: important metrics going from zero to high numbers, where ‘high’ is relative to social impact (our previous measure of discontinuities being unhelpful in the case where progress was previously flat). Call this ‘starting high’.\nHaving escaped the presumption of a generally low base rate of interesting discontinuities then, there remain the questions of what the base rate should be for new technologies ‘starting high’ on interesting metrics, and whether we should expect AGI in particular to see this even above the new base rate.\nOne argument that the base rate should be high is that new technologies have a plethora of characteristics, and empirically some of them seem to start ‘high’, in an intuitive sense. For instance, even if the first plane did not go very far, or fast, or high, it did start out 40 foot wide—we didn’t go through flying machines that were only an atom’s breadth, or only slightly larger than albatrosses. ‘Flying machine breadth’ is not a very socially impactful metric, but if non-socially impactful metrics can start high, then why not think that socially impactful metrics can?\nEither way, we might then argue that AGI is especially likely to produce high starting points on socially impactful metrics:\n\nAGI will be the first entry in an especially novel class, so it is unusually likely to begin progress on metrics we haven’t seen progress on before.\nFully developed AGI would represent progress far above our current capabilities on a variety of metrics. If fully developed AGI is unusually impactful, this suggests that the minimal functional AGI is also unusually impactful.\nAGI is unusually non-amenable to coming in functional yet low quality varieties. (We do not understand the basis for this argument well enough to relay it. One possibility is that the insights involved in AGI are sufficiently few or simple that having a partial version naturally gets you a fully functional version quickly. This is discussed in another section. Another is that AGI development will see continuous progress in some sense, but go through non-useful precursors—much like ‘half of a word processor’ is not a useful precursor to a word processor. This is also discussed separately elsewhere.)\n\nCounterarguments\nWhile it is true that new technologies sometimes represent discontinuities in metrics other than ‘number of X’, these seem very unlikely to be important metrics. If they were important, there would generally be some broader metric that had previously been measured that would also be discontinuously improved. For instance, it is hard to imagine AGI suddenly possessing some new trait Z to a degree that would revolutionize the economy, without this producing discontinuous change in previously measured things like ‘ability to turn resources into financial profit’.\nWhich is to say, it is not a coincidence that the examples of high starting points we can think of tend to be unimportant. If they were important, it would be strange if we had not previously made any progress on achieving similar good outcomes in other ways, and if the new technology didn’t produce a discontinuity in any such broader goal metrics, then it would not be interesting. 
(However it could still be the case that we have so far systematically failed to look properly at very broad metrics as they are affected by fairly unusual new technologies, in our search for discontinuities.)\nWe know of no general trend where technologies that are very important when well developed cause larger discontinuities when they are first begun. We have also not searched for this specifically, but isn’t an immediately apparent trend in the discontinuities we know of.2\nSupposing that an ideal AGI is much smarter than a human, then humans clearly demonstrate the possibility of functional lesser AGIs. (Which is not to say that ‘via human-like deviations from perfection’ is a particularly likely trajectory.) Humans also seem to demonstrate the possibility of developing human-level intelligence, then not immediately tending to have far superior intelligence. This is usually explained by hardware limitations, but seems worth noting nonetheless.\nStatus\nWe currently think it is unlikely that any technology ‘starts high’ on any interesting metric, and either way AGI does not seem especially likely to do so. We would find this argument more compelling if:\n\nWe were shown to be confused about the inference that ‘starting high’ on an interesting metric should necessarily produce discontinuity in broader, previously improved metrics. For instance, if a counterexample were found.\nHistorical data was found to suggest that ultimately valuable technologies tended to ‘start high’ or produce discontinuities.\nFurther arguments emerged to expect AGI to be especially likely to ‘start high’.\n\nAwesome AlphaZero\nArgument\nWe have heard the claim that AlphaZero represented a discontinuity, at least in the ability to play a variety of games using the same system, and maybe on other metrics. If so, this would suggest that similar technologies were unusually likely to produce discontinuities.\nCounterarguments\nSupposing that AlphaZero did represent discontinuity on playing multiple games using the same system, there remains a question of whether that is a metric of sufficient interest to anyone that effort has been put into it. We have not investigated this.\nWhether or not this case represents a large discontinuity, if it is the only one among recent progress on a large number of fronts, it is not clear that this raises the expectation of discontinuities in AI very much, and in particular does not seem to suggest discontinuity should be expected in any other specific place.\nStatus\nWe have not investigated the claims this argument is premised on, or examined other AI progress especially closely for discontinuities. If it turned out that:—\n\nRecent AI progress represented at least one substantial discontinuity\nThe relevant metric had been a focus of some prior effort\n\n—then we would consider more discontinuities around AGI more likely. Beyond raising the observed base rate of such phenomena, this would also suggest to us that the popular expectation of AI discontinuity is well founded, even if we have failed to find good explicit arguments for it. So this may raise our expectation of discontinuous change more than the change in base rate would suggest.\nUneven skills\nArgument\nThere are many axes of ability, and to replace a human at a particular task, a system needs to have a minimal ability on all of them. At which point, it should be substantially superhuman on some axes. 
For instance, by the time that self-driving cars are safe enough in all circumstances to be allowed to drive, they might be very much better than humans at noticing bicycles. So there might be a discontinuity in bicycle safety at the point that self-driving cars become common.\nIf this were right, it would also suggest a similar effect near overall human-level AGI: that the first time we can automate a variety of tasks as well as a human, we will be able to automate them much better than a human on some axes. So those axes may see discontinuity.\nFigure 2: Illustration of the phenomenon in which the first entirely ‘human-level’ system is substantially superhuman on most axes. (Image from Superintelligence Reading Group)\nCounterarguments\nThis phenomenon appears to be weakly related to AGI in particular, and should show up whenever one type of process replaces another one. Replacement of one type of system with another type does not seem very rare. So if it is true that large discontinuities are rare, then this type of dynamic does not produce large discontinuities often.\nThis argument benefits from the assumption that you have to ‘replace’ a human in one go, which seems usually false. New technology can usually be used to complement humans, admittedly with inefficiency relative to a more integrated system. For instance, one reason there will not be a discontinuity in navigation ability of cars on the road is that humans already use computers for their navigation. However this counterargument doesn’t apply to all axes. For instance, if computers operate quite fast but cannot drive safely, you can’t lend their speed to humans and somehow still use the human steering faculties.\nStatus\nThis argument seems successfully countered, unless:\n\nWe have missed some trend of discontinuities of this type elsewhere. If there were such a trend, it seems possible that we would have systematically failed to notice them because they look somewhat like ‘initial entry goes from zero to one’, or are in hard to measure metrics.\nThere are further reasons to expect this phenomenon to apply in the AGI case specifically, especially without being substantially mitigated by human-complementary technologies\n\nPayoff thresholds\nArgument\nEven if we expect progress on technical axes to change continuously, there seems no reason this shouldn’t translate to discontinuous change in a related axis such as ‘economic value’ or ‘ability to take over the world’. For instance, continuous improvement in boat technology might still lead to a threshold in whether or not you can reach the next island.\nCounterarguments\nThere actually seems to be a stronger theoretical reason to expect continuous progress in metrics that we care about directly, such as economic value, than on more technical metrics. This is that if making particular progress is expected to bring outsized gains, more effort should be directed toward the project until that effort has offset a large part of the gains. For instance, suppose that for some reason guns were a hundred times more useful if they could shoot bullets at just over 500m/s than just under, and otherwise increasing speed was slightly useful. Then if you can make a 500m/s gun, you can earn a hundred times more for it than a lesser gun. So if guns can currently shoot 100m/s, gun manufacturers will be willing to put effort into developing the 500m/s one up to the point that it is costing nearly a hundred times more to produce it. 
Then if they manage to produce it, they do not see a huge jump in the value of making guns, and the buyer doesn’t see a huge jump in the value of the gun – their gun is much better but also much more expensive. However at the same time, there may have been a large jump in ‘speed of gun’.\nThis is a theoretical argument, and we do not know how well supported it is. (It appears to be at least supported by the case of nuclear weapons, which were discontinuous in the technical metric ‘explosive power per unit mass’, but not obviously so in terms of cost-effectiveness).\nIn practice, we don’t know of many large discontinuities in metrics of either technical performance or things that we care about more directly.\nEven if discontinuities in value metrics were common, this argument would need to be paired with another for why AGI in particular should be expected to bring about a large discontinuity.\nStatus\nThis argument currently seems weak. In the surprising event that discontinuities are actually common in metrics more closely related to value (rather than generally rare, and especially so in that case), the argument would only be mildly stronger, since it doesn’t explain why AGI is especially susceptible to this.\nHuman-competition threshold\nArgument\nThere need not be any threshold in nature that makes a particular level of intelligence much better. In a world dominated by humans, going from being not quite competitive with humans to slightly better could represent a step change in value generated.\nThis is a variant of the above argument from payoff thresholds that explains why AGI is especially likely to see discontinuity, even if it is otherwise rare: it is not often that something as ubiquitous and important as humans is overtaken by another technology.\nCounterarguments\nHumans have a variety of levels of skill, so even ‘competing with humans’ is not a threshold. You might think that humans are in some real sense very close to one another, but this seems unlikely to us.\nEven competing with a particular human is unlikely to have threshold behavior—without for instance a requirement that someone hire one or the other—because there are a variety of tasks, and the human-vs.-machine differential will vary by task.\nStatus\nThis seems weak. We would reconsider if further evidence suggested that the human range is very narrow, in some ubiquitously important dimension of cognitive ability.\n Footnotes", "url": "https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/", "title": "Likelihood of discontinuous progress around the development of AGI", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-02-23T21:23:41+00:00", "paged_url": "https://aiimpacts.org/feed?paged=14", "authors": ["Katja Grace"], "id": "297a3e6af0eb4994293eeadd86afa4f4", "summary": []} {"text": "Electrical efficiency of computing\n\nComputer performance per watt has probably doubled every 1.5 years between 1945 and 2000. Since then, the trend has slowed. By 2015, performance per watt appeared to be doubling every 2.5 years.\nDetails\nIn 2011 Jon Koomey reported that computation per kWh had doubled every roughly 1.5 years since around 1950, as shown in figure 1 (taken from him).1 Wikipedia calls this trend ‘Koomey’s Law’.
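To get a feel for what these doubling times imply, here is a rough conversion into overall improvement factors (a sketch; the date ranges and doubling times are the approximate figures quoted above, so the outputs are only indicative).

```python
# Rough conversion of a doubling time into an overall improvement factor,
# using the approximate figures quoted on this page.
def improvement_factor(years, doubling_time_years):
    """How many times better performance per watt gets over a given period."""
    return 2 ** (years / doubling_time_years)

# ~1.5-year doubling over 1945-2000: roughly eleven orders of magnitude.
print(f"{improvement_factor(2000 - 1945, 1.5):.1e}")
# ~2.5-year doubling over 2000-2015: roughly a factor of 64.
print(f"{improvement_factor(2015 - 2000, 2.5):.0f}")
```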
In 2015 Koomey and Naffziger reported in IEEE Spectrum that Koomey’s law began to slow down around 2000, and that by 2015 electrical efficiency was taking 2.5 years to double.2\nWe have not investigated beyond this, except to note that there is not obvious controversy on the topic. We do not know the details of the methods involved in this research, for instance how ‘computations’ are measured.\nFigure 1. Computations per kWh over recent history. Taken from Dr Jon Koomey, http://www.koomey.com/post/14466436072, CC BY-SA 3.0\n ", "url": "https://aiimpacts.org/electrical-efficiency-of-computing/", "title": "Electrical efficiency of computing", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-02-19T01:56:40+00:00", "paged_url": "https://aiimpacts.org/feed?paged=14", "authors": ["Katja Grace"], "id": "b561d1b2aaab3183b1d724d9917f6508", "summary": []} {"text": "Nordhaus hardware price performance dataset\n\nThis page contains the data from Appendix 2 of William Nordhaus’ The progress of computing in usable formats.\nNotes\nThis data was collected from Appendix 2 of The progress of computing, using Tabula (a program for turning tables in pdfs into other table formats). We have not checked its accuracy beyond a graph of the resulting data looking visually similar to a graph of the original.\nImportant: we previously noted that this data appears to be orders of magnitude different from other sources, and haven’t had time to look into this discrepancy.\nData\nHere is a Google sheet of the data. See ‘Nordhaus via Tabula’ page.", "url": "https://aiimpacts.org/nordhaus-hardware-price-performance-dataset/", "title": "Nordhaus hardware price performance dataset", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-02-17T03:30:05+00:00", "paged_url": "https://aiimpacts.org/feed?paged=14", "authors": ["Katja Grace"], "id": "0a740e455565c4640b72e999e627b4a8", "summary": []} {"text": "2018 price of performance by Tensor Processing Units\n\nTensor Processing Units (TPUs) perform around 1 GFLOPS/$, when purchased as cloud computing.\nDetails\nIn February 2018, the Google Cloud Platform blog said their TPUs can perform up to 180 TFLOPS, and cost $6.50/hour.1 This gives us about $171,000 to rent one TPU continually over a roughly three year lifecycle ($6.50/hour x 24 hours x 365 days x 3 years is about $171,000),2 which is about 1.05 GFLOPS/$ (180,000 GFLOPS / $171,000).\nThis service apparently began on February 12 2018.3 So this does not appear to be competitive with the cheapest GPUs, in terms of FLOPS/$, or even the cheapest cloud computing.", "url": "https://aiimpacts.org/2018-price-of-performance-by-tensor-processing-units/", "title": "2018 price of performance by Tensor Processing Units", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-02-13T23:55:29+00:00", "paged_url": "https://aiimpacts.org/feed?paged=14", "authors": ["Katja Grace"], "id": "fbc496d530ccb08d0ed712bad263f20f", "summary": []} {"text": "Examples of AI systems producing unconventional solutions\n\nThis page lists examples of AI systems producing solutions of an unexpected nature, whether due to goal misspecification or successful optimization. 
This list is highly incomplete.\nList\n\nCoastRunners’ burning boat\nIncomprehensible evolved logic gates\nAlphaGo’s inhuman moves\nWaze direction into fires\n", "url": "https://aiimpacts.org/examples-of-ai-systems-producing-unconventional-solutions/", "title": "Examples of AI systems producing unconventional solutions", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-02-12T03:58:01+00:00", "paged_url": "https://aiimpacts.org/feed?paged=14", "authors": ["Katja Grace"], "id": "5a3176eaa1ab7d2a0308b6c5d0ded66d", "summary": []} {"text": "Historic trends in altitude\n\nPublished 7 Feb 2020\nAltitude of objects attained by man-made means has seen six discontinuities of more than ten years of progress at previous rates since 1783, shown below.\nYear | Height (m) | Discontinuity (years) | Entity\n1784 | 4000 | 1032 | Balloon\n1803 | 7280 | 1693 | Balloon\n1918 | 42,300 | 227 | Paris gun\n1942 | 85,000 | 120 | V-2 Rocket\n1944 | 174,600 | 11 | V-2 Rocket\n1957 | 864,000,000 | 35 | Pellets (after one day)\n\nDetails\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nTrends\nAltitude of objects attained by manmade means\nWe looked for records in height from the ground reached by any object via man-made technology. \n‘Man-made technology’ is ambiguous, but we exclude for instance objects tied to birds and debris carried up by hurricanes. We include debris launched unintentionally via gunpowder explosion, and rocks launched via human arms. \nWe measure ‘altitude’ from the ground at the launch site. This excludes mountain climbing, but also early flight attempts that involve jumping from towers and traveling downward slowly.1 It also excludes early parachutes, which were mentioned in fiction thousands of years ago.2 \nMeasured finely enough, there are never discontinuities in altitude, since objects travel continuously.3 This prohibits finding discontinuities in continuously measured altitude, but doesn’t interfere with the dataset being relevant evidence to us. We are interested in discontinuities because they tell us about how much surprising progress can happen in a short time, and how much progress can come from a single innovation. So to make use of this data, we need to find alternate ways of measuring it that fulfill these purposes. \nFor the purpose of knowing about progress in short periods, we can choose a short period of interest, and measure jumps in progress made at that scale. For the purpose of knowing about progress made by single innovations, we can assign the maximum altitude reached to the time that the relevant innovation was made, for instance.4 \nWe could measure both of these trends, but currently only measure a version of the former. For short periods of travel, we assign the maximum altitude reached to the date given (our understanding is that most of the entries took place over less than one day). For travel that appears to have taken more than a day, we record any altitudes we have particular information about, and otherwise estimate records on roughly an annual basis, including a record for the peak altitude (and possibly more than a year apart to allow for the final record to have the maximum altitude). This is ad hoc, but for the current purpose, converting what we have to a more consistent standard does not seem worth it. Instead, we consider the effects of these choices when measuring discontinuities. They do not appear to matter, except to make modest differences to the size of the pellet discontinuity, discussed below (section, ‘Discontinuity measurement’). 
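To make the 'years of progress at previous rates' figures in the table above concrete, here is a minimal sketch of how such a discontinuity size might be computed, assuming a linear fit to the prior records (this page's analysis chooses linear or exponential fits case by case, so treat this as schematic rather than the exact procedure):

```python
# Schematic discontinuity calculation: how many years of progress at the
# previous rate does a new record represent? Assumes a linear prior trend;
# the actual analysis chooses linear or exponential fits case by case.
def discontinuity_years(prior_records, new_year, new_value):
    """prior_records: list of (year, value) pairs, oldest first."""
    (y0, v0), (y1, v1) = prior_records[0], prior_records[-1]
    rate = (v1 - v0) / (y1 - y0)            # progress per year on the old trend
    expected = v1 + rate * (new_year - y1)  # extrapolated value at new_year
    return (new_value - expected) / rate    # excess progress, in years

# Toy numbers only, not this page's data: a record of 60 in 2012, against a
# trend of +1 per year that reached 20 in 2010, is 38 years of progress at once.
print(discontinuity_years([(2000, 10.0), (2010, 20.0)], 2012, 60.0))
```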
\nData\nWe collected data from various sources, and added them to this spreadsheet, tab ‘Manned and unmanned’. This data is shown in Figures 1-3 below. We have not thoroughly verified this data. \nRecord altitudes might plausibly be reached by a diversity of objects for a diversity of purposes, so collecting such data is especially dependent on imagination for the landscape of these.5 For this reason, this data is especially likely to be incomplete. \nWe also intentionally left the data less complete than usual in places where completeness seemed costly and unlikely to affect conclusions about discontinuities. The following section discusses our collection of data for different periods in history and details of our reasoning about it.\nDetailed overview of data\nHere we describe the history of progress in altitude reached and the nature of the data we collected during different times. See the spreadsheet for all uncited sources.\nChimps throw rocks, so we infer that humans have probably also done this from the beginning.6 A good rock throw can apparently reach around 25m. Between then and the late 1700s, humanity developed archery, sky lanterns, kites, gunpowder, other projectile weapons, rockets, and primitive wings7, among probably other things. However records before the late 1700s are hard or impossible to find, so we do not begin the search for discontinuities until a slew of hot air balloon records beginning in 1783s. We collected some earlier records in order to have a rough trend to compare later advances to, but we are likely missing many entries, and the entries we have are quite uncertain. (It is more important to have relatively complete data for measuring discontinuities than it is for estimating a trend.) \nThe highest altitude probably attained before the late 1700s that we know of was reached by debris in a large gunpowder building explosion in 1280, which we estimate traveled around 2.5km into the air. Whether to treat this as a ‘man-made technology’ is ambiguous, given that it was not intentional, but we choose to ignore intention.8\nKites may also have traveled quite high, quite early. It appears that they have been around for at least two thousand years.9 and were used in ancient warfare and even occasionally for lifting people. We find it hard to rule out the possibility that early kites could travel one or two thousand meters into the air: modern kites frequently fly at 2km altitudes, silk has been available for thousands of years, and modern silk at least appears to be about as strong as nylon.10 Thus if we are wrong about the gunpowder factory explosion, it is still plausible that two thousand meter altitudes were achieved by kites. \nOver a period of three and a half months from August 1783, manned hot air balloons were invented,11 and taken from an initial maximum altitude of 24m up to a maximum altitude of 2700m. While this was important progress in manned travel12, most of these hot air balloons were still lower than the gunpowder explosion and perhaps kites. Nonetheless, there are enough records from around this time, that we begin our search for discontinuities here.\nThe first time that humanity sent any object clearly higher than ancient kites or explosion debris was December 1783, when the first hydrogen balloon flight ascended to 2,700m. This was not much more than we (very roughly) estimate that those earlier objects traveled. 
However the hot air balloon trend continued its steep incline, and in 1784 a balloon reached 4000m, which is over a thousand years of discontinuity given our estimates (if we estimated the rate of progress as an order of magnitude higher or lower, the discontinuity would remain large, so the uncertainties involved are not critical.) \nThe next hot air balloon that we have records for ascended nearly twice as high—7280m—in 1803, representing another over a thousand years of discontinuity. We did not thoroughly search for records between these times. However if that progress actually accrued incrementally over the twenty years between these records, then still every year would have seen an extra 85 years of progress at the previous rate, so there must have been at least one year that saw at least that much progress, and it seems likely that in fact at least one year saw over a hundred years of progress. Thus there was very likely a large discontinuity at that time, regardless of the trend between 1784 and 1803.\nWe collected all entries from Wikipedia’s Flight altitude record page, which claims to cover ‘highest aeronautical flights conducted in the atmosphere, set since the age of ballooning’.13 It is not entirely clear to us what ‘aeronautical flights’ covers, but seemingly at least hot air balloons and planes. The list includes some unmanned balloons, but it isn’t clear whether they are claiming to cover all of them. They also include two cannon projectiles, but not 38 cm SK L/45 “Max”, which appears to be a record relative to anything they have, and cannon projectiles are probably not ‘flights’, so we think they are not claiming to have exhaustively covered those. Thus between the late 1700s, and the first flights beyond the atmosphere, the main things this data seems likely to be missing is military projectiles, and any other non-flight atmospheric-level objects. \nWe searched separately for military projectiles during this period. Wikipedia claims, without citation, that the 1918 Paris gun represented the greatest height reached by a human-made projectile until the first successful V-2 flight test in October 194214, which matches what we could find. We searched for military records prior to the Paris gun, and found only one other, “Max” mentioned above, a 38cm German naval gun from 1914. \nWe expect there are no much higher military records we are missing during this time but that we could easily have missed some similar ones. As shown in Figure 1, the trend of military records we are aware of is fairly linear, and that line is substantially below the balloon record trend until around 1900. So it would be surprising if there were earlier military records that beat balloon records, and less surprising if we were missing something between 1900 and 1918. It seems unlikely however that we could have missed enough data that the Paris Gun did not represent at least a moderate discontinuity.15\nWe could not think of other types of objects that might have gone higher than aeronautical flights and military projectiles between the record 1803 balloon and V-2 rockets reaching ‘the edge of space’ from 1942. Thus the data in this period seems likely to be relatively complete, or primarily missing less important military projectiles.\nThe German V-2 rockets are considered the first man-made objects to travel to space (though the modern definition of space is higher)16 so they are presumably the highest thing at that time (1942). 
They are also considered the first projectile record since the Paris gun, supporting this. Wikipedia has an extensive list of V-2 test launches and their outcomes, from which we infer than three of them represent altitude records.17\nThe two gun records we know of were both German WWI guns, and the V2 rockets that followed were German WWII weapons, apparently developed in an attempt to replace the Paris Gun when it was banned under the Versailles Treaty.18 So all altitude records between the balloons of the 1800s and the space rockets of the 50s appear to be German military efforts.\nBetween the last record V-2 rocket in 1946 and 1957, we found a series of rockets that traveled to increasing altitudes. We are not confident that there were no other record rocket altitudes in this time. However the rockets we know of appear to have been important ones, so it seems unlikely that other rockets at the time were radically more powerful, and there does not appear to have been surprising progress over that entire period considered together, so there could not have been much surprising progress in any particular year of it, unless the final record should be substantially higher than we think. We are quite unsure about the final record (the R-7 Semyorka), however it doesn’t seem as though it could have gone higher than 3000km, which would only add a further four years of surprising progress to be distributed in the period. \nIn October 1957, at least one centimeter-sized pellet was apparently launched into solar orbit, using shaped charges and a rocket. As far as we know, this was the first time an object escaped Earth’s gravity to orbit the sun.19 This episode does not appear to be mentioned often, but we haven’t found anyone disputing its being the first time a man-made object entered solar orbit, or offering an alternate object. \nBecause the pellets launched were just pellets, with no sophisticated monitoring equipment, it is harder to know what orbit they ended up in, and therefore exactly how long it took to reach their furthest distance from Earth, or what it was. Based on their speed and direction, we estimate they should still have been moving at around 10km/s as they escaped Earth’s gravity. Within a day we estimate that they should have traveled more than six hundred times further away than anything earlier that we know of. Then conservatively they should have reached the other side of the sun, at a distance from it comparable to that of Earth, in around 1.5 years. However this is all quite uncertain.\nAt around this time, reaching maximum altitudes goes from taking on the order of days to on the order of years. As discussed at the start of section ‘Altitude of objects attained by manmade means’ above, from here on we record new altitudes every year or so for objects traveling at increasing altitudes over more than a year. \nIn the years between 1959 and 1973, various objects entered heliocentric orbit.20 It is possible that some of them reached greater altitudes than the pellets, via being in different orbits around the sun. Calculating records here is difficult, because reaching maximal distance from Earth takes years,21 and how far an object is from Earth at any time depends on how their (eccentric) orbits relate to Earth’s, in 3D space. Often, the relevant information isn’t available. 
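As a quick check on the pellet estimate above, the distance covered in the first day at the estimated post-escape speed can be reproduced with a one-line calculation (a rough sketch; it ignores continued deceleration, so it is an approximate figure rather than a precise one):

```python
# Back-of-the-envelope distance for the 1957 pellets after one day,
# using the ~10 km/s post-escape speed estimated above.
speed_km_per_s = 10
seconds_per_day = 24 * 60 * 60
distance_km = speed_km_per_s * seconds_per_day
print(distance_km)  # 864,000 km, i.e. the 864,000,000 m in the summary table
```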
\nAmong artificial objects in heliocentric orbit listed by Wikipedia,22 none are listed as having orbits where they travel more than 1.6 times further from the Sun than Earth does,23 though many are missing such data. This is probably less far than the pellets, though further away than our conservative estimate for the pellets. For an object to reach this maximal distance from the Earth, it would need to be at this furthest part of its orbit, while being on the opposite side of the Sun from Earth, on the same plane as Earth. \nGiven all of this, it seems implausible that anything went ten times as far from the Sun as Earth by 1960, but even this would not have represented a discontinuity of even ten years. Given this and the difficulty of calculating records, we haven’t investigated this period of solar orbiters thoroughly. \nIn 1973 Pioneer 10 became the first of five space probes to begin a journey outside the solar system. In 1998 it was overtaken by Voyager 1. We know that no other probes were the furthest object during that time, however we have not checked whether various other objects exiting the solar system (largely stages of multi-stage rockets that launched the aforementioned probes) might have gone further. \nFigure 1 shows all of the altitude data we collected, including entries that turned out not to be records. Figures 2 and 3 show the best current altitude record over time. \nFigure 1: Post-1750 altitudes of various objects, including many non-records. Whether we collected data for non-records is inconsistent, so this is not a complete picture of progress within object types. It should however contain most aircraft and balloon records since 1783. See image in detail here.\nFigure 2: Record altitudes known to us since 1750\nFigure 3: Record altitudes known to us since 6000 BC (early ones estimated imprecisely)\nDiscontinuity measurement\nFor measuring discontinuities, we treat the past trend at a given point as linear or exponential and as starting from earlier or later dates depending on what fits well at that time.24 Relative to these previous rates, this altitude trend contains six discontinuities of greater than ten years, with four of them being greater than 100 years:25\nYear | Height (m) | Discontinuity (years) | Entity\n1784 | 4000 | 1032 | Balloon\n1803 | 7280 | 1693 | Balloon\n1918 | 42,300 | 227 | Paris gun\n1942 | 85,000 | 120 | V-2 Rocket\n1944 | 174,600 | 11 | V-2 Rocket\n1957 | 864,000,000 | 35 | Pellets (after one day)\nThe 1957 pellets would be a 66 year discontinuity if we counted all of their ultimate estimated altitude as one jump on the day after their launch, so exactly how one decides to treat altitudes that grow over years is unlikely to prevent these pellets representing a discontinuity of between ten and a hundred years.\nIn addition to the size of these discontinuities in years, we have tabulated a number of other potentially relevant metrics here.26\n\nPrimary authors: Katja Grace, Rick Korzekwa\nThanks to Stephen Jordan and others for suggesting a potential discontinuity in altitude records.\nNotes", "url": "https://aiimpacts.org/discontinuity-in-altitude-records/", "title": "Historic trends in altitude", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-02-10T03:47:39+00:00", "paged_url": "https://aiimpacts.org/feed?paged=14", "authors": ["Katja Grace"], "id": "4f786d998d81576796eb9300bfdd624a", "summary": []} {"text": "2015 FLOPS prices\n\nIn April 2015, the lowest GFLOPS prices we could find were approximately $3/GFLOPS. 
However recent records of hardware performance from 2015 and earlier imply substantially lower prices, suggesting that something confusing has happened with these sources of data. We have not resolved this.\nRecent data\nWe have not finished exploring the apparent discrepancies between 2015 prices for performance and current records of 2015 prices for performance. However in the data described in our 2017 assessment of recent price trends (key figure here), prices appear to have been below $1 since 2008.1 The measurements are not entirely comparable, but we would not expect the differences to produce such a large price difference.\n2015 research\nThe rest of this page is largely taken from our page written in 2015.\nIn April 2015, the lowest recorded GFLOPS prices we knew of were approximately $3/GFLOPS, for various CPU and GPU combinations. Amortized over three years, this was $1.1E-13/FLOPShour. Prices in the $3-5/GFLOPS range seemed to be common, for GPU and CPU combinations and sometimes for supercomputers. Using CPUs, prices were at least $11/GFLOPS, and computing as a service cost more like $160/GFLOPS.\nBackground\nWe have written about long term trends in the costs of computing hardware. We were interested in evaluating the current prices more thoroughly, both to validate the long term trend data, and because current hardware prices are particularly important to know about.\nDetails\nWe separately investigated CPUs, GPUs, computing as a service, and supercomputers. In all categories, we collected some contemporary instances which we judged heuristically as especially likely to be cost-effective. We did not find any definitive source on the most cost-effective in any category, or in general, so our examples are probably not the very cheapest.  Nevertheless, these figures give a crude sense for the cost of computation in the contemporary market. Our full dataset of CPUs, GPUs and supercomputers is here, and contains data on twenty two machines. Our data on computing as a service is all included in this page.\nIncluded costs\nFor CPUs and GPUs, we list the price of the CPU and/or GPU (GPUs were always used with a CPU, so we include the cost for both), but not other computer components. We compared prices between one complete rack server and the set of four processors inside it, and found the complete server was around 36% more expensive ($30,000 vs. $22,000). We expect this is representative at this scale, but diminishes with scale.\nFor computing services, we list the cheapest price for renting the instance for a long period, with no additional features. We do not include spot prices.\nFor supercomputers, we list costs cited, which don’t tend to come with elaboration. We expect that they only include upfront costs, and that most of the costs are for hardware.\nWe have not included the costs of energy or other ongoing expenses in any prices. Non-energy costs are hard to find, and we suspect a relatively small and consistent fraction of costs. Energy costs appear to be around 10% of hardware costs. For instance, the Intel Xeon E5-2699 uses 527.8 watts and costs $5,190.2 Over three years, with $0.05/kWh this is $694, or 13% of the hardware cost. Titan also uses 13% of its hardware costs in energy over three years.3 We might add these costs later for a more precise estimate.\nFLOPS measurements\nTo our knowledge we report only empirical performance figures from benchmark tests, rather than theoretical maximums. 
We sometimes use figures for LINPACK and sometimes for DGEMM benchmarks, depending on which are available. Geekbench in particular does not use the common LINPACK, but LINPACK relies heavily on DGEMM, suggesting DGEMM is fairly comparable. We guess they differ by around 10%.4\nPrices\nCentral processing units (CPUs)\nWe found prices and performance data for five contemporary CPUs, including three different instances of one of them. They ranged from $11-354/GFLOPS with most prices below $100/GFLOPS.5 The cheapest of these CPUs still looks several times more expensive than some GPUs and supercomputers, so we did not investigate these numbers in great depth, or search far for cheaper CPUs.\nGraphics processing units (GPUs)\nWe found performance data for six recent combinations of CPUs and GPUs (with much overlap between CPUs and GPUs between combinations. They ranged from $3.22/GFLOPS to $4.17/GFLOPS.\nNote that graphics cards are typically significantly restricted in the kinds of applications they can run efficiently; this performance is achieved for highly regular computations that can be carried out in parallel throughout a GPU (of the sort that are required for rendering scenes, but which have also proved useful in scientific computing).\nComputing as service\nAnother way to purchase FLOPS is via virtual computers.\nAmazon Elastic Cloud Compute (EC2) is a major seller of virtual computing. Based on their current pricing, renting a c4.8xlarge instance costs about $1.17 / hour.6 This is their largest instance optimized for computing performance (rather than e.g. memory). A c4.8xlarge instance delivers around 97.5 GFLOPS.7 This implies that a GFLOPShour costs $0.012. If we suppose this is an alternative to buying computer hardware, then the relevant time horizon is about three years. Over three years, renting this hardware will cost $316/GFLOPS, i.e. around two orders of magnitude more than buying GFLOPS in the form of GPUs.\nOther sources of virtual computing seem to be similarly priced. An informal comparison of computing providers suggests that on a set of “real-world java benchmarks” three providers are quite closely comparable, with all between just above Amazon’s price and just under half Amazon’s price for completing the benchmarks, across different instance sizes. This analysis also suggests Amazon is a relatively costly provider, and suggests a cheap price for virtual computing is closer to $0.006/GFLOPShour or $160/GFLOPS over three years.\nEven with this optimistic estimate, virtual computing appears to cost something like fifty times more than GPUs. This high price is presumably partly because there are non-hardware costs which we have not accounted for in the prices of buying hardware, but are naturally included in the cost of renting it. However it is unlikely that these additional costs make up a factor of fifty.\nSupercomputing\nThe Titan supercomputer purportedly cost about $97M to produce, or about $4,000 dollars per hour amortized over 3 years. It performs 17,590,000 GFLOPS which comes to $5.51/GFLOPS. This makes it around the same price as the cheapest GPUs. It is made of a combination of GPUs and CPUs, so this similarity is unsurprising.\nThe other six built supercomputers we looked at were more expensive, ranging up to $95/GFLOPS. Another cost-effective supercomputer, the L-CSC, was being built at the time it was most recently reported on, and while it should be completed now we could not find more data on it. 
Extrapolating from the figures before it was finished, when completed it should cost $2.39/GFLOPS, and thus be the cheapest source of FLOPS we are aware of.\nSummary\nThe lowest recorded GFLOPS prices we know of are approximately $3/GFLOPS, for various CPU and GPU combinations. Amortized over three years, this is $1.1E-13/FLOPShour. Prices in the $3-5/GFLOPS range seem to be common, for GPU and CPU combinations and sometimes for supercomputers. Using CPUs, prices are at least $11/GFLOPS, and computing as a service costs more like $160/GFLOPS.", "url": "https://aiimpacts.org/2015-flops-prices/", "title": "2015 FLOPS prices", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2018-01-19T03:43:14+00:00", "paged_url": "https://aiimpacts.org/feed?paged=14", "authors": ["Katja Grace"], "id": "c6e488f83a5ddfb0cb4f3642f6104dd7", "summary": []} {"text": "Effect of marginal hardware on artificial general intelligence\n\nWe do not know how AGI will scale with marginal hardware. Several sources of evidence may shed light on this question.\nDetails\nBackground\nSuppose that at some point in the future, general artificial intelligence can be run on some quantity of hardware, h, producing some measurable performance p. We would like to know how running approximately the same algorithms using additional hardware (increasing h) affects p.\nThis is important, because if performance scales superlinearly, then at the time we can run a human-level intelligence on hardware that costs as much as a human, we can run an intelligence that performs more than twice as well as a human on twice as much hardware, and so already have superhuman efficiency at converting hardware into performance. And perhaps go on, for instance producing an entity which is 64 times as costly as a human yet almost incomparably better at thinking.\nThis might mean that the first ‘human-level’ effectiveness at converting dollars of hardware into performance would be earlier, when perhaps a mass of hardware costing one thousand times as much as a human can be used to produce something which performs a thousand times as well as a human. However it might be that the first time we have software to produce something roughly human-like, it does so at human-level with much smaller amounts of hardware. In which case, immediately scaling up the hardware might produce a substantially superhuman intelligence. This is one reason some people expect fast progress from sub-human to superhuman intelligence.\nWhether gains are superexponential or sublinear depends on the metrics of performance. For instance, imagine hypothetically that doubling hardware generally produces a twenty point IQ increase (logarithmic gains in IQ), but twenty IQ points above the smartest human is enough to conquer areas of science that thousands of scientists have puzzled over ineffectually forever (much better than exponential gains in some metric of discovery or economic value). So the question must be how marginal hardware affects metrics that matter to us.\nConsiderations\nWe have not investigated this question, but the following sources of evidence seem promising.\nEvidence from existing algorithms\nWe do not yet have any kind of artificial general intelligence. However we can look at how performance scales with hardware in other kinds of software applications, especially narrow AI.\nEvidence from human brain scaling\nAmong humans, brain size and intelligence are related. 
The exact relationship has been studied, but we have not reviewed the literature.\nEvidence from between-animal brain scaling\nBetween animals, brain size relative to body size is related to intelligence. The exact relationship has probably been studied, but we have not reviewed the literature.\nEvidence from new types of computation being possible with additional hardware\nSome gains with hardware could come not from better performance on a particular task, but from being able to perform new tasks that were previously infeasible with so little hardware. We do not know how big an issue this is, but examining past experience with increasing hardware availability (e.g. by talking to researchers) seems promising.", "url": "https://aiimpacts.org/effect-of-marginal-hardware-on-artificial-general-intelligence/", "title": "Effect of marginal hardware on artificial general intelligence", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2017-12-29T03:35:17+00:00", "paged_url": "https://aiimpacts.org/feed?paged=14", "authors": ["Katja Grace"], "id": "70f958027f37acad78b123f5be799a59", "summary": []} {"text": "Human-level hardware timeline\n\nWe estimate that ‘human-level hardware’— hardware able to perform as many computations per second as a human brain, at a similar cost to a human brain—has a 30% chance of having already occurred, a 45% chance of occurring by 2040, and a 25% chance of occurring later. We are not confident about these estimates.\nSupport\nBackground\nSay that computing hardware has reached ‘human-level’ performance when machines can perform as many computations as a human brain performs (under some natural interpretation of the brain as performing computations), at no greater cost than that of running a human brain.\nWe are interested in when hardware reaches this level of performance, because then if we have software that is also at least ‘human-level’,1 we will have ‘human-level’ AI overall: AI that can perform the tasks that a human brain performs, as efficiently as a human brain.\nWe may have human-level AI before having hardware and software that are both at least as good as the human brain—intuitively, if one of them is better than that, then the other may be worse. So the implications of human-level hardware alone are not straightforward. (We may also have disruptive change before we have human-level AI—the event of human-level AI is an upper bound for when some large changes in society can be expected to occur.)\nIt is unclear to us how important the contributions from hardware and software progress are, respectively, to overall AI progress. If hardware progress is much more important than software progress, then human-level hardware should approximately co-occur with human-level AI.
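The next section combines these quantities as T = log_R(H/C). As a concrete illustration, here is a minimal sketch of the same end-point arithmetic, using the ranges given below and, as the page does, the slow-end growth factor of 1.16 for both bounds (the fuller analysis works with whole distributions rather than end points):

```python
import math

# End-point version of the timeline calculation below (a sketch, not the
# original analysis, which works with full distributions).
H_low, H_high = 0.4e10, 13e10  # human-brain-equivalent FLOPS per dollar
C_low, C_high = 0.3e9, 30e9    # current hardware FLOPS per dollar
R = 1.16                       # slow-end annual growth factor in FLOPS/$

def years_until(H, C, R):
    """Years of growth at factor R for C FLOPS/$ to reach H FLOPS/$."""
    return math.log(H / C) / math.log(R)

earliest = years_until(H_low, C_high, R)  # low brain estimate, best current hardware
latest = years_until(H_high, C_low, R)    # high brain estimate, worst current hardware
print(f"T is roughly {earliest:.0f} to {latest:.0f} years")  # about -14 to 41
```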
Calculation\nTo roughly forecast when computing hardware will reach ‘human-level’, we can combine estimates of how much computing the human brain performs per dollar, current hardware performance per dollar, and the rate of improvement in hardware performance.\nLet:\nT = time until human-level hardware performance per dollar.\nH = human-level hardware performance per dollar = 0.4-13 x 10^10 FLOPS/$.2\nC = current hardware performance per dollar = 0.3-30 x 10^9 FLOPS/$.3\nR = 1 + growth rate of hardware performance per dollar = 1.16-1.78.4\nThen we have:\nT = log_R(H/C)\n= log_1.16(0.4 x 10^10/(30 x 10^9)) to log_1.16(13 x 10^10/(0.3 x 10^9))\n= -14 to 41 years\n \nThese are rough calculations, and the breadth of the intervals doesn’t necessarily mean a lot—the intervals were non-specific to begin with, and then we combined several of them.\nIf we do something similar (shown here), using more realistic distributions for each variable and calculating using the entire distributions rather than end points, we get -14 to 22 years using the narrower estimates for human-level hardware that we used above, or -31 to 99 years for a very wide set of estimates for human-level hardware. The chance that human-level hardware has already occurred is around 20-40%, according to these calculations.\nBased on these calculations, we estimate a 30% chance we are already past human-level hardware (at human cost), a 45% chance it occurs by 2040, and a 25% chance it occurs later.5\nImplications\nThese figures suggest that the period when we most expect human-level hardware has already begun, and we are a substantial part of the way through it. In the case that hardware progress matters a lot more than software progress, this means that we should expect to see human-level AI in the next several decades, or possibly in the past. This is some evidence against hardware progress being so important, but still overall makes human-level AI likely to be sooner than one might have thought without the evidence considered here.", "url": "https://aiimpacts.org/human-level-hardware-timeline/", "title": "Human-level hardware timeline", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2017-12-23T07:59:19+00:00", "paged_url": "https://aiimpacts.org/feed?paged=14", "authors": ["Katja Grace"], "id": "eba032b1f66f31fcbd6c0d71b7e3f93f", "summary": []} {"text": "Chance date bias\n\nThere is modest evidence that people consistently forecast events later when asked the probability that the event occurs by a certain year, rather than the year in which a certain probability of the event will have accrued.\nDetails\nIn the 2016 ESPAI and its preparation, AI experts and Mechanical Turk workers both consistently gave later probability distributions for events relating to AI when asked to give the probability that the event would occur by a given year, rather than the year by which there was a certain probability. See more on the ESPAI page.\nWe do not know which framing produces more reliable answers. We have not seen this bias elsewhere.\nThe following figure shows an example for some key figures: the distributions with stars are consistently a little flatter than those with circles.\nFigure 1. 
Median answers to questions about probabilities by dates (‘fixed year’) and dates for probabilities (‘fixed probability’), for different occupations, all current occupations, and all tasks (HLMI).", "url": "https://aiimpacts.org/chance-date-bias/", "title": "Chance date bias", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2017-12-12T07:59:02+00:00", "paged_url": "https://aiimpacts.org/feed?paged=14", "authors": ["Katja Grace"], "id": "27f7cb943fa462df82fb189238a94f39", "summary": []} {"text": "GoCAS talk on AI Impacts findings\n\nBy Katja Grace, 27 November 2017\nHere is a video summary of some highlights from AI Impacts research over the past years, from the GoCAS Existential Risk workshop in Göteborg in September. Thanks to the folks there for recording it.\n", "url": "https://aiimpacts.org/gocas-talk-on-ai-impacts-findings/", "title": "GoCAS talk on AI Impacts findings", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2017-11-28T06:04:19+00:00", "paged_url": "https://aiimpacts.org/feed?paged=15", "authors": ["Katja Grace"], "id": "5bd768117165be9e3a8b5ea0dea7557c", "summary": []} {"text": "Price performance Moore’s Law seems slow\n\nBy Katja Grace, 26 November 2017\nWhen people make predictions about AI, they often assume that computing hardware will carry on getting cheaper for the foreseeable future, at about the same rate that it usually does. Since this is such a common premise, and whether reality has yet proved it false is checkable, it seems good to check sometimes. So we did.\nLooking up the price and performance of some hardware turned out to be a real mess, with conflicting numbers everywhere and the resolution of each error or confusion mostly just leading to several more errors and confusions.\nI suppose the way people usually make meaningful figures depicting computer performance changing over time is that they are doing it over long enough periods of time that even if each point is only accurate to within three orders of magnitude, it is fine because the whole trend is traversing ten or fifteen orders of magnitude. But since I wanted to know what was happening in the last few years, this wouldn’t do—half an order of magnitude of progress could be entirely lost in that much noise.\nIn the end, the two best looking sources of data we could find are the theoretical performance of GPUs (via Wikipedia), and Passmark‘s collection of performance records for their own benchmark. Neither is perfect, but both make it look like prices for computing are falling substantially slower than they were. Over the last couple of decades it had been taking about four years for computing to get ten times cheaper, and now (on these measures) it’s taking more like twelve years. Which could in principle be to do with these measures being different from usual, but I think probably not.\nThere are quite a few confusions still to resolve here. For instance, in spite of showing slower progress, these numbers look a lot cheaper than what would have been predicted by extrapolating past trends (or sometimes more expensive). Which might be because we are comparing performance using different metrics, and converting between them badly. Different records of past trends seem to disagree with one another too, which is perhaps a hint. Or it could be that there was faster growth somewhere in between that we didn’t see. 
Or we might not have caught all of the miscellaneous errors in this cursed investigation.\nBut before we get too bogged down trying to work these things out, I just wanted to say that price performance Moore’s Law tentatively looks slower than usual.\nSee full investigation at: Recent Trend in the Cost of Computing", "url": "https://aiimpacts.org/price-performance-moores-law-seems-slow/", "title": "Price performance Moore’s Law seems slow", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2017-11-27T07:58:03+00:00", "paged_url": "https://aiimpacts.org/feed?paged=15", "authors": ["Katja Grace"], "id": "153f16792998a1eafece93f2acf03d86", "summary": []} {"text": "2017 trend in the cost of computing\n\nThe cheapest hardware prices (for single precision FLOPS/$) appear to be falling by around an order of magnitude every 10-16 years. This rate is slower than the trend of FLOPS/$ observed over the past quarter century, which was an order of magnitude every 4 years. There is no particular sign of slowing between 2011 and 2017.\nSupport\nBackground\nComputing power available per dollar has increased fairly evenly by a factor of ten roughly every four years in the last quarter of a century (a phenomenon sometimes called ‘price-performance Moore’s Law‘). Because this trend is important and regular, it is useful in predictions. For instance, it is often used to determine when the hardware for an AI mind might become cheap. This means that a primary way such predictions might err is if this trend in computing prices were to leave its long run trajectory. This must presumably happen eventually, and has purportedly happened with other exponential trends in information technology recently.1\nThis page outlines our assessment of whether the long run trend is on track very recently, as of late 2017. This differs from assessing the long run trend (as we do here) in that it requires recent and relatively precise data. Data that may be off by one order of magnitude is still useful when assessing a long run trend that grows by many orders of magnitude. But if we are judging whether the last five years of that trend are on track, it is important to have more accurate figures.\nSources of evidence\nWe sought public data on computing performance, initial price, and date of release for different pieces of computing hardware. We tried to cover different types of computing hardware, and to prioritize finding large, consistent datasets using comparable metrics, rather than one-off measurements. We searched for computing performance measured using the Linpack benchmark, or something similar.\nWe ran into many difficulties finding consistently measured performance in FLOPS for different machines, as well as prices for those same machines. What data we could find used a variety of different benchmarks. Sometimes performance was reported as ‘FLOPS’ without explanation. Twice the ‘same’ benchmarks turned out to give substantially different answers at different times, at least for some machines, apparently due to the benchmarks being updated. Performance figures cited often refer to ‘theoretical peak performance’, which is calculated from the computer’s specifications, rather than measured, and is higher than actual performance.\nPrices are also complicated, because each machine can have many sellers, and each price fluctuates over time. We tried to use the release price, the manufacturer’s ‘recommended customer price’, or similar where possible. 
However, many machines don’t seem to have readily available release prices.\nThese difficulties led to many errors and confusions, such that progress required running calculations, getting unbelievable results, and searching for an error that could have made them unbelievable. This process is likely to leave remaining errors at the end, and those errors are likely to be biased toward giving results that we find believable. We do not know of a good remedy for this, aside from welcoming further error-checking, and giving this warning.\nEvidence\nGPUs appear to be substantially cheaper than CPUs, cloud computing (including TPUs), or supercomputers.2 Since GPUs alone are at the frontier of price performance, we focus on them. We have two useful datasets: one of theoretical peak performance, gathered from Wikipedia, and one of empirical performance, from Passmark.\nGPU theoretical peak performance\nWe collected data from several Wikipedia pages, supplemented with other sources for some dates and prices.3 We think all of the performance numbers are theoretical peak performance, generally calculated from specifications given by the developer, but we have not checked Wikipedia’s sources or calculations thoroughly. Our impression is that the prices given are recommended prices at launch, by the developers of the hardware, though again we have only checked a few of them.\nWe look at Nvidia and AMD GPUs and Xeon Phi processors here because they are the machines for which we could find data on Wikipedia easily. However, Nvidia and AMD are the leading producers of GPUs, so this should cover the popular machines. We excluded many machines because they did not have prices listed.\nFigure 1 shows performance (single precision) over time for processors for which we could find all of the requisite data.\nThe recent rate of progress in this figure looks like somewhere between half an order of magnitude in the past eight years and an order of magnitude in the past ten, for an order of magnitude about every 10-16 years. We don’t think the figure shows particular slowing down—the most cost-effective hardware has not improved in almost a year, but that is usual in the rest of the figure.\nFigure 1\nWe also collected double precision performance figures for these machines, but the machines do not appear to be optimized for double precision performance,4 so we focus on single precision.\nPeak theoretical performance is generally higher than actual performance, but our impression is that this should be by a roughly constant factor across time, so not make a difference to the trend.\nGPU Passmark value\nPassmark maintains a collection of benchmark results online, for both CPUs and GPUs. They also collect prices, and calculate price for performance (though it was not clear to us on brief inspection where their prices come from). Their performance measure is from their own benchmark, which we do not know a lot about. This makes their absolute prices hard to compare to others using more common measures, but the trend in progress should be more comparable.\nWe used archive.org to collect old versions of Passmark’s page of the most cost-effective GPUs available, to get a history of price for passmark performance. The prices are from the time of the archive, not necessarily from when the hardware was new. That is, if we collected all of the results on the page on January 1, 2013, it might contain hardware that was built in 2010 and has maybe come down in price due to being old. 
You might wonder whether this means we are just getting a lot of really cheap old hardware with hardly any performance, which might be bad in other ways and so not represent a realistic price of hardware. This is possible, however given that people show interest in this (for instance, Passmark keep these records) it would be surprising to us if this metric mostly caught useless hardware.\nFigure 2: Top GPU passmark performance per dollar scores over time.\n \nFigure 3: The same data in Figure 1, showing progress for different percentiles.\n \nFigure 4: Figure 3, with a log y axis.\nWe are broadly interested in the cheapest hardware available, but we probably don’t want to look at the very cheapest in data like this, because it seems likely to be due to error or other meaningless exploitation of the particular metric.5 The 95th percentile machines (out of the top 50) appear to be relatively stable, so are probably close to the cheapest hardware without catching too many outliers. For this reason, we take them as a proxy for the cheapest hardware.\nFigure 4 shows the 95th percentile fits an exponential trendline quite well, with a doubling time of 3.7 years, for an order of magnitude every 12 years. This has been fairly consistent, and shows no sign of slowing by early 2017. This supports the 10-16 year time we estimated from the Wikipedia theoretical performance above.\n \nOther sources we investigated, but did not find relevant\n\nThe Wikipedia page on FLOPS contains a history of GFLOPS over time. The recent datapoints appear to overlap with the theoretical performance figures we have already.\nGoogle has developed Tensor Processing Units (TPUs) that specialize in computation for machine learning. Based on information from Google, we estimate that they perform around 1.05 GFLOPS/$.\nIn 2015, cloud computing appeared to be around a hundred times more expensive than other forms of computing.6 Since then the price appears to have roughly halved.7 So cloud computing is not a competitive way to buy FLOPS all else equal, and the price of FLOPS may be a small influence on the cloud-computing price trend, making the trend less relevant to this investigation.\nTop supercomputers perform at around $3/GFLOPS, so they do not appear to be on the forefront of cheap performance. See Price performance trend in top supercomputers for more details.\nGeekbench has empirical performance numbers for many systems, but their latest version does not seem to have anything for GPUs. We looked at a small number of popular CPUs on Geekbench from the past five years, and found the cheapest to be around $0.71/GFLOPS. However there appear to be 5x disparities between different versions of Geekbench, which makes it less useful for fine-grained estimates.\n\nConclusions\nWe have seen that the theoretical peak single-precision performance of GPUs is improving at about an order of magnitude every 10-16 years. And that the Passmark performance/$ trend is improving by an order of magnitude every 12 years. 
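As an illustration of the arithmetic behind these rates, here is a minimal sketch (in Python, with invented data points rather than our dataset) of how a log-linear fit turns price-performance observations into a doubling time and a time per order of magnitude:

```python
# Minimal sketch: fit an exponential (log-linear) trend to price-performance
# data and report the implied doubling time and years per order of magnitude.
# The data points below are invented for illustration; they are not our data.
import numpy as np

years = np.array([2011.0, 2012.5, 2014.0, 2015.5, 2017.0])
perf_per_dollar = np.array([1.0e8, 1.3e8, 1.8e8, 2.4e8, 3.2e8])  # arbitrary units

# Least-squares fit of log10(performance per dollar) against calendar year.
slope, intercept = np.polyfit(years, np.log10(perf_per_dollar), 1)

years_per_order_of_magnitude = 1.0 / slope      # time for a 10x improvement
doubling_time = np.log10(2.0) / slope           # time for a 2x improvement

print(f"Doubling time: {doubling_time:.1f} years")
print(f"Years per order of magnitude: {years_per_order_of_magnitude:.1f}")
# An order of magnitude is log2(10), about 3.32 doublings, so a doubling time
# of 3.7 years corresponds to roughly 12 years per factor of ten, as above.
```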
These are slower than the long run price-performance trends of an order of magnitude every eight years (75 year trend) or four years (25 year trend).\nThe longer run trends are based on a slightly different set of measures, which might explain a difference in rates of progress.\nWithin these datasets the pace of progress does not appear to be slower in recent years relative to earlier ones.", "url": "https://aiimpacts.org/recent-trend-in-the-cost-of-computing/", "title": "2017 trend in the cost of computing", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2017-11-12T07:59:31+00:00", "paged_url": "https://aiimpacts.org/feed?paged=15", "authors": ["Katja Grace"], "id": "6f4f13cee7520dd04f6a1cfdb174d9e7", "summary": []} {"text": "Price-performance trend in top supercomputers\n\nA top supercomputer can provide a GFLOPS for around $3, as of 2017.\nThe price of performance in top supercomputers continues to fall, as of 2016.\nDetails\nTOP500.org maintains a list of top supercomputers and their performance on the Linpack benchmark. The figure below is based on empirical performance figures (‘Rmax’) from Top500 and price figures collected from a variety of less credible sources, for nine of the ten highest performing supercomputers (we couldn’t find a price for the tenth). Our data and sources are here.\nSunway TaihuLight performs the cheapest GFLOPS, at $2.94/GFLOPS. This is around one hundred times more expensive than peak theoretical performance of certain GPUs, but we do not know why there is such a difference (peak performance is generally higher than actual performance, but by closer to a factor of two).\nThere appears to be a downward trend in price, but it is not consistent, and with so few data points its slope is ambiguous. The best price for performance roughly halved in the last 4-5 years, for a 10x drop in 13-17 years. The K computer in 2011 was much more expensive, and appears to have been substantially more expensive even than earlier computers.\n\n\n ", "url": "https://aiimpacts.org/price-performance-trend-in-top-supercomputers/", "title": "Price-performance trend in top supercomputers", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2017-11-09T07:31:27+00:00", "paged_url": "https://aiimpacts.org/feed?paged=15", "authors": ["Katja Grace"], "id": "e01d794dd3f998a2711ded7ba1b30752", "summary": []} {"text": "Computing hardware performance data collections\n\nThis is a list of public datasets that we know of containing either measured or theoretical performance numbers for computer processors.\nList\n\nTop 500 maintains a list of the top 500 supercomputers, updated every six months. It includes measured performance.\nList of Nvidia Graphics Processing Units contains GFLOPS figures for a large number of GPUs. Probably they are all theoretical peak performance numbers. It also contains release dates and release prices.\nList of AMD Graphics Processing Units is much like the list of Nvidia GPUs, but for the other leading GPU brand.\nWikipedia’s FLOPS page contains a small amount of data, seemingly empirical, from a variety of sources.\nWikipedia has other small collections of theoretical performance data. For instance on the Intel Xeon Phi page.\nMoravec has perhaps the oldest and best known dataset. 
We link to an article discussing it, but its actual page was down last we checked.\nNordhaus expands on Moravec’s data.\nKoh and Magee expand on Moravec’s data.\nRieber and Muehlhauser did have a dataset (discussed here) but links to it appear to be broken.\nJohn McCallum’s dataset (doesn’t load at time of writing, but is discussed in Sandberg and Bostrom 2008 and on our page on trends in the cost of computing)\nPassmark has a huge quantity of empirical performance data, for CPUs and GPUs. However it is all in terms of their own benchmark, so hard to compare to other things. They also list current prices. Looking at it over time (via archive.org) can let you also see past prices. Doing so suggests that they change their benchmarks on occasion, which makes it even harder to interpret what they mean.\nGeekbench Browser collects empirical performance data from people testing their computers with Geekbench’s service. They list many benchmark numbers for many computers. However identically named benchmark figures from ‘Geekbench v4’ vs. ‘Geekbench v3’ for the same hardware differ a lot (one of us recollects about a factor of five), apparently because they changed what the benchmark actually was then. This suggests care should be taken to use numbers from the same version of Geekbench, and also that any version is not necessarily comparable to other apparently identical measures from elsewhere. We are also not sure whether differences in benchmark meaning only occur between saliently labeled versions.\nExport compliance metrics for Intel Processors is a collection of PDFs listing processors alongside a number for ‘FLOP’, which we suppose is related to FLOPS. It does not contain much explanation, and has some worrying characteristics.1\nKarl Rupp has collected some data and made it available. He has also blogged about it here and here. However he says he got it from a combination of the Intel compliance metrics (listed above), and the list of Intel Xeon Microprocessors (below), and a) the export compliance metrics data seems strange, and b) we couldn’t actually track down his data in those sources. 
Possibly we are misunderstanding the export compliance metrics, and he is interpreting them correctly, resolving both problems.\nAsteroids@home lists Whetstone benchmark GFLOPS per core by CPU model for computers participating in their project.\nThe Microway knowledge center has a lot of pages containing at least some theoretical peak performance numbers (see any called ‘detailed specifications of —‘, but most of the numbers on each page are inside figures, and so hard to export or read in detail.\n\nOther useful hardware data\n\nList of Intel Xeon Microprocessors does not include figures for FLOPS, but has price and release date data.\n", "url": "https://aiimpacts.org/computing-hardware-performance-data-collections/", "title": "Computing hardware performance data collections", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2017-10-26T22:34:47+00:00", "paged_url": "https://aiimpacts.org/feed?paged=15", "authors": ["Katja Grace"], "id": "b167b9ef9c615f307cb1bd15d25e848d", "summary": []} {"text": "2016 ESPAI Narrow AI task forecast timeline\n\nThis is an interactive timeline we made, illustrating the median dates when respondents said they expected a 10%, 50% and 90% chance of different tasks being automatable, in the 2016 Expert Survey on progress in AI (further details on that page).\nTimeline\n", "url": "https://aiimpacts.org/2016-espai-narrow-ai-task-forecast-timeline/", "title": "2016 ESPAI Narrow AI task forecast timeline", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2017-10-04T18:23:07+00:00", "paged_url": "https://aiimpacts.org/feed?paged=15", "authors": ["Katja Grace"], "id": "cd692058e349c023fc835e12a72c1508", "summary": []} {"text": "When do ML Researchers Think Specific Tasks will be Automated?\n\nBy Katja Grace, 26 September 2017\nWe asked the ML researchers in our survey when they thought 32 narrow, relatively well defined tasks would be feasible for AI. Eighteen of them were included in our paper earlier, but the other fourteen results are among some new stuff we just put up on the survey page.\nWhile the researchers we talked to don’t expect anything like human-level AI for a long time, they do expect a lot of specific tasks will be open to automation soon. Of the 32 tasks we asked about, either 16 or 28 of them were considered more likely than not within ten years by the median respondent (depending on how the question was framed).\nAnd some of these would be pretty revolutionary, at an ordinary ‘turn an industry on its head’ level, rather than a ‘world gets taken over by renegade robots’ level. You have probably heard that the transport industry is in for some disruption. And phone banking, translation and answering simple questions have already been on their way out. But also forecast soon: the near-obsoletion of musicians.\nThe task rated easiest was human-level Angry Birds playing, with a 90% chance of happening within six or ten years, depending on the question framing. The annual Angry Birds Man vs. 
Machine Challenge did just happen, but the results are yet to be announced.\nThe four tasks that were not expected within ten years regardless of question framing were translating a new language using something like a Rosetta Stone, selecting and proving publishable mathematical theorems, doing well in the Putnam math contest, and writing a New York Times bestselling story.\nThe fact that the respondents gave radically different answers to other questions depending on framing suggests to us that their guesses are not super reliable. Nonetheless, we expect they are better than nothing, and that they are a good place to start if we want to debate what will happen.\nTo that end, below is a timeline (full screen version here) showing the researchers’ estimates for all 32 questions. These estimates are using the question framing that yielded slightly earlier results – forecasts were somewhat later given a different framing of the question.\n", "url": "https://aiimpacts.org/when-do-ml-researchers-think-specific-tasks-will-be-automated/", "title": "When do ML Researchers Think Specific Tasks will be Automated?", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2017-09-26T22:33:51+00:00", "paged_url": "https://aiimpacts.org/feed?paged=15", "authors": ["Katja Grace"], "id": "4db4ece0e02660b7ab37488dc50aabe2", "summary": []} {"text": "What do ML researchers think you are wrong about?\n\nBy Katja Grace, 25 September 2017\nSo, maybe you are concerned about AI risk. And maybe you are concerned that many people making AI are not concerned enough about it. Or not concerned about the right things. But if so, do you know why they disagree with you?\nWe didn’t, exactly. So we asked the machine learning (ML) researchers in our survey. Our questions were:\n\nTo what extent do you think people’s concerns about future risks from AI are due to misunderstandings of AI research?\nWhat do you think are the most important misunderstandings, if there are any?\n\nThe first question was multiple choice on a five point scale, while the second was more of a free-form, compose-your-own-succinct-summary-critique-of-a-diverse-constellation-of-views type thing. Nonetheless, more than half of the people who did the first also kindly took a stab at the second. Some of their explanations were pretty long. Some not. Here is my attempt to cluster and paraphrase them:\n \nNumber of respondents giving each response, out of 74.\n \nOur question might have been a bit broad. ‘People’s concerns about AI risk’ includes both Stuart Russell’s concerns about systems optimizing n-variable functions based on fewer than n variables, and reporters’ concerns about killer sex robots. Which at a minimum should probably be suspected of resting on different errors. [Edited for clarity Oct 15 ’17]\nSo are we being accused of any misunderstandings, or are they all meant for the ‘put pictures of Terminator on everything’ crowd?\nThe comments about unemployment and surprising events, and some of the ones about AI ruling over or fighting us seem likely to be directed at people like me. On the other hand, they are also all about social consequences, and none of these issues seem to be considered resolved by the relevant social scientists. So I am not too worried if I find myself in disagreement with some AI researchers there.\nI am more interested if AI researchers complain that I am mistaken about AI. 
And I think they probably are here, at least a bit.\nMy sense from reading over all these responses is that the first three categories listed in the figure represent basically the same view, and that people talk about it at different levels of generality. I’d put them together like this:\nThe state of the art right now looks great in the few examples you see, but those are actually a large fraction of the things that it can do, and it often can’t even do very slight variations on those things. The problems AI can currently deal with all have to be very well specified. Getting from here to AI that can just wander out into the world and even live a successful life as a rat seems wildly ambitious. We don’t know how to make general AI at all. So we are really unimaginably far from human-level AI, because it would have to be general.\nBut this is a guess on my part, and I am curious to hear whether any AI researchers reading have a better sense of what views are like.\nWhether these first three categories are all the same view or not, they do sound plausibly directed at people like me. And if ML researchers want to disagree with me about the state of the art in AI or how easy it is to extend it or improve upon it, it would be truly shocking if I were in the right. So I tentatively conclude that we are probably further away from general AI than I might have thought.\nOn the other hand, I wouldn’t be surprised if the respondents were misdiagnosing the disagreement here. My impression is that AI researchers (among others) often take for granted that you shouldn’t worry about things decades before they are likely to happen. So when they see people worried about AI risk, they naturally suppose that those people anticipate dangerous AI much sooner than they really do. My weak impression is that this kind of misunderstanding happens often.\nBy the way, the respondents did mostly think concerns are based largely on misunderstandings (which is not to imply that they aren’t concerned):\nNumber of respondents giving each response, out of 118.\n(Results taken from our survey page. More new results are also up there.)", "url": "https://aiimpacts.org/what-do-ml-researchers-think-you-are-wrong-about/", "title": "What do ML researchers think you are wrong about?", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2017-09-26T05:04:13+00:00", "paged_url": "https://aiimpacts.org/feed?paged=15", "authors": ["Katja Grace"], "id": "f199731346f135a581cec48c2b5c7b6c", "summary": []} {"text": "Automation of music production\n\nMost machine learning researchers expect machines will be able to create top quality music by 2036.\nDetails\nEvidence from survey data\nIn the 2016 ESPAI, participants were asked two relevant questions:\n[Top forty] Compose a song that is good enough to reach the US Top 40. The system should output the complete song as an audio file.\n[Taylor] Produce a song that is indistinguishable from a new song by a particular artist, e.g. 
a song that experienced listeners can’t distinguish from a new song by Taylor Swift.\nSummary results\nAnswers were as follows, suggesting these milestones have a substantial chance of being reached within ten years, and are likely to be reached within twenty years.\nProbabilities by year (medians)\n\n10 years\n20 years\n50 years\n\nTop forty\n27.5%\n50%\n90%\n\nTaylor\n60%\n75%\n99%\n\nYears by probability (medians)\n\n10%\n50%\n90%\n\nTop forty\n5 years\n10 years\n20 years\n\nTaylor\n5 years\n10 years\n20 years\n\nDistributions of answers to Taylor question\nThe three figures below show how respondents were spread between different answers over time, for the respondents who answered the ‘fixed years’ framing.\n(Figures for 10, 20, and 50 years.)", "url": "https://aiimpacts.org/automation-of-music-production/", "title": "Automation of music production", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2017-09-13T00:03:34+00:00", "paged_url": "https://aiimpacts.org/feed?paged=15", "authors": ["Katja Grace"], "id": "b1620b964aa7307b6cad1f20fccdf1aa", "summary": []} {"text": "Stuart Russell’s description of AI risk\n\nStuart Russell has argued that advanced AI poses a risk, because it will have the ability to make high quality decisions, yet may not share human values perfectly.\nDetails\nStuart Russell describes a risk from highly advanced AI here. In short:\nThe primary concern [with highly advanced AI] is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken […]. Now we have a problem:\n1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.\n2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.\nA system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer’s apprentice, or King Midas: you get exactly what you ask for, not what you want.\n*Due to a typo, this question asked about 40 years rather than 50 years, so doesn’t match the others.\nFigure 1: Median answers to questions about probabilities by dates (‘fixed year’) and dates for probabilities (‘fixed probability’), for different occupations, all current occupations, and all tasks (HLMI).\nInteresting things to note:\n\nFixed years framings (‘Fyears —‘, labeled with stars) universally produce later timelines.\nHLMI (thick blue lines) is logically required to be after full automation of labor (‘Occ’) yet is forecast much earlier than it, and earlier even than the specific occupation ‘AI researcher’.\nEven the more pessimistic Fyears estimates suggest retail salespeople have a good chance of being automated within 20 years, and are very likely to be within 50.\n\nIntelligence Explosion\nProbability of dramatic technological speedup\nQuestion\nParticipants were asked1:\nAssume that HLMI will exist at some point. How likely do you then think it is that the rate of global technological improvement will dramatically increase (e.g. by a factor of ten) as a result of machine intelligence:\nWithin two years of that point?       ___% chance\nWithin thirty years of that point?    ___% chance\nAnswers\nMedian P(…within two years) = 20%\nMedian P(…within thirty years) = 80%\nProbability of superintelligence\nQuestion\nParticipants were asked:\nAssume that HLMI will exist at some point. 
How likely do you think it is that there will be machine intelligence that is vastly better than humans at all professions (i.e. that is vastly more capable or vastly cheaper):\nWithin two years of that point?       ___% chance\nWithin thirty years of that point?    ___% chance\nAnswers\nMedian P(…within two years) = 10%\nMedian P(…within thirty years) = 50%\nThis is the distribution of answers  to the former:\n\nChance that the intelligence explosion argument is about right\nQuestion\nParticipants were asked:\nSome people have argued the following:\n\nIf AI systems do nearly all research and development, improvements in AI will accelerate the pace of technological progress, including further progress in AI.\n.\nOver a short period (less than 5 years), this feedback loop could cause technological progress to become more than an order of magnitude faster.\n\nHow likely do you find this argument to be broadly correct?\n\n\n\nQuite unlikely (0-20%)\nUnlikely (21-40%)\nAbout even chance (41-60%)\nLikely (61-80%)\nQuite likely (81-100%)\n\n\n\nAnswers\n\nThese are the Pearson product-moment correlation coefficients for the different answers, among people who received both of a pair of questions:\n\nImpacts of HLMI\nQuestion\nParticipants were asked:\nAssume for the purpose of this question that HLMI will at some point exist. How positive or negative do you expect the overall impact of this to be on humanity, in the long run? Please answer by saying how probable you find the following kinds of impact, with probabilities adding to 100%:\n______ Extremely good (e.g. rapid growth in human flourishing) (1)\n______ On balance good (2)\n______ More or less neutral (3)\n______ On balance bad (4)\n______ Extremely bad (e.g. human extinction) (5)\nAnswers\n\nSensitivity of progress to changes in inputs\nQuestion\nParticipants were told:\nThe next questions ask about the sensitivity of progress in AI capabilities to changes in inputs.\n‘Progress in AI capabilities’ is an imprecise concept, so we are asking about progress as you naturally conceive of it, and looking for approximate answers.\nParticipants then received a random three of the following five parts:\nImagine that over the past decade, only half as much researcher effort had gone into AI research. For instance, if there were actually 1,000 researchers, imagine that there had been only 500 researchers (of the same quality).How much less progress in AI capabilities would you expect to have seen? e.g. If you think progress is linear in the number of researchers, so 50% less progress would have been made, write ’50’. If you think only 20% less progress would have been made write ’20’.\n……% less\nOver the last 10 years the cost of computing hardware has fallen by a factor of 20. Imagine instead that the cost of computing hardware had fallen by only a factor of 5 over that time (around half as far on a log scale).   How much less progress in AI capabilities would you expect to have seen? e.g. If you think progress is linear in 1/cost, so that 1-5/20=75% less progress would have been made, write ’75’. If you think only 20% less progress would have been made write ’20’.\n……% less\nImagine that over the past decade, there had only been half as much effort put into increasing the size and availability of training datasets. For instance, perhaps there are only half as many datasets, or perhaps existing datasets are substantially smaller or lower quality.How much less progress in AI capabilities would you expect to have seen? e.g. 
If you think 20% less progress would have been made, write ‘20’\n……% less\nImagine that over the past decade, AI research had half as much funding (in both academic and industry labs). For instance, if the average lab had a budget of $20 million each year, suppose their budget had only been $10 million each year.  How much less progress in AI capabilities would you expect to have seen? e.g. If you think 20% less progress would have been made, write ‘20’\n……% less\nImagine that over the past decade, there had been half as much progress in AI algorithms. You might imagine this as conceptual insights being half as frequent.  How much less progress in AI capabilities would you expect to have seen? e.g. If you think 20% less progress would have been made, write ‘20’\n……% less\nAnswers\nThe following five figures are histograms, showing the number of people who gave different answers to the five question parts above.\n \nSample sizes\n\n \n\n\nResearcher effort\nCost computing\nTraining data\nFunding\nAlgorithm progress\n\n\n71\n64\n71\n68\n59\n\n\n\nMedians\nThe following figure shows median answers to the above questions.\n \n\n\nResearcher effort\nCost computing\nTraining data\nFunding\nAlgorithm progress\n\n\n30\n50\n40\n40\n50\n\n\n\nCorrelations\n\nOutside view implied HLMI forecasts\nQuestions\nParticipants were asked:\nWhich AI research area have you worked in for the longest time?\n————————————\nHow long have you worked in this area?\n———years\nConsider three levels of progress or advancement in this area:\nA. Where the area was when you started working in it\nB. Where it is now\nC. Where it would need to be for AI software to have roughly human level abilities at the tasks studied in this area\nWhat fraction of the distance between where progress was when you started working in the area (A) and where it would need to be to attain human level abilities in the area (C) have we come so far (B)?\n———%\nDivide the period you have worked in the area into two halves: the first and the second. In which half was the rate of progress in your area higher?\n\n\n\nThe first half\nThe second half\nThey were about the same\n\n\n\nAnswers\nEach person told us how long they had been in their subfield, and what fraction of the remaining path to human-level performance (in their subfield) they thought had been traversed in that time. From this, we can estimate when the subfield should reach ‘human-level performance’, if progress continued at the same rate. The following graph shows those forecast dates.\n\n\nDisagreements and Misunderstandings\nQuestions\nParticipants were asked:\nTo what extent do you think you disagree with the typical AI researcher about when HLMI will exist?\n\n\n\nA lot (17)\nA moderate amount (18)\nNot much (19)\n\n\n\n \nIf you disagree, why do you think that is?\n_______________________________________________________\nTo what extent do you think people’s concerns about future risks from AI are due to misunderstandings of AI research?\n\n\n\nAlmost entirely (1)\nTo a large extent (2)\nSomewhat (4)\nNot much (3)\nHardly at all (5)\n\n\n\n \nWhat do you think are the most important misunderstandings, if there are any?\n________________________________________________________\nAnswers\n\n\nOne hundred and eighteen people responded to the question on misunderstandings, and 74 of them described what they thought the most important misunderstandings were. 
The table and figures below show our categorization of the responses.2\n\n \n\n\n Most important misunderstandings\n Number\n Fraction of non-empty responses\n\n\nUnderestimate distance from generality, open-ended tasks\n9\n12%\n\n\nOverestimate state of the art (other)\n10\n14%\n\n\nUnderestimate distance from AGI at this rate\n13\n18%\n\n\nThink AI will be in control of us or in conflict with us\n11\n15%\n\n\nExpect humans to be obsoleted\n7\n9%\n\n\nOverly influenced by fiction\n7\n9%\n\n\nExpect AI to be human-like or sentient\n6\n8%\n\n\nExpect sudden or surprising events\n5\n7%\n\n\nThink AI will go outside its programming\n5\n7%\n\n\nInfluenced by poor reporting\n5\n7%\n\n\nWrongly equate intelligence with something else\n4\n5%\n\n\nUnderestimate systemic social risks\n2\n3%\n\n\nOverestimate distance to strong AI\n2\n3%\n\n\nOther ignorance of AI\n4\n5%\n\n\nOther\n12\n16%\n\n\nEmpty\n44\n59%\n\n\n\n\nNarrow tasks\nQuestions\nRespondents were each asked one of the following two questions:\nFixed years framing:\n\nHow likely do you think it is that the following AI tasks will be feasible within the next:\n\n10 years?\n20 years?\n50 years?\n\nLet a task be ‘feasible’ if one of the best resourced labs could implement it in less than a year if they chose to. Ignore the question of whether they would choose to.\n\nFixed probabilities framing:\n\nHow many years until you think the following AI tasks will be feasible with:\n\na small chance (10%)?\nan even chance (50%)?\na high chance (90%)?\n\nLet a task be ‘feasible’ if one of the best resourced labs could implement it in less than a year if they chose to. Ignore the question of whether they would choose to.\n\nEach researcher was then presented with a random four of the following tasks:\n[Rosetta] Translate a text written in a newly discovered language into English as well as a team of human experts, using a single other document in both languages (like a Rosetta stone). Suppose all of the words in the text can be found in the translated document, and that the language is a difficult one.\n[Subtitles] Translate speech in a new language given only unlimited films with subtitles in the new language. Suppose the system has access to training data for other languages, of the kind used now (e.g. same text in two languages for many languages and films with subtitles in many languages).\n[Translate] Perform translation about as good as a human who is fluent in both languages but unskilled at translation, for most types of text, and for most popular languages (including languages that are known to be difficult, like Czech, Chinese and Arabic).\n[Phone bank] Provide phone banking services as well as human operators can, without annoying customers more than humans. This includes many one-off tasks, such as helping to order a replacement bank card or clarifying how to use part of the bank website to a customer.\n[Class] Correctly group images of previously unseen objects into classes, after training on a similar labeled dataset containing completely different classes. The classes should be similar to the ImageNet classes.\n[One-shot] One-shot learning: see only one labeled image of a new object, and then be able to recognize the object in real world scenes, to the extent that a typical human can (i.e. including in a wide variety of settings). For example, see only one image of a platypus, and then be able to recognize platypuses in nature photos. The system may train on labeled images of other objects.   
Currently, deep networks often need hundreds of examples in classification tasks1, but there has been work on one-shot learning for both classification2 and generative tasks3.\n1 Lake et al. (2015). Building Machines That Learn and Think Like People2 Koch (2015). Siamese Neural Networks for One-Shot Image Recognition3 Rezende et al. (2016). One-Shot Generalization in Deep Generative Models\n[Video scene] See a short video of a scene, and then be able to construct a 3D model of the scene that is good enough to create a realistic video of the same scene from a substantially different angle.\nFor example, constructing a short video of walking through a house from a video taking a very different path through the house.\n[Transcribe] Transcribe human speech with a variety of accents in a noisy environment as well as a typical human can.\n[Read aloud] Take a written passage and output a recording that can’t be distinguished from a voice actor, by an expert listener.\n[Theorems] Routinely and autonomously prove mathematical theorems that are publishable in top mathematics journals today, including generating the theorems to prove.\n[Putnam] Perform as well as the best human entrants in the Putnam competition—a math contest whose questions have known solutions, but which are difficult for the best young mathematicians.\n[Go low] Defeat the best Go players, training only on as many games as the best Go players have played.\nFor reference, DeepMind’s AlphaGo has probably played a hundred million games of self-play, while Lee Sedol has probably played 50,000 games in his life1.\n1 Lake et al. (2015). Building Machines That Learn and Think Like People\n[Starcraft] Beat the best human Starcraft 2 players at least 50% of the time, given a video of the screen.\nStarcraft 2 is a real time strategy game characterized by:\n\n\n\nContinuous time play\nHuge action space\nPartial observability of enemies Long term strategic play, e.g. preparing for and then hiding surprise attacks.\n\n\n\n[Rand game] Play a randomly selected computer game, including difficult ones, about as well as a human novice, after playing the game less than 10 minutes of game time. The system may train on other games.\n[Angry birds] Play new levels of Angry Birds better than the best human players. Angry Birds is a game where players try to efficiently destroy 2D block towers with a catapult. For context, this is the goal of the IJCAI Angry Birds AI competition1.\n1 aibirds.org\n[Atari] Outperform professional game testers on all Atari games using no game-specific knowledge. This includes games like Frostbite, which require planning to achieve sub-goals and have posed problems for deep Q-networks1, 2.\n1 Mnih et al. (2015). Human-level control through deep reinforcement learning2 Lake et al. (2015). Building Machines That Learn and Think Like People\n[Atari fifty] Outperform human novices on 50% of Atari games after only 20 minutes of training play time and no game specific knowledge.\nFor context, the original Atari playing deep Q-network outperforms professional game testers on 47% of games1, but used hundreds of hours of play to train2.\n1 Mnih et al. (2015). Human-level control through deep reinforcement learning2 Lake et al. (2015). 
Building Machines That Learn and Think Like People\n[Laundry] Fold laundry as well and as fast as the median human clothing store employee.\n[Race] Beat the fastest human runners in a 5 kilometer race through city streets using a bipedal robot body.\n[Lego] Physically assemble any LEGO set given the pieces and instructions, using non-specialized robotics hardware.\nFor context, Fu 20161 successfully joins single large LEGO pieces using model based reinforcement learning and online adaptation.\n1 Fu et al. (2016). One-Shot Learning of Manipulation Skills with Online Dynamics Adaptation and Neural Network Priors\n[Sort] Learn to efficiently sort lists of numbers much larger than in any training set used, the way Neural GPUs can do for addition1, but without being given the form of the solution.\nFor context, Neural Turing Machines have not been able to do this2, but Neural Programmer-Interpreters3 have been able to do this by training on stack traces (which contain a lot of information about the form of the solution).\n1 Kaiser & Sutskever (2015). Neural GPUs Learn Algorithms2 Zaremba & Sutskever (2015). Reinforcement Learning Neural Turing Machines3 Reed & de Freitas (2015). Neural Programmer-Interpreters\n[Python] Write concise, efficient, human-readable Python code to implement simple algorithms like quicksort. That is, the system should write code that sorts a list, rather than just being able to sort lists.\nSuppose the system is given only:\n\n\n\nA specification of what counts as a sorted list\nSeveral examples of lists undergoing sorting by quicksort\n\n\n\n[Factoid] Answer any “easily Googleable” factoid questions posed in natural language better than an expert on the relevant topic (with internet access), having found the answers on the internet.\nExamples of factoid questions:\n\n\n\n “What is the poisonous substance in Oleander plants?”\n“How many species of lizard can be found in Great Britain?”\n\n\n\n[Open quest] Answer any “easily Googleable” factual but open ended question posed in natural language better than an expert on the relevant topic (with internet access), having found the answers on the internet.\nExamples of open ended questions:\n\n\n\n“What does it mean if my lights dim when I turn on the microwave?”\n“When does home insurance cover roof replacement?”\n\n\n\n[Unkn quest] Give good answers in natural language to factual questions posed in natural language for which there are no definite correct answers.\nFor example:”What causes the demographic transition?”, “Is the thylacine extinct?”, “How safe is seeing a chiropractor?”\n[Essay] Write an essay for a high-school history class that would receive high grades and pass plagiarism detectors.\nFor example answer a question like ‘How did the whaling industry affect the industrial revolution?’\n[Top forty] Compose a song that is good enough to reach the US Top 40. The system should output the complete song as an audio file.\n[Taylor] Produce a song that is indistinguishable from a new song by a particular artist, e.g. 
a song that experienced listeners can’t distinguish from a new song by Taylor Swift.\n[Novel] Write a novel or short story good enough to make it to the New York Times best-seller list.\n[Explain] For any computer game that can be played well by a machine, explain the machine’s choice of moves in a way that feels concise and complete to a layman.\n[Poker] Play poker well enough to win the World Series of Poker.\n[Laws phys] After spending time in a virtual world, output the differential equations governing that world in symbolic form.\nFor example, the agent is placed in a game engine where Newtonian mechanics holds exactly and the agent is then able to conduct experiments with a ball and output Newton’s laws of motion.\nAnswers\nFixed years framing\n\nProbabilities by year (medians)\n \n\n\n \n10 years\n20 years\n50 years\n\n\nRosetta\n20\n50\n95\n\n\nSubtitles\n30\n50\n90\n\n\nTranslate\n50\n65\n94.5\n\n\nPhone bank\n40\n75\n99\n\n\nClass\n50\n75\n99\n\n\nOne-shot\n25\n60\n90\n\n\nVideo scene\n50\n70\n99\n\n\nTranscribe\n65\n95\n99\n\n\nRead aloud\n50\n90\n99\n\n\nTheorems\n5\n20\n40\n\n\nPutnam\n5\n20\n50\n\n\nGo low\n10\n25\n60\n\n\nStarcraft\n70\n90\n99\n\n\nRand game\n25\n50\n80\n\n\nAngry birds\n90\n95\n99.4995\n\n\nAtari\n50\n60\n92.5\n\n\nAtari fifty\n40\n75\n95\n\n\nLaundry\n55\n95\n99\n\n\nRace\n30\n70\n95\n\n\nLego\n57.5\n85\n99\n\n\nSort\n50\n90\n95\n\n\nPython\n50\n79\n90\n\n\nFactoid\n50\n82.5\n99\n\n\nOpen quest\n50\n65\n90\n\n\nUnkn quest\n40\n70\n90\n\n\nEssay\n25\n50\n90\n\n\nTop forty\n27.5\n50\n90\n\n\nTaylor\n60\n75\n99\n\n\nNovel\n1\n25\n62.5\n\n\nExplain\n30\n60\n90\n\n\nPoker\n70\n90\n99\n\n\nLaws phys\n20\n40\n80\n\n\n\nFixed probabilities  framing\n\nYears by probability (medians)\n \n\n\n \n10 percent\n50 percent\n90 percent\n\n\nRosetta\n10\n20\n50\n\n\nSubtitles\n5\n10\n15\n\n\nTranslate\n3\n7\n15\n\n\nPhone bank\n3\n6\n10\n\n\nClass\n2\n4.5\n6.5\n\n\nOne-shot\n4.5\n8\n20\n\n\nVideo scene\n5\n10\n20\n\n\nTranscribe\n5\n10\n20\n\n\nRead aloud\n5\n10\n15\n\n\nTheorems\n10\n50\n90\n\n\nPutnam\n15\n35\n55\n\n\nGo low\n3.5\n8.5\n19.5\n\n\nStarcraft\n2\n5\n10\n\n\nRand game\n5\n10\n15\n\n\nAngry birds\n2\n4\n6\n\n\nAtari\n5\n10\n15\n\n\nAtari fifty\n2\n5\n10\n\n\nLaundry\n2\n5.5\n10\n\n\nRace\n5\n10\n20\n\n\nLego\n5\n10\n15\n\n\nSort\n3\n5\n10\n\n\nPython\n3\n10\n20\n\n\nFactoid\n3\n5\n10\n\n\nOpen quest\n5\n10\n15\n\n\nUnkn quest\n4\n10\n17.5\n\n\nEssay\n2\n7\n15\n\n\nTop forty\n5\n10\n20\n\n\nTaylor\n5\n10\n20\n\n\nNovel\n10\n30\n50\n\n\nExplain\n5\n10\n15\n\n\nPoker\n1\n3\n5.5\n\n\nLaws phys\n5\n10\n20\n\n\n\nSafety\nStuart Russell’s problem\nQuestion\nParticipants were asked:\nStuart Russell summarizes an argument for why highly advanced AI might pose a risk as follows:\nThe primary concern [with highly advanced AI] is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken […]. Now we have a problem:\n1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.\n2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.\nA system that is optimizing a function of n variables, where the objective depends on a subset of size k10km diameter) NEO colliding with Earth would probably cause human extinction. 
This has an estimated  0.0001% chance of occurring during any given 100 year period. In response to this threat, many NEOs have been identified (p. 3), and efforts are underway to test our ability to deflect dangerous asteroids.\n\n\n\nType of Event\nCharacteristic Diameter\nof Impacting Object (m)\nApproximate Impact\nEnergy (MT)\nApproximate Average Impact\nInterval (years)\n\n\nAirburst\n25\n1\n200\n\n\nLocal scale\n50\n10\n2,000\n\n\nRegional scale\n140\n300\n30,000\n\n\nContinental scale\n300\n2,000\n100,000\n\n\nBelow global catastrophe threshold\n600\n20,000\n200,000\n\n\nPossible  global catastrophe\n1,000\n100,000\n700,000\n\n\nAbove global catastrophe threshold\n5,000\n10,000,000\n30,000,000\n\n\nMass extinction\n10,000\n100,000,000\n100,000,000\n\n\n\nTable 1: Scale and frequency of NEO impact\nNotes: In the above table, “mass extinction” includes human extinction, based on comments on p. 23 of the below source. The level of danger of any particular impacting object depends crucially on its size, but additionally on other factors, notably its composition, speed of impact, and location of impact.\nSource: Defending Planet Earth: Near-Earth Object Surveys and Hazard Mitigation Strategies, 2010 (p. 19). \nOne estimate for planetary defense costs was given in the 2010 report Defending Planet Earth (pp. 97–99). The indicated cost of monitoring 90% of near-Earth objects of 140 meters in diameter or greater, and launching a test program to deflect an asteroid, was $250m per year. This level of expenditure was expected to be required for somewhat under a decade, and would reduce after the completion of the deflection test.\nThus for CBA purposes we might assume an indefinite cost of $250m per year. This could be combined with an assumption that these efforts would be 50% effective against NEOs threatening human extinction (as per Matheny 2007, p. 1340). The resulting cost per 0.01% of NEO-originating extinction probability eliminated is given in the final section of this article, alongside that from other sources.\nOf course there are significant uncertainties in the above cost estimate. Monitoring costs may be greater or lower than expected, and it is not known whether or on how many occasions a NEO would need to be deflected during any time period. Further, it has been noted (p. 849, see footnote to table) that, for those NEOs large enough to cause human extinction, deflection may not be possible with current technology. \nOne source of overestimation in the above cost estimate is that successful planetary defense against NEOs does not only protect against extinction; it also averts lesser catastrophes and costs. Thus, in an ideal CBA analysis of extinction risk mitigation, not all associated expenses would be counted as costs of averting extinction. In particular, most expenditure is likely to be driven by the desire to avoid less destructive events that are much more frequent and visible. \nAnalysis: Climate change\nThe issue of human activity-induced climate change has received abundant global attention in recent years. Unfortunately, authoritative attention is most strongly focused (pp. 45–46) on measurements of the centre point of the range of possibilities, without considering the likelihood and potential impact of more extreme temperature increases. Perhaps because of this, the author has found little research on the risk of extinction posed by climate change.\nWagner and Weitzman, in their book Climate Shock, perform their own analysis (pp. 
53–54) of the probability of global temperatures rising by at least 6°C, a threshold beyond which they believe disaster is nearly certain, of which the full implications cannot be known (although they estimate that the costs of allowing this threshold to be crossed may be equivalent to 10%-30% of global GDP). They estimate that the probability of reaching this threshold is 11% if carbon dioxide equivalent concentration (CO2eq) in the atmosphere is 700 parts per million (ppm), and 1.2% if the concentration is 500 ppm.\n“Disaster” is not extinction. However, insofar as the 6°C-or-higher scenario is representative of extreme climate scenarios in general, it may be helpful for CBA purposes to estimate the cost of achieving a reduction of its probability from 11% to 1.2%. In this vein, Figure 1, taken from the Intergovernmental Panel on Climate Change’s (IPCC) Fifth Assessment in 2014, summarises estimates of economic impacts (costs) of five CO2eq concentration scenarios. The medians of the various cost estimates for the 650–720 ppm and 480–530 ppm scenarios could be taken as indicative of the cost of reducing CO2eq from 700 ppm to 500 ppm. This suggests that this cost may be somewhere in the vicinity of 1–2% of the Net Present Value (NPV) of global GDP and consumption during the period 2015–2100. Thus, an approximate NPV of US $31.9tr–$63.8tr of GDP, or US $24.3tr–$48.5tr of consumption, may be foregone globally to reduce the probability of global temperatures rising by at least 6°C from 11% to 1.2%.1. For comparability purposes, annual costs can be estimated by assuming that these costs are evenly spread in real terms from 2015–2100. This produces estimates of annual costs of approximately US $1.5tr–$3.1tr of GDP, or approximately US $1.2tr–$2.3tr of consumption.\n\nFigure 1: Mitigation costs (NPV 2015–2100, 5% discount rate)\nNotes: Multiple models are used to produce Figure 1, and each model is used to run one or more scenarios. Each scenario result is represented by a dot, with the number of scenarios included in the boxplots indicated at the bottom of the panels. Costs are expressed as a fraction of economic output in the baseline, or in the case of consumption losses, consumption in the baseline. The number of scenarios outside the figure range is noted at the top.\nSource: Mitigation of Climate Change: Working Group III contribution to the Fifth Assessment Report of the IPCC, Chapter 6, p. 450.\nHowever, it should be borne in mind that a temperature increase of 6°C does not correspond to certain human extinction, and there may even be a small chance of extinction arising from below-6°C temperatures. Overall, the reduction in extinction risk is likely to be somewhat lower than 9.8%. For example, a collaborative effort led by the Global Challenges Foundation established (p. 142) highly approximate estimates of the probability of “infinite impact” (a concept closer to, but not the same as, extinction) from a range of sources. Their estimate of the probability of infinite impact from climate change was 0.01% over the next 200 years; much lower than the 11% probability above. If, by reducing CO2eq concentration from 700 ppm to 500 ppm, the 0.01% probability were reduced by the same proportion as the probability of a 6°C or higher temperature increase, then this would reduce the probability of infinite impact by 0.0088% (i.e. to 0.0012%). 
This assumption of proportionality is highly approximate.\nSimilarly to costs incurred in defending against NEOs, the cost of combating climate change is not wholly attributable to the mitigation of extinction risk, since it also mitigates many lesser risks. In particular as noted above, public dialogue and authoritative attention focuses on the more moderate temperature increases that are unlikely to cause human extinction. Since mitigation expenditure may be driven by this focus on lesser risks, a lesser portion of such costs would be included in an ideal CBA analysis of extinction risk mitigation.\nThis issue can be partially avoided by focusing on the technique of geoengineering (climate engineering) as a way to reduce climate-related risk. Geoengineering may align more closely to extreme climate-related risks, since many geoengineering methods are viewed (pp. 44–45) as a last resort given their problems and dangers. Climate Shock (p. 99) estimates that using geoengineering as a solution to climate change would cost $1b–$10b per year, a much smaller price to pay for removing (most of) the aforementioned 0.01% risk of infinite impact.\nAlthough geoengineering is more likely to be used in the direst circumstances, its use may still be primarily motivated by problems less severe than extinction. Thus the issue of separating extinction mitigation costs from other costs is only reduced, rather than eliminated. Further, the extent to which geoengineering actually reduces the overall risk of human extinction is uncertain, since this technique carries (pp. 58–62) its own risks, particularly if it is executed without adequate quality of governance or before its implications are fully understood.\nSources of uncertainty in extrapolating this analysis to ASI-originating extinction risk\nThe approach taken in this article, while useful, is highly approximate. The approach is one of extrapolation, where the costs of the mitigation of one extinction risk are used as a proxy for the cost of mitigating ASI-originating extinction risk. This is based on the idea that the challenges of mitigating climate change- and NEO-originating extinction risk may share some characteristics with ASI-originating extinction risk, such as being unprecedented tasks involving sophisticated technology, often at a global scale. However, different sources of extinction risk are of course highly heterogeneous, and thus the monetary cost per unit of probability reduction for one source may not be indicative for another.\nIn particular, not all actions that mitigate extinction risk are costly from the perspective of society as a whole. For example, the mitigation of the risk of nuclear war through successful nuclear disarmament is actually associated with a reduction in expenditure on nuclear weapons. Similarly, there are likely to be aspects of ASI-originating extinction risk that can also be mitigated through mutual agreements to refrain from certain activities, such as racing to be the first to develop ASI regardless of the adequacy of safety precautions. However, there are also aspects of ASI risk that can indeed be mitigated through expenditure, such as research into and creation of safety mechanisms. It is to the latter type of mitigation effort that CBA is more readily applicable.\nThe selection of climate change mitigation and planetary defense for analysis in this article was influenced positively by the availability of data. 
This may lead to assessing relatively cheap risk mitigation measures, because information will be more available for measures that are being taken, which may be those that seem more worthwhile.\nAnother source of inaccuracy is the phenomenon of diminishing returns, a pattern whereby additional expenditure produces greater gains when total past expenditure is lower. A form of this is often observed in research, and can serve as a model for progress in solving problems of unknown difficulty. This implies that, for a source of extinction risk where relatively little effort or money has been expended such as ASI, a small amount of additional expenditure is likely to be able to achieve relatively large benefits. However, in this article, the estimated cost of reducing extinction risk through planetary defense and climate change mitigation are based on assessments of complete solutions, which include estimates of both the highly beneficial initial expenditure and the later, less beneficial expenditure. They may therefore overestimate the cost of achieving reductions through small increments to expenditure on mitigation of ASI-originating extinction risk.\nDespite these numerous and large uncertainties, the overall CBA may still be capable of drawing reasonably firm conclusions, given that there is potential for the benefits of mitigation efforts to exceed their costs by many orders of magnitude. \nSummary of results\nIn Table 2, the three potential mitigation activities considered in this article are compared, with the last column examining their cost effectiveness in reducing the probability of extinction by 0.01%. Only annual costs are presented in Table 2, since the calculation of total costs requires the use of a discount rate, the choice of which is contentious and will receive attention in a related article (forthcoming).\n\n\n\nMitigation activity\nCost per year, 2015–2100\nReduction in probability of extinction, 2015–2100\nScaled cost of achieving probability reduction of 0.01%^\n\n\nNEOs: detect & deflect\n$250m\n0.000043%*\n$58.1b\n\n\nClimate change: CO2eq emission abatement\n$1.2tr–$3.1tr**\n0.008800%***\n$1.3tr–$3.5tr**\n\n\nClimate change: geoengineering\n$1b–$10b\n0.008800%***\n$1.1b–$11.4b\n\n\n\nTable 2: Estimates of annual costs of reducing the probability of human extinction\nNotes: All cost figures in US dollars at constant prices. ^ Hypothetical, for comparison purposes. * Annual probability of one in 10^8, summed over the 2015–2100 period, combined with 50% success rate of deflection. ** Total of GDP and consumption ranges reported. *** For simplicity, it is assumed that all of the impact on the probability of infinite impact is attributable to costs borne in the period 2015–2100. \nPlease note the very approximate nature of these estimates.\nSource: Author’s calculations and estimates as set out in this article.\nTable 2 hints at an interesting line of research that could be pursued; the use of a CBA framework to investigate the most cost-effective way to reduce the risk of human extinction. However, Table 2 is unfortunately too approximate to support firm conclusions on such questions, and the topic is outside the scope of this article.\nThe analysis in this article establishes very rough guidance as to the order of magnitude of the cost component of this CBA exercise. 
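To make the scaling behind the final column of Table 2 explicit, the following minimal Python sketch (an illustrative reconstruction added for this summary, not the author's original calculations) rescales each activity's annual cost by the ratio of a 0.01% probability reduction to the reduction that cost is estimated to buy; the NEO line also reproduces the 0.000043% figure from the assumptions stated in the table notes:

```python
# Illustrative reconstruction of Table 2's scaled-cost column:
# scaled cost = annual cost * (0.01% / estimated probability reduction).
# All input figures are taken from the article; this is not the original code.

def scaled_cost(annual_cost_usd, prob_reduction_pct, target_pct=0.01):
    """Annual cost of a target_pct probability reduction, by linear scaling."""
    return annual_cost_usd * (target_pct / prob_reduction_pct)

# NEO row: annual extinction probability of 1e-8, summed over 2015-2100
# (86 years), combined with a 50% chance that deflection succeeds.
neo_reduction_pct = 1e-8 * 86 * 0.5 * 100          # ~0.000043%
print(f"NEO reduction: {neo_reduction_pct:.6f}%")
print(f"NEO scaled cost: ${scaled_cost(250e6, neo_reduction_pct) / 1e9:.1f}b")

# Climate rows use the stated 0.0088% reduction in 'infinite impact' probability.
for label, annual_cost in [("CO2eq abatement, low", 1.2e12),
                           ("CO2eq abatement, high", 3.1e12),
                           ("Geoengineering, low", 1e9),
                           ("Geoengineering, high", 10e9)]:
    print(f"{label}: ${scaled_cost(annual_cost, 0.0088) / 1e9:,.1f}b per 0.01%")
```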
Specifically, we estimate that the annual cost of reducing the probability of human extinction by 0.01% is within the range of $1.1 billion to $3.5 trillion, with the endpoints of this range being highly approximate. In a related article (forthcoming) the benefit component is quantified and compared to these results.\nContributions\nResearch and writing were done by Michael Wulfsohn. Katja Grace did review and editing.\nFootnotes\n ", "url": "https://aiimpacts.org/costs-of-extinction-risk-mitigation/", "title": "Costs of extinction risk mitigation", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2016-08-04T20:57:56+00:00", "paged_url": "https://aiimpacts.org/feed?paged=17", "authors": ["Michael Wulfsohn"], "id": "c6cd9ef5fcc81705abd7a2b6573d0d7f", "summary": []} {"text": "Returns to scale in research\n\nWhen universities or university departments produce research outputs—such as published papers—they sometimes experience increasing returns to scale, sometimes constant returns to scale, and sometimes decreasing returns to scale. At the level of nations however, R&D tends to see increasing returns to scale. These results are preliminary.\nBackground\n“Returns to scale” refers to the responsiveness of a process’ outputs when all inputs (e.g. researcher hours, equipment) are increased by a certain proportion. If all outputs (e.g. published papers, citations, patents) increase by that same proportion, the process is said to exhibit constant returns to scale. Increasing returns to scale and decreasing returns to scale refer to situations where outputs still increase, but by a higher or lower proportion, respectively.\nAssessing returns to scale in research may be useful in predicting certain aspects of the development of artificial intelligence, in particular the dynamics of an intelligence explosion.\nResults\nThe conclusions in this article are drawn from an incomplete review of academic literature assessing research efficiency, presented in Table 1. These papers assess research in terms of its direct outputs such as published papers, citations, and patents. The broader effects of the research are not considered.\nMost of the papers listed below use the Data Envelopment Analysis (DEA) technique, which is a quantitative technique commonly used to assess the efficiency of universities and research activities. 
It is capable of isolating the scale efficiency of the individual departments, universities or countries being studied.\n\n\n\nPaper\nLevel of comparison\nActivities assessed\nResults pertaining to returns to scale\n\n\nWang & Huang 2007\nCountries’ overall R&D activities\nResearch\nIncreasing returns to scale in research are exhibited by more than two-thirds of the sample\n\n\nKocher, Luptacik & Sutter 2006\nCountries’ R&D in economics\nResearch\nIncreasing returns to scale are found in all countries in the sample except the US\n\n\nCherchye & Abeele 2005\nDutch universities’ research in Economics and Business Management\nResearch\nReturns to scale vary between decreasing, constant and increasing depending on each university’s specialization\n\n\nJohnes & Johnes 1993\nUK universities’ research in economics\nResearch\nConstant returns to scale are found in the sample as a whole\n\n\nAvkiran 2001\nAustralian universities\nResearch, education\nConstant returns to scale found in most sampled universities\n\n\nAhn 1988\nUS universities\nResearch, education\nDecreasing returns to scale on average\n\n\nJohnes 2006\nEnglish universities\nResearch, education\nClose to constant returns to scale exhibited by most universities sampled\n\n\nKao & Hung 2008\nDepartments of a Taiwanese university\nResearch, education\nIncreasing returns to scale exhibited by the five most scale-inefficient departments. However, no aggregate measure of returns to scale within the sample is presented.\n\n\n\nTable 1: Sample of studies of research efficiency that assess returns to scale\nNote: This table only identifies increasing/constant/decreasing returns to scale, rather than the size of this effect. Although DEA can measure the relative size of the effect for individual departments/universities/countries within a sample, such results cannot be readily compared between samples/studies.\nDiscussion of results\nOf the studies listed in Table 1, the first four are the most relevant to this article, since they focus solely on research inputs and outputs. While the remaining four include educational inputs and outputs, they can still yield worthwhile insights.\nTable 1 implies a difference between country-level and university-level returns to scale in research. \n\nThe two studies assessing R&D efficiency at the country level, Wang & Huang (2007) and Kocher, Luptacik & Sutter (2006), both identify increasing returns to scale.\nThe two university-level studies that assessed the scale efficiency of research alone found mixed results. Concretely, Johnes & Johnes (1993) concluded that returns to scale are constant among UK universities, and Cherchye & Abeele (2005) concluded that returns to scale vary among Dutch universities. This ambiguity is echoed by the remainder of the studies listed above, which assess education and research simultaneously and which find evidence of constant, decreasing and increasing returns to scale in different contexts.\n\nSuch differences are consistent with the possibility that scale efficiency may be influenced by scale (size) itself. In this framework, as an organisation increases its size, it may experience increasing returns to scale initially, resulting in increased efficiency. However, the efficiency gains from growth may not continue indefinitely; after passing a certain threshold the organisation may experience decreasing returns to scale. 
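This size-dependent pattern can be illustrated with a small Python sketch. The production function below is invented purely for illustration and is not taken from any of the studies in Table 1; the output elasticity it computes is the percentage change in output from a 1% increase in all inputs, with values above 1 indicating increasing returns, about 1 constant returns, and below 1 decreasing returns.

# Toy illustration of returns to scale that vary with size.

def research_output(scale):
    # Made-up function: increasing returns at small scale, decreasing at large scale.
    return scale ** 1.5 / (1 + 0.01 * scale)

def output_elasticity(f, x, eps=1e-5):
    # Numerical estimate of d(log f) / d(log x).
    return (f(x * (1 + eps)) - f(x)) / (f(x) * eps)

for size in (10, 50, 100, 300):
    e = output_elasticity(research_output, size)
    regime = "increasing" if e > 1.01 else ("decreasing" if e < 0.99 else "roughly constant")
    print(f"scale {size}: elasticity {e:.2f} -> {regime} returns to scale")

In this toy example the elasticity passes through 1 at a scale of about 100, which corresponds to the threshold described next.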
The threshold would represent the point of scale efficiency, at which returns to scale are constant and efficiency is maximized with respect to size.\nUnder this framework, size will influence whether increasing, constant or decreasing returns to scale are experienced. Applying this to research activities, the observation of different returns to scale between country-level and university-level research may mean that the size of a country’s overall research effort and the typical size of its universities are not determined by similar factors. For example, if increasing returns to scale at the country level and decreasing returns to scale at the university level are observed, this may indicate that the overall number of universities is smaller than needed to achieve scale efficiency, but that most of these universities are individually too large to be scale efficient.\nOther factors may also contribute to the differences between university-level and country-level observations. \n\nThe country level studies use relatively aggregated data, capturing some of the non-university research and development activities in the countries sampled.\nCountry level research effort is not necessarily subject to some of the constraints which may cause decreasing returns to scale in large universities, such as excessive bureaucracy. \nResults may be arbitrarily influenced by differences in the available input and output metrics at the university versus country level.\n\nLimitations to conclusions drawn\nOne limitation of this article is the small scope of the literature review. A more comprehensive review may reveal a different range of conclusions.\nAnother limitation is that the research outputs studied—published papers, citations, and patents, inter alia—cannot be assumed to correspond directly to incremental knowledge or productivity. This point is expanded upon under “Topics for further investigation” below.\nFurther limitations arise due to the DEA technique used by most of the studies in Table 1. \n\nDEA is sensitive to the choice of inputs and outputs, and to measurement errors. \nStatistical hypothesis tests are difficult within the DEA framework, making it more difficult to separate signal from noise in interpreting results.\nDEA identifies relative efficiency (composed of scale efficiency and also “pure technical efficiency”) within the sample, meaning that at least one country, university, or department is always identified as fully efficient (including exhibiting full scale efficiency or constant returns to scale). Of course, in practice, no university, organisation or production process is perfectly efficient. Therefore, conclusions drawn from DEA analysis are likely to be more informative for countries, universities, or departments that are not identified as fully efficient.\nIt may be questionable whether such a framework—where an optimal scale of production exists, past which decreasing returns to scale are experienced—is a good reflection of the dynamics of research activities. However, the frequent use of the DEA framework in assessing research activities would suggest that it is appropriate.\n\nTopics for further investigation\nThe scope of this article is limited to direct research outputs (such as published papers, citations, and patents). While this is valuable, stronger conclusions could be drawn if this analysis were combined with further work investigating the following:\n\nThe impact of other sources of new knowledge apart from universities or official R&D expenditure. 
For example, innovations in company management discovered through “learning by doing” rather than through formal research may be an important source of improvement in economic productivity. \nThe translation of research outputs (such as published papers, citations, and patents) into incremental knowledge, and the translation of incremental knowledge into extra productive capacity. Assessment of this may be achievable through consideration of the economic returns to research, or of the value of patents generated by research.\n\nImplications for AI\nThe scope for an intelligence explosion is likely to be greater if the returns to scale in research are greater. In particular, an AI system capable of conducting research into the improvement of AI may be able to be scaled up faster and more cheaply than the training of human researchers, for example through deployment on additional hardware. In addition, in the period before any intelligence explosion, a scaling-up of AI research may be observed, especially if the resultant technology were seen to have commercial applications.\nThis review is one component of a larger project to quantitatively model an intelligence explosion. This project, in addition to drawing upon the conclusions in this article, will also consider inter alia the effect of intelligence on research productivity, and actual increases in artificial intelligence that are plausible from research efforts.", "url": "https://aiimpacts.org/returns-to-scale-in-research/", "title": "Returns to scale in research", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2016-07-06T16:13:56+00:00", "paged_url": "https://aiimpacts.org/feed?paged=17", "authors": ["Michael Wulfsohn"], "id": "67e2b7bbaffffccbd1a19faeb9c71103", "summary": []} {"text": "Selected Citations\n\nThis page is out-of-date. Visit the updated version of this page on our wiki.\n\nA non-exhaustive collection of places where AI Impacts’ work has been cited.\nAI Timelines\n\nMuehlhauser, Luke. 2015. “What Do We Know about AI Timelines?” Open Philanthropy Project. (archive)\nMuehlhauser, Luke. 2015. “What should we learn from past AI forecasts?” Open Philanthropy Project. (archive)\nBrundage, Miles. 2016. “Modeling Progress in AI“. arXiv:1512.05849. arXiv. \nMcClusky, Peter. 2016. “AGI Timelines“. Bayesian Investor Blog. (archive)\n\nComputing Power and the Brain\n\nHsu, Jeremy. 2016. “Estimate: Human Brain 30 Times Faster than Best Supercomputers“. IEEE Spectrum. (archive)\n\nCost of Computing\n\nHoffman, Ben. 2016. “Paths to singleton: a hierarchical conceptual framework“. Ben Models the World. (archive)\n\nMathematical Conjectures\n\nFuture of Life Institute. 2015. “A survey of research questions for robust and beneficial AI“. The Future of Life Institute.\n\nExpert Predictions\n\nHanson, Robin. 2016. “Age of Em“. Oxford University Press.\n\nDiscontinuous Technological Progress Bounties\n\nHsu, Jeremy. 2016. “Making Sure AI’s Rapid Rise Is No Surprise“. Discover Blogs: Lovesick Cyborg. 
(archive)\n", "url": "https://aiimpacts.org/selected-citations/", "title": "Selected Citations", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2016-06-29T01:58:02+00:00", "paged_url": "https://aiimpacts.org/feed?paged=18", "authors": ["Katja Grace"], "id": "19953516878673f57db77a952cbd9fd3", "summary": []} {"text": "Error in Armstrong and Sotala 2012\n\nBy Katja Grace, 17 May 2016\nCan AI researchers say anything useful about when strong AI will arrive?\nBack in 2012, Stuart Armstrong and Kaj Sotala weighed in on this question in a paper called ‘How We’re Predicting AI—or Failing To‘. They looked at a dataset of predictions about AI timelines, and concluded that predictions made by AI experts were indistinguishable from those of non-experts. (Which might suggest that AI researchers don’t have additional information).\nAs far as I can tell—and Armstrong and Sotala agree—this finding is based on an error. Not a fundamental philosophical error, but a spreadsheet construction and interpretation error.\nThe main clue that there has been a mistake is that their finding is about experts and non-experts, and their public dataset does not contain any division of people into experts and non-experts. (Hooray for publishing data!)\nAs far as we can tell, the column that was interpreted as ‘is this person an expert?’ was one of eight tracking ‘by what process did this person arrive at a prediction?’ The possible answers are ‘outside view’, ‘noncausal model’, ‘causal model’, ‘philosophical argument’, ‘expert authority’, ‘non-expert authority’, ‘restatement’ and ‘unclear’.\nBased on comments and context, ‘expert authority’ appears to mean here that either the person who made the prediction is an expert who consulted their own intuition on something without providing further justification, or that the predictor is a non-expert who used expert judgments to inform their opinion. So the predictions not labeled ‘expert authority’ are a mixture of predictions made by experts using something other than their intuition—e.g. models and arguments—and predictions made by non-experts which are based on anything other than reference to experts. Plus restatements and unclarity that don’t involve any known expert intuition.\nThe reasons to think that the ‘expert authority’ column was misintepreted as an ‘expert’ column are A) that there doesn’t seem to be any other plausible expert column, B) that the number of predictions labeled with ‘expert authority’ is 62, the same as the number of experts Armstrong and Sotala claimed to have compared (and the rest of the set is 33, the number of non-experts they report), and C) Sotala suggests this is what must have happened.\nHow bad a problem is this? How badly does using unexplained expert opinion as a basis for prediction align with actually being an expert?\nEven without knowing exactly what an expert is, we can tell the two aren’t all that well aligned because Armstrong and Sotala’s dataset contains many duplicates: multiple records of the same person making predictions in different places. All of these people appear at least twice, at least once relying ‘expert authority’ and at least once not: Rodney Brooks, Ray Kurzweil, Jürgen Schmidhuber, I. J. Good, Hans Moravec. It is less surprising that experts and non-experts have similar predictions when they are literally the same people! 
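Anyone wanting to repeat this kind of check on the published dataset could do so with a few lines of pandas; the sketch below assumes a CSV export with hypothetical column names ("predictor", "basis") rather than the spreadsheet's real headers.

# A minimal sketch of the consistency check described above: flag people who
# appear in the dataset under more than one basis-of-prediction label. The
# file name and column names are placeholders, not the actual headers of the
# published spreadsheet.
import pandas as pd

predictions = pd.read_csv("ai_predictions.csv")
labels_per_person = predictions.groupby("predictor")["basis"].nunique()
inconsistent = labels_per_person[labels_per_person > 1]
print(f"{len(inconsistent)} people appear under more than one label:")
print(sorted(inconsistent.index))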
But multiple entries of the same people listed as experts and non-experts only accounts for a little over 10% of their data, so this is not the main thing going on.\nI haven’t checked the data carefully and assessed people’s expertise, but here are other names that look to me like they fall in the wrong buckets if we intend ‘expert’ to mean something like ‘works/ed in the field of artificial intelligence’: Ben Goertzel (not ‘expert authority’), Marcus Hutter (not ‘expert authority’), Nick Bostrom (‘expert authority’), Kevin Warwick (not ‘expert authority’), Brad Darrach (‘expert authority’).\nExpertise and ‘expert authority’ seem to be fairly related (there are only about 10 obviously dubious entries, out of 95—though 30 are dubious for other reasons), but not enough to take the erroneous result as much of a sign about experts I think.\nOn the other hand, it seems Armstrong and Sotala have a result they did not intend: predictions based on expert authority look much like those not based on expert authority. Which sounds interesting, though given the context is probably not surprising: whether someone cites reasons with their prediction is probably fairly random, as indicated by several people basing their predictions on expert authority half of the time. e.g. Whether Kurzweil mentions hardware extrapolation on a given occasion doesn’t vary his prediction much. A worse problem is that the actual categorization is ‘most non-experts’ + ‘experts who give reasons for their judgments’ vs. ‘experts who don’t mention reasons’ + ‘non-experts who listen to experts’, which is pretty random, and so hard to draw useful conclusions from.\nWe don’t have time right now to repeat this analysis after actually classifying people as experts or not, even though it looks straightforward. We delayed some in the hope of doing that, but it looks like we won’t get to it soon, and it seems best to publish this post sooner to avoid anyone relying on the erroneous finding.\nIn the meantime, here is our graph again of predictions from AI researchers, AGI researchers, futurists and other people—the best proxy we have of ‘expert vs. non-expert’. We think they look fairly different, though they are from the same dataset that Armstrong and Sotala used (though an edited version).\nPredictions made by different groups since 2000 from the MIRI AI predictions dataset.\n \nStuart Armstrong adds the following analysis, in the style of the graphs on figure 18 of their paper that it could replace:\n\nAlso please forgive the colour hideousness of the following graph:\n.\n\n\n\n.\nHere I did a bar chart of “time to AI after” for the four groups (and for all of them together), in 5-year bar increments (the last bar has all the 75 year+ predictions, not just 75-80). The data is incredibly sparse, but a few patterns do emerge: AGI are optimistic (and pretty similar to futurists), Others are pessimistic.\n.\nHowever, to within the limits of the data, I’d say that all groups (apart from “other”) still have a clear tendency to predict 10-25 years in the future more often than other dates. 
Here’s the % predictions in 10-25 years, and over 75 years:\n.\n\n\n\n \n\n\n%10-25\n%>75\n\n\n54%\n8%\n AGI\n\n\n27%\n23%\n AI\n\n\n47%\n20%\n Futurist\n\n\n13%\n50%\n Other\n\n\n36%\n22%\n\n\n\n\n \n ", "url": "https://aiimpacts.org/error-in-armstrong-and-sotala-2012/", "title": "Error in Armstrong and Sotala 2012", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2016-05-17T21:04:30+00:00", "paged_url": "https://aiimpacts.org/feed?paged=18", "authors": ["Katja Grace"], "id": "9c2ac72c5bb871fc7ff2cd72a7a5839c", "summary": []} {"text": "Metasurvey: predict the predictors\n\nBy Katja Grace, 12 May 2016\nAs I mentioned earlier, we’ve been making a survey for AI researchers.\nThe survey asks when AI will be able to do things like build a lego kit according to the instructions, be a surgeon, or radically accelerate global technological development. It also asks about things like intelligence explosions, safety research, how hardware hastens AI progress, and what kinds of disagreement AI researchers have with each other about timelines.\nWe wanted to tell you more about the project before actually surveying people, to make criticism more fruitful. However it turned out that we wanted to start sending out the survey soon even more than that, so we did. We did get an abundance of private feedback, including from readers of this blog, for which we are grateful.\nWe have some responses so far, and still have about a thousand people to ask. Before anyone (else) sees the results though, I thought it might be amusing to guess what they look like. That way, you can know whether you should be surprised when you see the results, and we can know more about whether running surveys like this might actually change anyone’s beliefs about anything.\nSo we made a second copy of the survey to act as metasurvey, in which you can informally register your predictions.\nIf you want to play, here is how it works:\n\nGo to the survey here.\nInstead of answering the questions as they are posed, guess what the median answer given by our respondents is for each question.\nIf you want to guess something other than the median given by our other respondents, do so, then write what you are predicting in the box for comments at the end. (e.g. maybe you want to predict the mode, or the interquartile range, or what the subset of respondents who are actually AI researchers say).\nIf you want your predictions to be identifiable to you, give us your name and email at the end.  This will for instance let us alert you if we notice that you are surprisingly excellent at predicting. We won’t make names or emails public.\nAt the end, you should be redirected to a printout of your answers, which you can save somewhere if you want to be able to demonstrate later how right you were about stuff. There is a tiny pdf export button in the top right corner.\nYou will only get a random subset of questions to predict, because that’s how the survey works. If you want to make more predictions, the printout has all of the questions.\nWe might publish the data or summaries of it, other than names and email addresses, in what we think is an unidentifiable form.\n\nSome facts about the respondents, to help predict them:\n\nThey are NIPS 2015/ICML 2015 authors (so a decent fraction are not AI researchers)\nThere are about 1600 of them, before we exclude people who don’t have real email addresses etc.\n\nJohn Salvatier points out to me that the Philpapers survey did something like this (I think more formally). 
It appears to have been interesting—they find that ‘philosophers have substantially inaccurate sociological beliefs about the views of their peers’, and that ‘In four cases [of thirty], the community gets the leading view wrong…In three cases, the community predicts a fairly close result when in fact a large majority supports the leading view’. If it turned out that people thinking about the future of AI were that wrong about the AI community’s views, I think that would be good to know about.\n\n \nFeatured image: By DeFacto (Own work) [CC BY-SA 4.0], via Wikimedia Commons ", "url": "https://aiimpacts.org/metasurvey-predict-the-predictors/", "title": "Metasurvey: predict the predictors", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2016-05-13T00:11:05+00:00", "paged_url": "https://aiimpacts.org/feed?paged=18", "authors": ["Katja Grace"], "id": "abce7384ffc944153aa19f8772cac5ca", "summary": []} {"text": "Concrete AI tasks bleg\n\nBy Katja Grace, 30 March 2016\nWe’re making a survey. I hope to write soon about our general methods and plans, so anyone kind enough to criticize them has the chance. Before that though, we have a different request: we want a list of concrete tasks that AI can’t do yet, but may achieve sometime between now and surpassing humans at everything. For instance, ‘beat a top human Go player in a five game match’ would have been a good example until recently. We are going to ask AI researchers to predict a subset of these tasks, to better chart the murky path ahead.\nWe hope to:\n\nInclude tasks from across the range of AI subfields\nInclude tasks from across the range of time (i.e. some things we can nearly do, some things that are really hard)\nHave the tasks relate relatively closely to narrowish AI projects, to make them easier to think about (e.g. winning a 5k bipedal race is fairly close to existing projects, whereas winning an interpretive dance-off would require a broader mixture of skills, so is less good for our purposes)\nHave the tasks relate to specific hard technical problems (e.g. one-shot learning or hierarchical planning)\nHave the tasks relate to large changes in the world (e.g. replacing all drivers would viscerally change things)\n\nHere are some that we have:\n\nWin a 5km race over rough terrain against the best human 5k runner.\nPhysically assemble any LEGO set given the pieces and instructions.\nBe capable of winning an International Mathematics Olympiad Gold Medal (ignoring entry requirements). That is, solve mathematics problems with known solutions that are hard for the best high school students in the world, better than those students can solve them.\nWatch a human play any computer game a small number of times (say 5), then perform as well as human novices at the game without training more on the game. (The system can train on other games).\nBeat the best human players at Starcraft, with a human-like limit on moves per second.\nTranslate a new language using unlimited films with subtitles in the new language, but the kind of training data we have now for other languages (e.g. 
same text in two languages for many languages and films with subtitles in many languages).\nBe about as good as unskilled human translation for most popular languages (including difficult languages like Czech, Chinese and Arabic).\nAnswer tech support questions as well as humans can.\nTrain to do image classification on half a dataset (say, ImageNet) then take the other half of the images, containing previously unseen objects, and separate them into the correct groupings (without the correct labels of course).\nSee a small number of examples of a new object (say 10), then be able to recognize it in novel scenes as well as humans can.\nReconstruct a 3d scene from a 2d image as reliably as a human can.\nTranscribe human speech with a variety of accents in a quiet environment as well as humans can.\nRoutinely and autonomously prove mathematical theorems that are publishable in mathematics journals today.\n\nCan you think of any interesting ones?", "url": "https://aiimpacts.org/concrete-ai-tasks-bleg/", "title": "Concrete AI tasks bleg", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2016-03-30T18:09:28+00:00", "paged_url": "https://aiimpacts.org/feed?paged=18", "authors": ["Katja Grace"], "id": "c330f5ba4a8ad6466b60c9b3363c7ec5", "summary": []} {"text": "Mysteries of global hardware\n\n\nBy Katja Grace, 7 March 2016\nThis blog post summarizes recent research on our Global Computing Capacity page. See that page for full citations and detailed reasoning.\nWe recently investigated this intriguing puzzle:\n\nFLOPS (then) apparently performed by all of the world’s computing hardware: 3 x 1022 – 3 x 1024\n\n(Support: Vipul Naik estimated 1022 – 1024 IPS by February 2014, and reports a long term growth rate of 85% per year for application-specific computing hardware, which made up 97% of hardware by 2007, suggesting total hardware should have tripled by now. FLOPS are roughly equivalent to IPS).\n\nPrice of FLOPS: $3 x 10-9\n\n(Support: See our page on it)\n\nImplied value of global hardware: $1014-1016\n\n(Support: 3 x 1022 to 3 x 1024 * $3 x 10-9 = $1014-1016 )\n\nEstimated total global wealth: $2.5 * 1014\n\n(Support: see for instance Credit Suisse)\n\nImplication: 40%-4,000% of global wealth is in the form of computing hardware.\n\nQuestion: What went wrong?\n\nClues\nCould most hardware be in large-scale, unusually cheap, projects? Probably not – our hardware price figures include supercomputing prices. Also, Titan is a supercomputer made from GPUs and CPUs, and doesn’t seem to be cheaper per computation than the component GPUs and CPUs.\nCould the global wealth figure be off? We get roughly the same anomaly when comparing global GDP figures and the value of computation used annually.\nOur solution\nWe think the estimate of global hardware is the source of the anomaly. We think this because the amount that people apparently spend on hardware each year doesn’t seem like it would buy nearly this much hardware.\nAnnual hardware revenue seems to be around $300bn-$1,500bn recently.1 Based on the prices of FLOPS, (and making some assumptions, e.g. about how long hardware lasts) this suggests the total global stock of hardware can perform around 7.5 x 1019 – 1.5 x 1021 FLOPS/year.2 However the lower end of this range is below a relatively detailed estimate of global hardware made in 2007. 
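The arithmetic behind both the puzzle and the revenue-based alternative fits in a short Python sketch; all inputs are the figures quoted above, and the printed percentages are slightly more precise than the rounded 40%-4,000% in the text.

# Back-of-the-envelope check of the puzzle, and of the revenue-based estimate.
PRICE_PER_FLOPS = 3e-9    # dollars per FLOPS of capacity, i.e. ~$3/GFLOPS
GLOBAL_WEALTH = 2.5e14    # dollars

for flops in (3e22, 3e24):
    value = flops * PRICE_PER_FLOPS
    print(f"{flops:.0e} FLOPS -> implied hardware value ${value:.0e} "
          f"({value / GLOBAL_WEALTH:.0%} of global wealth)")

# Revenue-based estimate: annual spending at the same price, with hardware
# assumed to last roughly three years. (The 7.5e19 low end quoted above
# reflects a further downward adjustment discussed on the Global Computing
# Capacity page.)
for annual_spend in (300e9, 1500e9):
    stock = annual_spend / PRICE_PER_FLOPS * 3
    print(f"${annual_spend:.0e}/year of spending -> about {stock:.1e} FLOPS of stock")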
It seems unlikely that the hardware base actually shrunk in recent years, so we push our estimate up to 2 x 1020 – 1.5 x 1021 FLOPS/year.\nThis is about 0.3%-1.9% of global GDP—a more plausible number, we think—so resolves the original problem. But a big reason Naik gave such high estimates for global hardware was that the last time someone measured it—between 1986 and 2007—computing hardware was growing very fast. General purpose computing was growing at 61% per year, and the application specific computers studied (such as GPUs) were growing at 86% per year. Application specific computers made up the vast majority too, so we might expect growth to progress at close to 86% per year.\nHowever if global hardware is as low as we estimate, the growth rate of total computing hardware since 2007 has been 25% or less, much lower than in the previous 21 years. Which would present us with another puzzle: what happened?\nWe aren’t sure, but this is still our best guess for the solution to the original puzzle. Hopefully we will have time to look into this puzzle too, but for now I’ll leave interested readers to speculate.\n\n \nAdded March 11 2016: Assuming the 2007 hardware figures are right, how much of the world’s wealth was in hardware in 2007? Back then, GWP was probably about $66T (in 2007 dollars). According to Hilbert & Lopez, the world could then perform 2 x 1020 IPS, which is  2 x 1014 MIPS. According to Muehlhauser & Rieber, hardware cost roughly $5 x 10-3/MIPS in 2007. Thus the total value of hardware would have been around $5 x 10-3/MIPS x 2 x 1014 MIPS = $1012 (a trillion dollars), or 1.5% of GWP.\n\n \nTitan Supercomputer. By an employee of the Oak Ridge National Laboratory.\n\n ", "url": "https://aiimpacts.org/mysteries-of-global-hardware/", "title": "Mysteries of global hardware", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2016-03-08T00:45:24+00:00", "paged_url": "https://aiimpacts.org/feed?paged=18", "authors": ["Katja Grace"], "id": "d5fba1afeff979c94a3ab774afb6e816", "summary": []} {"text": "Global computing capacity\n\n[This page is out of date and its contents may have been inaccurate in 2015, in light of new information that we are yet to integrate. See Computing capacity of all GPUs and TPUs for a related and more up-to-date analysis.]\nComputing capacity worldwide was probably around 2 x 1020 – 1.5 x 1021 FLOPS, at around the end of 2015.\nSupport\nWe are not aware of recent, plausible estimates for hardware capacity.\nVipul Naik estimated global hardware capacity in February 2014, based on Hilbert & Lopez’s estimates for 1986-2007. He calculated that if all computers ran at full capacity, they would perform 10-1000 zettaFLOPS, i.e. 1022 – 1024 FLOPS.1 We think these are substantial overestimates, because producing so much computing hardware would cost more than 10% of gross world product (GWP), which is implausibly high. The most cost-efficient computing hardware we are aware of today are GPUs, which still cost about $3/GFLOPS, or $1/GFLOPSyear if we assume hardware is used for around three years. This means maintaining hardware capable of 1022 – 1024 FLOPS would cost at least $1013 – $1015  per year. Yet gross world product (GWP) is only around $8 x 1013, so this would imply hardware spending constitutes more than 13% – 1300% of GWP. Even the lower end of this range seems implausible.2\nOne way to estimate global hardware capacity ourselves is based on annual hardware spending. This is slightly complicated because hardware lasts for several years. 
So to calculate how much hardware exists in 2016, we would ideally like to know how much was bought in every preceding year, and also how much of each annual hardware purchase has already been discarded. To simplify matters, we will instead assume that hardware lasts for around three years.\nIt appears that very roughly $300bn-$1,500bn was spent on hardware in 2015.3 We previously estimated that the cheapest available hardware (in April 2015) was around $3/GFLOPS. So if humanity spent $300bn-$1,500bn on hardware in 2015, and it was mostly the cheapest hardware, then the hardware we bought should perform around 1020 – 5 x 1020 FLOPS. If we multiply this by three to account for the previous two years’ hardware purchases still being around, we have about  3 x 1020 – 1.5 x 1021 FLOPS.\nThis estimate is rough, and could be improved in several ways. Most likely, more hardware is being bought each year than the previous year. So approximating last years’ hardware purchase to this years’ will yield too much hardware. In particular, the faster global hardware is growing, the closer the total is to whatever humanity bought this year (that is, counterintuitively, if you think hardware is growing faster, you should suppose that there is less of it by this particular method of estimation). Furthermore, perhaps a lot of hardware is not the cheapest for various reasons. This too suggests there is less hardware than we estimated.\nOn the other hand, hardware may often last for more than three years (we don’t have a strong basis for our assumption there). And our prices are from early 2015, so hardware is likely somewhat cheaper now (in early 2016). Our guess is that overall these considerations mean our estimate should be lower, but probably by less than a factor of four in total. This suggests 7.5 x 1019 – 1.5 x 1021 FLOPS of hardware.\nHowever Hilbert & Lopez (2012) estimated that in 2007 the world’s computing capacity was around 2 x 1020 IPS (similar to FLOPS) already, after constructing a detailed inventory of technologies.4 Their estimate does not appear to conflict with data about the global economy at the time.5 Growth is unlikely to have been negative since 2007, though Hilbert & Lopez may have overestimated. So we revise our estimate to 2 x 1020 – 1.5 x 1021 FLOPS for the end of 2015.\nThis still suggests that in the last nine years, the world’s hardware has grown by a factor of 1-7.5, implying a growth rate of 0%-25%. Even 25% would be quite low compared to growth rates between 1986 and 2007 according to Hilbert & Lopez (2012), which were 61% for general purpose computing and 86% for the set of ASICs they studied (which in 2007 accounted for 32 times as much computing as general purpose computers).6 However if we are to distrust estimates which imply hardware is a large fraction of GWP, then we must expect hardware growth has slowed substantially in recent years. For comparison, our estimates are around 2-15% of Naik’s lower bound, and suggest that hardware constitutes around 0.3%-1.9% of GWP.\nSuch large changes in the long run growth rate are surprising to us, and—if they are real—we are unsure what produced them. One possibility is that hardware prices have stopped falling so fast (i.e. Moore’s Law is ending for the price of computation). Another is that spending on hardware decreased for some reason, for instance because people stopped enjoying large returns from additional hardware. 
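As a rough check, the growth rate, the comparison with Naik's estimate, and the GWP share quoted above can be recomputed in a few lines of Python; the GWP value used here (about $78 trillion) is an assumption chosen to be close to the figure cited earlier, and the hardware lifetime assumption is the same three years used above.

# Rough check of the implications of the revised stock estimate.
STOCK_LOW, STOCK_HIGH = 2e20, 1.5e21   # FLOPS, end of 2015
STOCK_2007 = 2e20                      # Hilbert & Lopez's 2007 estimate (IPS ~ FLOPS)
NAIK_LOW = 1e22                        # lower bound of Naik's estimate
GWP = 7.8e13                           # rough gross world product, dollars (assumption)
DOLLARS_PER_FLOPS_YEAR = 1e-9          # ~$1/GFLOPS-year, i.e. $3/GFLOPS over ~3 years

growth_high = (STOCK_HIGH / STOCK_2007) ** (1 / 9) - 1
print(f"implied annual growth over nine years: 0% to {growth_high:.0%}")
print(f"share of Naik's lower bound: {STOCK_LOW / NAIK_LOW:.0%} to {STOCK_HIGH / NAIK_LOW:.0%}")
print(f"hardware as a share of GWP: "
      f"{STOCK_LOW * DOLLARS_PER_FLOPS_YEAR / GWP:.1%} to "
      f"{STOCK_HIGH * DOLLARS_PER_FLOPS_YEAR / GWP:.1%}")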
We think this question deserves further research.\nImplications\nGlobal computing capacity in terms of human brains\nAccording to different estimates, the human brain performs the equivalent of between 3 x 1013 and 1025 FLOPS. The median estimate we know of is 1018 FLOPS. According to that median estimate and our estimate of global computing hardware, if the world’s entire computing capacity could be directed at running minds around as efficient as those of humans, we would have the equivalent of 200-1500 extra human minds.7 That is, turning all of the world’s hardware into human-efficiency minds at present would increase the world’s population of minds by at most about 0.00002%. If we select the most favorable set of estimates for producing large numbers, turning all of the world’s computing hardware into minds as efficient as humans’ would produce around 50 million extra minds, increasing the world’s effective population by about 1%.8\nFigure: Projected number of human brains equivalent to global hardware under various assumptions. For brains, ‘small’ = 3 x 10^ 13, ‘median’ = 10^18, ‘large’ = 10^25. For ‘world hardware’, ‘high’ =2 x 10^20, ‘low’ = 1.5 x 10^21. ‘Growth’ is growth in computing hardware, the unlabeled default used in most projections is 25% per annum (our estimate above), ‘high’ = 86% per annum (the apparent growth rate in ASIC hardware in around 2007).\n", "url": "https://aiimpacts.org/global-computing-capacity/", "title": "Global computing capacity", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2016-02-17T01:21:19+00:00", "paged_url": "https://aiimpacts.org/feed?paged=18", "authors": ["Katja Grace"], "id": "9c3b7f40329d59bae99d83a0ba2b5571", "summary": []} {"text": "Coordinated human action as example of superhuman intelligence\n\nCollections of humans organized into groups and institutions provide many historical examples of the creation and attempted control of intelligences that routinely outperform individual humans. A preliminary look at the available evidence suggests that individuals are often cognitively outperformed in head-to-head competition with groups of similar average intelligence. This article surveys considerations relevant to the topic and lays out what a plausible research agenda in this area might look like.\nBackground\nHumans are often organized into groups in order to perform tasks beyond the abilities of any single human in the group. Many such groups perform cognitive tasks. The history of forming such groups is long and varied, and provides some evidence about what new forms of superhuman intelligence might be like. \nSome examples of humans cooperating on a cognitive task that no one member could perform include:\n\nTen therapists can see ten times as many patients as one therapist can.\nA hospital can perform many more kinds of medical procedure and treat many more kinds of illness than any one person in the hospital.\nA team of friends on trivia night might be able to answer more questions than any one of them individually might.\n\nHow such institutions are formed, and the sensitivity of their behavior to starting conditions, may help us predict the behavior of similarly constituted AIs or systems of AIs. 
This information would be especially useful if control or value alignment problems have been solved in some cases, or to the extent that existing human institutions resemble superintelligences or constitute an intelligence explosion.\nThere are several reasons these kinds of groups may present only a limited analogy to digital artificial intelligence. For instance, humans have no software-hardware distinction, so physical measures such as fences that can control the spread of humans are not likely to be as reliable at controlling the spread of digital intelligences. An individual human cannot easily be separated into different cognitive modules, which limits the design flexibility of intelligences constructed from humans. More generally, AIs may be programmed in ways very different from the heuristics and algorithms executed by the human brain, so while human organizations may be a kind of superhuman intelligence, they are not necessarily representative of the broader space of possible superintelligences.\nQuestions for further investigation:\n\nDo any human organizations have the characteristics of superintelligences that some AI researchers and futurists expect to cause an intelligence explosion with catastrophic consequences? If so, do we expect catastrophe from human organizations? If not, what distinguishes them from other, potential artificial intelligences?\nHow similar is the problem of controlling institutional behavior to the value alignment problem with respect to powerful digital AIs? Are the expected consequences similar?\nDo control mechanisms require limiting the cognitive performance of groups, or are there control mechanisms that do not appear to degrade in effectiveness as the intelligence of the group increases?\nHow relevant are the differences between human collective intelligence and digital AI?\n\nGroup vs individual performance\nInstitutions are mainly relevant as an example of constructed intelligence if their intelligence is higher than that of humans, in some sense. This section examines reasons to believe this might be the case.\nMechanisms for cognitive superiority of groups\nWe can think of several mechanisms by which a group might outperform individual humans on cognitive tasks, although this list is not comprehensive:\n\nAggregation – A large number of people can often perform cognitive tasks at a higher rate than a single person performing the same tasks. For example, a large accounting firm ought to be able to perform more audits, or prepare more tax returns, than a single accountant. In practice, there are often impediments to work scaling linearly with the number of people involved, as noted in observations such as Parkinson’s Law.\nCognitive economies of scale\n\nIt is often less costly to teach someone how to perform a task than for them to figure it out on their own. Knowledge transfer between members of a group may therefore accelerate the learning process.\nIndividuals with different skills can cooperate to produce things or quantities of things that no one person could have produced, through specialization and gains from trade. 
For example, I, Pencil describes the large number of processes, each requiring a very different set of skills and procedures that it would take a long time to learn, to produce a single pencil.\n\n\n\n\nModel combination and adjustment\n\nIn groups solving problems, people can make different suggestions and identify one another’s incorrect suggestions, which may help the group avoid wasting time on blind alleys or adopting premature, incorrect solutions.\nThe average of the individual estimates from a group of people is typically more reliably accurate than the estimate of any individual in the group, because random errors tend to cancel each other out. This is often called the “wisdom of crowds”.\nGroups of people can also coordinate by comparing predictions and accepting the claim the group finds most credible. Trivia teams typically use this strategy. Groups of people have also been pitted against individuals in chess games.\nMarkets can be used to combine information from many individuals.\n\n\n\nFurther investigation on this topic could include:\n\nGenerating a more comprehensive list of potential mechanisms by which institutions and groups may have a cognitive advantage, by examining the historical record, arguments, and experimental and case studies of individual vs group performance.\nAssessing which mechanisms can be shown to work, and how much group intelligence can exceed individual intelligence, by evaluating historical examples, case studies, and experimental studies.\nAssessing in which aspects of intelligence, if any, groups have not outperformed individuals.\n\nEvidence of cognitive superiority of groups\nAn incomplete survey of literature on collective intelligence found several measures where group performance, distinct from individual performance, has been explicitly evaluated:\n\nWooley et al. 2010 examined the performance of groups on tasks such as solving visual puzzles, brainstorming, making collective moral judgments, negotiating over limited resources, and playing checkers against a standardized computer opponent. The study found correlation between performance on different tasks, related more to the ability of members to coordinate than to the average or maximum intelligence of group members.\nShaw 1932 compared the timed performance of individuals and four-person groups on simple spatial and logical reasoning problems, and verbal tasks (arranging a set of words to form the end of some text). The study found that on problems where anyone was able to solve them, groups substantially outperformed individuals, mostly by succeeding much more often than individuals did. No one was able to solve the last two problems, but the study did find that on those problems, suggestions rejected during the process of group problem-solving were predominantly incorrect suggestions rejected by someone other than the person who proposed them, which shows error-correction to be potentially an important part of the advantage of group cognition.\nThorndike 1938 compared group and individual performance on vocabulary completion, limerick completion, and solving and making cross-word puzzle tests. Groups outperformed individuals on everything except making crossword puzzles.\nTaylor and Faust 1952 tested the ability of individuals, groups of two, and groups of four, to solve “twenty questions” style problems. Groups outperformed individuals, but larger groups did not outperform smaller groups.\nGurnee 1936 compared individual and group performance at maze learning. 
Groups completed mazes faster and with fewer false moves.\nGordon 1924 compared individual estimates of an object’s weight with the average of members of a group. The study found that group averages outperformed individual estimates, and that larger groups performed better than smaller groups.\nMcHaney et al. 2015 compared the performance of individuals, ad hoc groups, and groups with a prior history of working together, at detecting deception. The study found that groups with a prior history of working together outperform ad hoc groups, and refers to earlier literature that found no difference between the performance of individuals and that of ad hoc groups.\n\nMostly these studies appear to show groups outperforming individuals. We also found review articles referencing tens of other studies. We may follow up with a more comprehensive review of the evidence in this area in the future.\nQuestions for further investigation:\n\nWhich of the possible mechanisms for cognitive superiority of groups do human institutions demonstrate in practice? Do they have important advantages other than the ones enumerated?\nIn what contexts has the difference between group and individual performance been measured? Are there measures on which large organizations do much better than a single human? On what kinds of tasks does group performance most exceed that of individuals? How are these groups constituted?\nAre there measures on which large organizations cannot be arbitrarily better than a single human? (These might still be things that an AI could do much better, and so where organizations are not a good analogue.) Are there measures for which large organizations have not yet even reached human level intelligence? (It is deprecatory to say something was “written by a committee.”)\n", "url": "https://aiimpacts.org/coordinated-human-action-example-superhuman-intelligence/", "title": "Coordinated human action as example of superhuman intelligence", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2016-01-21T16:24:12+00:00", "paged_url": "https://aiimpacts.org/feed?paged=18", "authors": ["Ben Hoffman"], "id": "0eea8983796c77191d40392efd5582dc", "summary": []} {"text": "Recently at AI Impacts\n\nBy Katja Grace, 24 November 2015\nWe’ve been working on a few longer term projects lately, so here’s an update in the absence of regular page additions.\nNew researchers\nStephanie Zolayvar and John Salvatier have recently joined us, to try out research here.\nStephanie recently moved to Berkeley from Seattle, where she was a software engineer at Google. She is making sense of a recent spate of interviews with AI researchers (more below), and investigating purported instances of discontinuous progress. She also just made this glossary of AI risk terminology.\nJohn also recently moved to Berkeley from Seattle, where he was a software engineer at Amazon. He has been interviewing AI researchers with me, helping to design a new survey on AI progress, and evaluating different research avenues.\nI’ve also been working on several smaller scale collaborative projects with other researchers.\nAI progress survey\nWe are making a survey, to help us ask AI researchers about AI progress and timelines. We hope to get answers that are less ambiguous and more current than past timelines surveys. 
We also hope to learn about the landscape of progress in more detail than we have, to help guide our research.\nAI researcher interviews\nWe have been having in-depth conversations with AI researchers about AI progress and predictions of the future. This is partly to inform the survey, but mostly because there are lots of questions where we want elaborate answers from at least one person, instead of hearing everybody’s one word answers to potentially misunderstood questions. We plan to put up notes on these conversations soon.\nBounty submissions\nTen people have submitted many more entries to our bounty experiment. We are investigating these, but are yet to verify that any of them deserve a bounty. Our request was for examples of discontinuous progress, or very early action on a risk. So far the more lucrative former question has been substantially more popular.\nGlossary\nWe just put up a glossary of AI safety terms. Having words for things often helps in thinking about them, so we hope to help in the establishment of words for things. If you notice important words without entries, or concepts without words, please send them our way.", "url": "https://aiimpacts.org/recently-at-ai-impacts/", "title": "Recently at AI Impacts", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-11-24T17:09:48+00:00", "paged_url": "https://aiimpacts.org/feed?paged=18", "authors": ["Katja Grace"], "id": "7010ff4e891aa1580cc9b167b3d92643", "summary": []} {"text": "Glossary of AI Risk Terminology and common AI terms\n\nTerms\nA\nAI timeline\nAn expectation about how much time will lapse before important AI events, especially the advent of human-level AI or a similar milestone. The term can also refer to the actual periods of time (which are not yet known), rather than an expectation about them.\nArtificial General Intelligence (also, AGI)\nSkill at performing intellectual tasks across at least the range of variety that a human being is capable of. As opposed to skill at certain specific tasks (‘narrow’ AI). That is, synonymous with the more ambiguous Human-Level AI for some meanings of the latter.\nArtificial Intelligence (also, AI)\nBehavior characteristic of human minds exhibited by man-made machines, and also the area of research focused on developing machines with such behavior. Sometimes used informally to refer to human-level AI or another strong form of AI not yet developed.\nAssociative value accretion\nA hypothesized approach to value learning in which the AI acquires values using some machinery for synthesizing appropriate new values as it interacts with its environment, inspired by the way humans appear to acquire values (Bostrom 2014, p189-190)1.\nAnthropic capture\nA hypothesized control method in which the AI thinks it might be in a simulation, and so tries to behave in ways that will be rewarded by its simulators (Bostrom 2014 p134).\nAnthropic reasoning\nReaching beliefs (posterior probabilities) over states of the world and your location in it, from priors over possible physical worlds (without your location specified) and evidence about your own situation. For an example where this is controversial, see The Sleeping Beauty Problem. 
For more on the topic and its relation to AI, see here.\nAugmentation\nAn approach to obtaining a superintelligence with desirable motives that consists of beginning with a creature with desirable motives (eg, a human), then making it smarter, instead of designing good motives from scratch (Bostrom 2014, p142).\nB\nBackpropagation\nA fast method of computing the derivative of cost with respect to different parameters in a network, allowing for training neural nets through gradient descent. See Neural Networks and Deep Learning2 for a full explanation.\nBoxing\nA control method that consists of constructing the AI’s environment so as to minimize interaction between the AI and the outside world. (Bostrom 2014, p129).\nC\nCapability control methods\nStrategies for avoiding undesirable outcomes by limiting what an AI can do (Bostrom 2014, p129).\nCognitive enhancement\nImprovements to an agent’s mental abilities.\nCollective superintelligence\n“A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system” (Bostrom 2014, p54).\nComputation\nA sequence of mechanical operations intended to shed light on something other than this mechanical process itself, through an established relationship between the process and the object of interest.\nThe common good principle\n“Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals” (Bostrom 2014, p254).\nCrucial consideration\nAn idea with the potential to change our views substantially, such as by reversing the sign of the desirability of important interventions.\nD\nDecisive strategic advantage\nStrategic superiority (by technology or other means) sufficient to enable an agent to unilaterally control most of the resources of the universe.\nDirect specification\nAn approach to the control problem in which the programmers figure out what humans value, and code it into the AI (Bostrom 2014, p139-40).\nDomesticity\nAn approach to the control problem in which the AI is given goals that limit the range of things it wants to interfere with (Bostrom 2014, p140-1).\nE\nEmulation modulation\nStarting with brain emulations with approximately normal human motivations (see ‘Augmentation’), and modifying their motivations using drugs or digital drug analogs.\nEvolutionary selection approach to value learning\nA hypothesized approach to the value learning problem which obtains an AI with desirable values by iterative selection, the same way evolutionary selection produced humans  (Bostrom 2014, p187-8).\nExistential risk\nRisk of an adverse outcome that would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential (Bostrom 2002)\nF\nFeature\nA dimension in the vector space of activations in a single layer of a neural network (i.e. 
a neuron activation or linear combination of activations of different neurons)\nFirst principal-agent problem\nThe well-known problem faced by a sponsor wanting an employee to fulfill their wishes (usually called ‘the principal agent problem’).\nG\nGenie\nAn AI that carries out a high level command, then waits for another (Bostrom 2014, p148).\nH\nHardware overhang\nA situation where large amounts of hardware being used for other purposes become available for AI, usually posited to occur when AI reaches human-level capabilities.\nHuman-level AI\nAn AI that matches human capabilities in virtually every domain of interest.  Note that this term is used ambiguously; see our page on human-level AI.  \nHuman-level hardware\nHardware that matches the information-processing ability of the human brain.\nHuman-level software\nSoftware that matches the algorithmic efficiency of the human brain, for doing the tasks the human brain does.\nI\nImpersonal perspective\nThe view that one should act in the best interests of everyone, including those who may be brought into existence by one’s choices (see Person-affecting perspective).\nIncentive methods\nStrategies for controlling an AI that consist of setting up the AI’s environment such that it is in the AI’s interest to cooperate. e.g. a social environment with punishment or social repercussions often achieves this for contemporary agents (Bostrom 2014, p131).\nIncentive wrapping\nProvisions in the goals given to an AI that allocate extra rewards to those who helped bring the AI about  (Bostrom 2014, p222-3).\nIndirect normativity\nAn approach to the control problem in which we specify a way to specify what we value, instead of specifying what we value directly (Bostrom, p141-2).\nInstrumental convergence thesis\nWe can identify ‘convergent instrumental values’. That is, subgoals that are useful for a wide range of more fundamental goals, and in a wide range of situations (Bostrom 2014, p109).\nIntelligence explosion\nA hypothesized event in which an AI rapidly improves from ‘relatively modest’ to superhuman level (usually imagined to be as a result of recursive self-improvement).\nM\nMacrostructural development accelerator\nAn imagined lever used in thought experiments which slows the large scale features of history (e.g. technological change, geopolitical dynamics) while leaving the small scale features the same.\nMind crime\nThe mistreatment of morally relevant computations.\nMoore’s Law\nAny of several different consistent, many-decade patterns of exponential improvement that have been observed in digital technologies. The classic version concerns the number of transistors in a dense integrated circuit, which was observed to be doubling around every year when the ‘law’ was formulated in 1965. Price-Performance Moore’s Law is often relevant to AI forecasting.\nMoral rightness (MR) AI\nAn AI which seeks to do what is morally right.\nMotivational scaffolding\nA hypothesized approach to value learning in which the seed AI is given simple goals, and these goals are replaced with more complex ones once it has developed sufficiently sophisticated representational structure (Bostrom 2014, p191-192).\nMultipolar outcome\nA situation after the arrival of superintelligence in which no single agent controls most of the resources. 
\nO\nOptimization power\nThe strength of a process’s ability to improve systems.\nOracle\nAn AI that only answers questions (Bostrom 2014, p145).\nOrthogonality thesis\nIntelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.\nP\nPerson-affecting perspective\nThe view that one should act in the best interests of everyone who already exists, or who will exist independent of one’s choices (see Impersonal perspective).\nPerverse instantiation\nA solution to a posed goal (eg, make humans smile) that is destructive in unforeseen ways (eg, paralyzing face muscles in the smiling position).\nPrice-Performance Moore’s Law\nThe observed pattern of relatively consistent, long term, exponential price decline for computation.\nPrinciple of differential technological development\n“Retard the development of dangerous and harmful technologies, especially the ones that raise the level of existential risk; and accelerate the development of beneficial technologies, especially those that reduce the existential risk posed by nature or by other technologies” (Bostrom 2014, p230).\nPrinciple of epistemic deference\n“A future superintelligence occupies an epistemically superior vantage point: it’s beliefs are (probably, on most topics) more likely than our to be true.  We should therefore defer to the superintelligence’s position whenever feasible” (Bostrom 2014, p226).\nQ\nQuality superintelligence\n“A system that is at least as fast as a human mind and vastly qualitatively smarter” (Bostrom 2014, p56).\nR\nRecalcitrance\nHow difficult a system is to improve.\nRecursive self-improvement\nThe envisaged process of AI (perhaps a seed AI) iteratively improving itself.\nReinforcement learning approach to value learning\nA hypothesized approach to value learning in which the AI is rewarded for behaviors that more closely approximate human values (Bostrom 2014, p188-9).\nS\nSecond principal-agent problem\nThe emerging problem of a developer wanting their AI to fulfill their wishes.\nSeed AI\nA modest AI which can bootstrap into an impressive AI by improving its own architecture.\nSingleton\nAn agent that is internally coordinated and has no opponents.\nSovereign\nAn AI that acts autonomously in the world, in pursuit of potentially long range objectives (Bostrom 2014, p148).\nSpeed superintelligence\n“A system that can do all that a human intellect can do, but much faster” (Bostrom 2014, p53).\nState risk\nA risk that comes from being in a certain state, such that the amount of risk is a function of the time spent there. For example, the state of not having the technology to defend from asteroid impacts carries risk proportional to the time we spend in it.\nStep risk\nA risk that comes from making a transition. Here the amount of risk is not a simple function of how long the transition takes.  
For example, traversing a minefield is not safer if done more quickly.\nStunting\nA control method that consists of limiting the AI’s capabilities, for instance by limiting the AI’s access to information (Bostrom 2014, p135).\nSuperintelligence\nAny intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest (Bostrom 2014, p22).\nT\nTakeoff\nThe event of the emergence of a superintelligence, often characterized by its speed: ‘slow takeoff’ takes decades or centuries, ‘moderate takeoff’ takes months or years, and ‘fast takeoff’ takes minutes to days.\nTechnological completion conjecture\nIf scientific and technological development efforts do not cease, then all important basic capabilities that could be obtained through some possible technology will be obtained (Bostrom 2014, p127).\nTechnology coupling\nA predictable timing relationship between two technologies, such that hastening of the first technology will hasten the second, either because the second is a precursor or because it is a natural consequence (Bostrom 2014, p236-8), e.g. brain emulation is plausibly coupled to ‘neuromorphic’ AI, because the understanding required to emulate a brain might allow one to more quickly create an AI on similar principles.\nTool AI\nAn AI that is not ‘like an agent’, but like a more flexible and capable version of contemporary software. Most notably perhaps, it is not goal-directed (Bostrom 2014, p151).\nU\nUtility function\nA mapping from states of the world to real numbers (‘utilities’), describing an entity’s degree of preference for different states of the world. Given the choice between two lotteries, the entity prefers the lottery with the highest ‘expected utility’, which is to say, the sum of the utilities of the possible states weighted by the probability of those states occurring.\nV\nValue learning\nAn approach to the value loading problem in which the AI learns the values that humans want it to pursue (Bostrom 2014, p207).\nValue loading problem\nThe problem of causing the AI to pursue human values (Bostrom 2014, p185).\nW\nWise-Singleton Sustainability Threshold\nA capability set exceeds the wise-singleton threshold if and only if a patient and existential risk-savvy system with that capability set would, if it faced no intelligent opposition or competition, be able to colonize and re-engineer a large part of the accessible universe (Bostrom 2014, p100).\nWhole-brain emulation\nMachine intelligence created by copying the computational structure of the human brain.\nWord embedding\nA mapping of words to high-dimensional vectors that has been trained to be useful in a word task such that the arrangement of words in the vector space is meaningful. For instance, words near one another in the vector space are related, and similar relationships between different pairs of words correspond to similar vectors between them, so that e.g. if E(x) is the vector for the word ‘x’, then E(king) – E(queen) ≈ E(man) – E(woman). Word embeddings are explained in more detail here.\nNotes", "url": "https://aiimpacts.org/ai-risk-terminology/", "title": "Glossary of AI Risk Terminology and common AI terms", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-10-30T22:58:34+00:00", "paged_url": "https://aiimpacts.org/feed?paged=18", "authors": ["Katja Grace"], "id": "0b113582794901092db13e4a9e4c62f0", "summary": []} {"text": "AI timelines and strategies\n\nAI Impacts sometimes invites guest posts from fellow thinkers on the future of AI. 
These are not intended to relate closely to our current research, nor to necessarily reflect our views. However we think they are worthy contributions to the discussion of AI forecasting and strategy.\nThis is a guest post by Sarah Constantin, 20 August 2015\nOne frame of looking at AI risk is the “geopolitical” stance. Who are the major players who might create risky strong AI? How could they be influenced or prevented from producing existential risks? How could safety-minded institutions gain power or influence over the future of AI? What is the correct strategy for reducing AI risk?\nThe correct strategy depends sharply on the timeline for when strong AI is likely to be developed. Will it be in 10 years, 50 years, 100 years or more? This has implications for AI safety research. If a basic research program on AI safety takes 10-20 years to complete and strong AI is coming in 10 years, then research is relatively pointless. If basic research takes 10-20 years and strong AI is coming more than 100 years from now (if at all), then research can wait. If basic research takes 10-20 years and strong AI is coming in around 50 years, then research is a good idea.\nAnother relevant issue for AI timelines and strategies is the boom-and-bust cycle in AI. Funding for AI research and progress on AI has historically fluctuated since the 1960s, with roughly 15 years between “booms.” The timeline between booms may change in the future, but fluctuation in investment, research funding, and popular attention seems to be a constant in scientific/technical fields.\nEach AI boom has typically focused on a handful of techniques (GOFAI in the 1970’s, neural nets and expert systems in the 1980’s) which promised to deliver strong AI but eventually ran into limits and faced a collapse of funding and investment. The current AI boom is primarily focused on massively parallel processing and machine learning, particularly deep neural nets.\nThis is relevant because institutional and human capital is lost between booms. While leading universities can survive for centuries, innovative companies are usually only at their peak for a decade or so. It is unlikely that the tech companies doing the most innovation in AI during one boom will be the ones leading subsequent booms. (We don’t usually look to 1980’s expert systems companies for guidance on AI today.) If there were to be a Pax Googleiana lasting 50 years, it might make sense for people concerned with AI safety to just do research and development within Google. But the history of the tech industry suggests that’s not likely. Which means that any attempt to influence long-term AI risk will need to survive the collapse of current companies and the end of the current wave of popularity of AI.\nThe “extremely short-term AI risk scenario” (of strong AI arising within a decade) is not a popular view among experts; most contemporary surveys of AI researchers predict that strong AI will arise sometime in the mid-to-late 21st century. If we take the view that strong AI in the 2020’s is vanishingly unlikely (which is more “conservative” than the results of most AI surveys, but may be more representative of the mainstream computer science view), then this has various implications for AI risk strategy that seem to be rarely considered explicitly.\nIn the “long-term AI risk scenario”, there will be at least one “AI winter” before strong AI is developed. We can expect a period (or multiple periods) in the future where AI will be poorly funded and popularly discredited. 
We can expect that there are one or more jumps in innovation that will need to occur before human-level AI will be possible. And, given the typical life cycle of corporations, we can expect that if strong AI is developed, it will probably be developed by an institution that does not exist yet.\nIn the “long-term AI risk scenario”, there will probably be time to develop at least some theory of AI safety and the behavior of superintelligent agents. Basic research in computer science (and perhaps neuroscience) may well be beneficial in general from an AI risk perspective. If research on safety can progress during “AI winters” while progress on AI in general halts, then winters are particularly good news for safety. In this long-term scenario, there is no short-term imperative to cease progress on “narrow AI”, because contemporary narrow AI is almost certainly not risky.\nIn the “long-term AI risk scenario”, another important goal besides basic research is to send a message to the future. Today’s leading tech CEOs will not be facing decisions about strong AI; the critical decisionmakers may be people who haven’t been born yet, or people who are currently young and just starting their careers. Institutional cultures are rarely built to last decades. What can we do today to ensure that AI safety will be a priority decades from now, long after the current wave of interest in AI has come to seem faddish and misguided?\nThe mid- or late 21st century may be a significantly different place than the early 21st century. Economic and political situations fluctuate. The US may no longer be the world’s largest economy. Corporations and universities may look very different. Imagine someone speculating about artificial intelligence in 1965 and trying to influence the world of 2015. Trying to pass laws or influence policy at leading corporations in 1965 might not have had a lasting effect (this would be a useful historical topic to investigate in more detail.)\nAnd what if the next fifty years looks more like the cataclysmic first half of the 20th century than the comparatively stable second half of the 20th century? How could a speculative thinker of 1895 hope to influence the world of 1945?\nEducational and cultural goals, broadly speaking, seem relevant in this scenario. It will be important to have a lasting influence on the intellectual culture of future generations.\nFor instance: if fields of theoretical computer science relevant for AI risk are developed and included in mainstream textbooks, then the CS majors of 2050 who might grow up to build strong AI will know about the concerns being raised today as more than a forgotten historical curiosity. Of course, they might not be CS majors, and perhaps they won’t even be college students. We have to think about robust transmission of information.\nIn the “long-term AI risk scenario”, the important task is preparing future generations of AI researchers and developers to avoid dangerous strong AI. This means performing and disseminating and teaching basic research in new theoretical fields necessary for understanding the behavior of superintelligent agents.\nA “geopolitical” approach is extremely difficult if we don’t know who the players will be. We’d like the future institutions that will eventually develop strong AI to be run and staffed by people who will incorporate AI safety into their plans. 
This means that a theory of AI safety needs to be developed and disseminated widely.\nUltimately, long-term AI strategy bifurcates, depending on whether the future of AI is more “centralized” or “decentralized.”\nIn a “centralized” future, a small number of individuals, perhaps researchers themselves, contribute most innovation in AI, and the important mission is to influence them to pursue research in helpful rather than harmful directions.\nIn a “decentralized” future, progress in AI is spread over a broad population of institutions, and the important mission is to develop something like “industry best practices” — identifying which engineering practices are dangerous and instituting broadly shared standards that avoid them. This may involve producing new institutions focused on safety.\nBasic research is an important prerequisite for both the “centralized” and “decentralized” strategies, because currently we do not know what kinds of progress in AI (if any) are dangerous.\nThe “centralized” strategy means promoting something like an intellectual culture, or philosophy, among the strongest researchers of the future; it is something like an educational mission. We would like future generations of AI researchers to have certain habits of mind: in particular, the ability to reason about the dramatic practical consequences of abstract concepts. The discoverers of quantum mechanics were able to understand that the development of the atomic bomb would have serious consequences for humanity, and to make decisions accordingly. We would like the future discoverers of major advances in AI to understand the same. This means that today, we will need to communicate (through books, schools, and other cultural institutions, traditional and new) certain intellectual and moral virtues, particularly to the brightest young people.\nThe “decentralized” strategy will involve taking the theoretical insights from basic AI research and making them broadly implementable. Are some types of “narrow AI” particularly likely to lead to strong AI? Are there some precautions which, on the margin, make harmful strong AI less likely? Which kinds of precautions are least costly in immediate terms and most compatible with the profit and performance needs of the tech industry? To the extent that AI progress is decentralized and incremental, the goal is to ensure that it is difficult to go very far in the wrong direction. Once we know what we mean by a “wrong direction”, this is a matter of building long-term institutions and incentives that shape AI progress towards beneficial directions.\nThe assumption that strong AI is a long-term rather than a short-term risk affects strategy significantly. Influencing current leading players is not particularly important; promoting basic research is very important; disseminating information and transmitting culture to future generations, as well as building new institutions, is the most effective way to prepare for AI advances decades from now.\nIn the event that AI never becomes a serious risk, developing institutions and intellectual cultures that can successfully reason about AI is still societally valuable. The skill (in institutions and individuals) of taking theoretical considerations seriously and translating them into practical actions for the benefit of humanity is useful for civilizational stability in general. What’s important is recognizing that this is a long-term strategy — i.e. thinking more than ten years ahead. 
Planning for future decades looks different from taking advantage of the current boom in funding and attention for AI and locally hill-climbing.\nSarah Constantin blogs at Otium. She recently graduated from Yale with a PhD in mathematics.", "url": "https://aiimpacts.org/ai-timelines-and-strategies/", "title": "AI timelines and strategies", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-08-21T06:55:25+00:00", "paged_url": "https://aiimpacts.org/feed?paged=18", "authors": ["Katja Grace"], "id": "9c587f3a16f5011509ccb980fbed6fc9", "summary": []} {"text": "Introducing research bounties\n\nBy Katja Grace, 7 August 2015\nSometimes we like to experiment with novel research methods and formats. Today we are introducing ‘AI Impacts Research Bounties‘, in which you get money if you send us inputs to some of our research.\nTo start, we have two bounties: one for showing us instances of abrupt technological progress, and one for pointing us to instances of people acting to avert risks decades ahead of time. Rewards currently range from $20 to $500, and anyone can enter. We may add more bounties, or adjust prices, according to responses. We welcome feedback on any aspect of this experiment.\nThanks to John Salvatier for ongoing collaboration on this project.", "url": "https://aiimpacts.org/introducing-research-bounties/", "title": "Introducing research bounties", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-08-07T07:23:04+00:00", "paged_url": "https://aiimpacts.org/feed?paged=19", "authors": ["Katja Grace"], "id": "8a6953e6b7e321825cb8eff8b813765e", "summary": []} {"text": "AI Impacts research bounties\n\nWe are offering rewards for several inputs to our research, described below. These offers have no specific deadline except where noted. We may modify them or take them down, but will give at least one week’s notice here unless there is strong reason not to. To submit an entry, email katja@intelligence.org. There is currently a large backlog of entries to check, so new entries will not receive a rapid response.\n1. An example of discontinuous technological progress ($50-$500)\nThis bounty offer is no longer available after 3 November 2016.\nWe are interested in finding more examples of large discontinuous technological progress to add to our collection. We’re offering a bounty of around $50-500 per good example.\nWe currently know of two good examples (and one moderate example):\n\nNuclear weapons discontinuously increased the relative effectiveness of explosives.\nHigh temperature superconductors led to a dramatic increase in the highest temperature at which superconducting was possible.\n\nTo assess discontinuity, we’ve been using “number of years worth of progress at past rates”, as measured by any relevant metric of technological progress. For example, the discovery of nuclear weapons was equal to about 6,000 years worth of previous progress in the relative effectiveness of explosives. However, we are also interested in examples that seem intuitively discontinuous, even if they don’t exactly fit the criteria of being a large number of year’s progress in one go.\nThings that make examples better:\n\nSize: Better examples represent larger changes. More than 20 times normal annual progress is ideal.\nSharpness: Better examples happened over shorter periods. Over less than a year is ideal.\nBreadth: Metrics that measure larger categories of things are better. 
For example, fast adoption curves for highly specific categories (say a particular version of some software) are much less interesting than fast adoption curves for much broader categories (say a whole category of software).\nRarity: As we receive more examples, the interestingness of each one will tend to decline.\n\nAI Impacts is willing to pay more for better examples. Basically we will judge how interesting your example is and then reward you based on that. We will accept examples that violate our stated preferences but satisfy the spirit of the bounty. Our guess is that we would pay about $500 for another example as good as nuclear weapons.\nHow to enter: all that is necessary to submit an example is to email us a paragraph describing the example, along with sources to verify your claims (such sources are likely to involve at least one time series of success on a particular metric). Note that an example should be of the form ‘A caused abrupt progress in metric B’. For instance, ‘The boliolicopter caused abrupt progress in the maximum rate of fermblangling at sub-freezing temperatures’.\n2. An example of early action on a risk ($20-$100)\nThis bounty offer is no longer available after 3 November 2016.\nWe want: a one sentence description of a case where at least one person acted to avert a risk that was at least fifteen years away, along with a link or citation supporting the claim that the action preceded the risk by at least fifteen years. \nWe will give: up to $100, with higher sums for examples that are better according to our judgment (see criteria for betterness below), and which we don’t already know about. We might go over $100 for exceptionally good examples.\nFurther details\nExamples are better if:\n\nThe risk is more novel: relatively similar problems have not arisen before, and would probably not arise sooner than fifteen years in the future. e.g. Poverty in retirement is a risk people often prepare for more than fifteen years before it befalls them, however it is not very novel because other people already face an essentially identical risk, and have done so many times before. \nThe solution is more specific: the action taken would not be nearly as useful if the risk disappeared. e.g. Saving money to escape is a reasonable solution to expecting your country to face civil war soon. However saving money is fairly useful in any case, so this solution is not very specific.\nWe haven’t received a lot of examples: as we collect more examples, the value of each one will tend to decline.\n\nSome examples: \n\nLeo Szilard’s secret nuclear patent: the threat of nuclear weapons was quite novel. It’s unclear when Szilard expected such weapons, but in 1934 they were quite plausibly at least fifteen years away. The secret patent does not seem broadly useful, though useful for encouraging more local nuclear research, which is somewhat more broadly useful than secrecy per se. More details in this report. This is a reasonably good example.\nThe Asilomar Conference on recombinant DNA: the risk was arguably quite novel (genetically engineered pandemics), and the solution was reasonably specific (safety rules for dealing with recombinant DNA). However the risks people were concerned about were immediate, rather than decades hence. More details here. 
This is not a good example.\n\nEvidence that the example is better in the above ways is also welcome, though we reserve the right not to explore it fully.", "url": "https://aiimpacts.org/ai-impacts-research-bounties/", "title": "AI Impacts research bounties", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-08-07T06:46:38+00:00", "paged_url": "https://aiimpacts.org/feed?paged=19", "authors": ["Katja Grace"], "id": "4f42bdc2c2ab9bf9ea069fd3557a4502", "summary": []} {"text": "Time flies when robots rule the earth\n\nBy Katja Grace, 28 July 2015\nThis week Robin Hanson is finishing off his much anticipated book, The Age of Em: Work, Love and Life When Robots Rule the Earth. He recently told me that it would be helpful to include rough numbers for the brain’s memory and computing capacity in the book, so I agreed to prioritize finding the ones AI Impacts didn’t already have. Consequently we just put up new pages about information storage in the brain. We also made a summary of related ‘human-level’ hardware pages, and an index of pages about hardware in general.\nRobin’s intended use for these numbers is interesting. The premise of his book is that one day (perhaps in the far future) human minds might be emulated on computers, and that this would produce a society that is somewhere between recognizably human and predictably alien. Robin’s project is a detailed account of life in such a society, as far as it can be discerned by peering through social science and engineering theory.\nOne prediction Robin makes is that these emulated minds (’ems’) will run at a wide variety of speeds, depending on their purposes and the incentives. So some ems will have whole lifetimes while others are getting started on a single thought. Robin wanted to know how slow the very slowest ems would run. And for this,  he wanted to know much memory the brain uses, and how much computing it does.\nHis reasoning is as follows. The main costs of running emulations are computing hardware and memory. If Anna is running twice as fast as Ben, then Anna needs about twice as much computing power to run, which will cost about twice as much. However Anna still uses around as much memory as Ben to store the contents of her brain over time. So if most of the cost of an emulation is in computation, then halving the speed would halve the cost, and would often be worth it. But once an emulation is moving so slowly that memory becomes the main cost, slowing down by half makes little difference, and soon stops being worth it. So the slowest emulations should run at around the speed at which computing hardware and memory contribute similarly to cost.\nHardware and memory costs have been falling at roughly similar rates in the past, so if this continues, then the ratio between their costs now is a reasonable (if noisy) predictor of their ratio in several decades time. Given our numbers, Robin estimates that the slowest emulations will operate at between a one hundred trillionth of human speed and one millionth of human speed, with a middle estimate of one tenth of a billionth of human speed.\nAt these rates, immortality looks a lot like dying. If you had been experiencing the world at these speeds since the beginning of the universe, somewhere between an hour and a thousand years would seem to have passed, with a middle estimate of a year. 
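That middle estimate is easy to sanity-check (a rough sketch; the ~13.8-billion-year age of the universe is an assumed figure here, not one taken from Robin's estimates):

```python
# Subjective time experienced by the slowest ems over the age of the universe.
AGE_OF_UNIVERSE_YEARS = 13.8e9  # assumed figure for this check
MIDDLE_SPEED = 1e-10            # one tenth of a billionth of human speed

subjective_years = AGE_OF_UNIVERSE_YEARS * MIDDLE_SPEED
print(subjective_years)         # ~1.4 subjective years, i.e. roughly a year
```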
Even if the em economy somehow lasts for a thousand years, running this slowly would mean immediately jumping into whatever comes next.\nThese things are rough of course, but it seems pretty cool to me that we can make reasonable guesses at all about such exotic future scenarios, using clues from our everyday world, like the prevailing prices of hard disks and memory.\nIf you are near Berkeley, CA and want to think more about this kind of stuff, or in this kind of style, remember you can come and meet Robin and partake in more economic futurism at our event this Thursday. We expect a good number of attendees already, but could squeeze in a few more.\nImage: Robin Hanson in a field, taken by Katja Grace", "url": "https://aiimpacts.org/time-flies-when-robots-rule-the-earth/", "title": "Time flies when robots rule the earth", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-07-28T22:39:43+00:00", "paged_url": "https://aiimpacts.org/feed?paged=19", "authors": ["Katja Grace"], "id": "58e73c2f02c477cb0dd177108ce0fe0a", "summary": []} {"text": "Costs of human-level hardware\n\nComputing hardware which is equivalent to the brain –\n\nin terms of FLOPS probably costs between $1 x 10^5 and $3 x 10^16, or $2/hour-$700bn/hour.\nin terms of TEPS probably costs $200M – $7B, or $4,700 – $170,000/hour (including energy costs in the hourly rate).\nin terms of secondary memory probably costs $300-3,000, or $0.007-$0.07/hour.\n\nDetails\nPartial costs\nComputation\nMain articles: Brain performance in FLOPS, Current FLOPS prices, Trends in the costs of computing\nFloating-point Operations Per Second (FLOPS) is a measure of computer performance that emphasizes computing capacity. The human brain is estimated to perform between 10^13.5 and 10^25 FLOPS. Hardware currently costs around $3 x 10^-9/FLOPS, or $7 x 10^-14/FLOPS-hour. This makes the current price of hardware which has equivalent computing capacity to the human brain between $1 x 10^5 and $3 x 10^16, or $2/hour-$700bn/hour if hardware is used for five years.\nThe price of FLOPS has probably decreased by a factor of ten roughly every four years in the last quarter of a century.\nCommunication\nMain articles: Brain performance in TEPS, The cost of TEPS \nTraversed Edges Per Second (TEPS) is a measure of computer performance that emphasizes communication capacity. The human brain is estimated to perform at 0.18 – 6.4 x 10^5 GTEPS. Communication capacity costs around $11,000/GTEPS or $0.26/GTEPS-hour in 2015, when amortized over five years and combined with energy costs. This makes the current price of hardware which has equivalent communication capacity to the human brain around $200M – $7B in total, or $4,700 – $170,000/hour including energy costs.\nWe estimate that the price of TEPS falls by a factor of ten every four years, based on the relationship between TEPS and FLOPS.\nInformation storage\nMain articles: Information storage in the brain, Costs of information storage, Costs of human-level information storage\nComputer memory comes in primary and secondary forms. Primary memory (e.g. RAM) is intended to be accessed frequently, while secondary memory is slower to access but has higher capacity. Here we estimate the secondary memory requirements of the brain. The human brain is estimated to store around 10-100TB of data. Secondary storage costs around $30/TB in 2015. 
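These figures combine into the totals and hourly rates quoted on this page roughly as follows (a minimal back-of-the-envelope sketch; it assumes the page's own five-year replacement period and continuous use):

```python
# Rough cost of brain-equivalent secondary storage, amortized over five years.
STORAGE_TB = (10, 100)              # estimated brain storage, low and high, in TB
PRICE_PER_TB = 30                   # approximate 2015 price, $/TB
HOURS_IN_FIVE_YEARS = 5 * 365 * 24  # ~43,800 hours of continuous use

for tb in STORAGE_TB:
    total_cost = tb * PRICE_PER_TB
    hourly_cost = total_cost / HOURS_IN_FIVE_YEARS
    print(f"{tb} TB: ${total_cost} up front, ~${hourly_cost:.3f}/hour")
# -> 10 TB: $300 up front, ~$0.007/hour
# -> 100 TB: $3000 up front, ~$0.068/hour
```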
This means it costs $300-3,000 for enough storage to store the contents of a human brain, or $0.007-$0.07/hour if hardware is used for five years.\nIn the long run the price of secondary memory has declined by an order of magnitude roughly every 4.6 years. However the rate of decline has slowed so much that prices haven’t substantially dropped since 2011 (as of 2015).\nInterpreting partial costs\nCalculating the total cost of hardware that is relevantly equivalent to the brain is not as simple as adding the partial costs as listed. FLOPS and TEPS are measures of different capabilities of the same hardware, so if you pay for TEPS at the aforementioned prices, you will also receive FLOPS.\nThe above list is also not exhaustive: there may be substantial hardware costs that we haven’t included.", "url": "https://aiimpacts.org/costs-of-human-level-hardware/", "title": "Costs of human-level hardware", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-07-26T23:21:54+00:00", "paged_url": "https://aiimpacts.org/feed?paged=19", "authors": ["Katja Grace"], "id": "f20e2d54243a2976601600ff0e17798c", "summary": []} {"text": "Brain performance in FLOPS\n\nThe computing power needed to replicate the human brain’s relevant activities has been estimated by various authors, with answers ranging from 10^12 to 10^28 FLOPS.\nDetails\nNotes\nWe have not investigated the brain’s performance in FLOPS in detail, nor substantially reviewed the literature since 2015. This page summarizes others’ estimates that we are aware of, as well as the implications of our investigation into brain performance in TEPS.\nEstimates\nSandberg and Bostrom 2008: estimates and review\nSandberg and Bostrom project the processing required to emulate a human brain at different levels of detail.1 For the three levels that their workshop participants considered most plausible, their estimates are 10^18, 10^22, and 10^25 FLOPS.\nThey also summarize other brain compute estimates, as shown below (we reproduce their Table 10).2 We have not reviewed these estimates, and some do not appear superficially credible to us.\n\n\nDrexler 2018\nDrexler looks at multiple comparisons between narrow AI tasks and neural tasks, and finds that they suggest the ‘basic functional capacity’ of the human brain is less than one petaFLOPS (10^15).3\nConversion from brain performance in TEPS\nAmong a small number of computers we compared4, FLOPS and TEPS seem to vary proportionally, at a rate of around 1.7 GTEPS/TFLOP. We also estimate that the human brain performs around 0.18 – 6.4 * 10^14 TEPS. 
Thus if the FLOPS:TEPS ratio in brains is similar to that in computers, a brain would perform around 0.9 – 33.7 * 10^16 FLOPS.5 We have not investigated how similar this ratio is likely to be.\nNotes", "url": "https://aiimpacts.org/brain-performance-in-flops/", "title": "Brain performance in FLOPS", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-07-26T19:33:41+00:00", "paged_url": "https://aiimpacts.org/feed?paged=19", "authors": ["Katja Grace"], "id": "9be3dfb8600fb1cef017bacd427997d1", "summary": []} {"text": "Index of articles about hardware\n\nHardware in terms of computing capacity (FLOPS and MIPS)\nBrain performance in FLOPS\n2019 recent trends in GPU price per FLOPS\nElectrical efficiency of computing\n2018 price of performance by Tensor Processing Units\n2017 trend in the cost of computing\nPrice-performance trend in top supercomputers\n2017 FLOPS prices\nTrends in the cost of computing\nWikipedia history of GFLOPS costs\nHardware in terms of communication capacity (TEPS)\nBrain performance in TEPS (includes the cost of brain-level TEPS performance on current hardware)\nThe cost of TEPS (includes current costs, trends and relationship to other measures of hardware price)\nInformation storage\nInformation storage in the brain\nCosts of information storage\nCosts of human-level information storage\nOther\nCosts of human-level hardware\n2019 recent trends in Geekbench score per CPU price\nTrends in DRAM price per gigabyte\nEffect of marginal hardware on artificial general intelligence\nResearch topic: hardware, software and AI\nIndex of articles about hardware\nRelated blog posts\nPreliminary prices for human level hardware (4 April 2015)\nA new approach to predicting brain-computer parity (7 May 2015)\nTime flies when robots rule the earth (28 July 2015)", "url": "https://aiimpacts.org/index-of-hardware-articles/", "title": "Index of articles about hardware", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-07-26T17:38:34+00:00", "paged_url": "https://aiimpacts.org/feed?paged=19", "authors": ["Katja Grace"], "id": "5b7cc20b46de642be3cdbe6ddf5fd5f6", "summary": []} {"text": "Cost of human-level information storage\n\nIt costs roughly $300-$3000 to buy enough storage space to store all information contained by a human brain.\nSupport\nThe human brain probably stores around 10-100TB of data. Data storage costs around $30/TB. Thus it costs roughly $300-$3000 to buy enough storage space to store all information contained by a human brain.\nIf we suppose that one wants to replace the hardware every five years, this is $0.007-$0.07/hour.1\nFor reference, we have estimated that the computing hardware and electricity required to do the computation the brain does would cost around $4,700 – $170,000/hour at present (using an estimate based on TEPS, and assuming computers last for five years). 
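The comparison drawn just below between storage and computation costs can be reproduced from these hourly figures (a minimal sketch, using only the numbers above):

```python
# Ratio of hourly storage cost to hourly TEPS-based computation cost.
STORAGE_PER_HOUR = (0.007, 0.07)     # $/hour for brain-equivalent storage
COMPUTE_PER_HOUR = (4_700, 170_000)  # $/hour for brain-equivalent TEPS hardware

low_ratio = STORAGE_PER_HOUR[0] / COMPUTE_PER_HOUR[1]   # ~4e-8
high_ratio = STORAGE_PER_HOUR[1] / COMPUTE_PER_HOUR[0]  # ~1.5e-5
print(low_ratio, high_ratio)
# Up to rounding, these are the 'twenty millionth' and 'seventy thousandth'
# bounds quoted below.
```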
Estimates based on computation rather than communication capabilities (like TEPS) appear to be spread between $3/hour and $1T/hour.2 On the TEPS-based estimate then, the cost of replicating the brain’s information storage using existing hardware would currently be between a twenty millionth and a seventy thousandth of the cost of replicating the brain’s computation using existing hardware.", "url": "https://aiimpacts.org/cost-of-human-level-information-storage/", "title": "Cost of human-level information storage", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-07-23T20:33:08+00:00", "paged_url": "https://aiimpacts.org/feed?paged=19", "authors": ["Katja Grace"], "id": "f51b90202a38699afedcad3c1eb9dc87", "summary": []} {"text": "Costs of information storage\n\nPosted 23 July 2015\nCheap secondary memory appears to cost around $0.03/GB in 2015. In the long run the price has declined by an order of magnitude roughly every 4.6 years. However the rate of decline has slowed so much that prices haven’t substantially dropped since 2011 (as of 2015).\nSupport\nCheap secondary memory appears to cost around $0.03/GB in 2015.1\nThe price appears to have declined at an average rate of around an order of magnitude every 4.6 years in the long run, as illustrated in Figures 1 and 2. Figure 1 shows roughly six and a half orders of magnitude in the thirty years between 1985 and 2015, for around an order of magnitude every 4.6 years. Figure 2 shows thirteen orders of magnitude over the sixty years between 1955 and 2015, for exactly the same rate. Both figures suggest the rate has been much slower in the past five years, seemingly as part of a longer term flattening. It appears that prices haven’t substantially dropped since 2011 (as of 2015).\nFigure 1: Historic prices of hard drive space, from Matt Komorowski\nFigure 2: Historical prices of information storage in various formats, from Havard Blok, mostly drawing on John C. McCallum’s data.\n", "url": "https://aiimpacts.org/costs-of-information-storage/", "title": "Costs of information storage", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-07-23T19:55:02+00:00", "paged_url": "https://aiimpacts.org/feed?paged=19", "authors": ["Katja Grace"], "id": "3fd6c3185a7c47f2c13d3d23f9c89f22", "summary": []} {"text": "Information storage in the brain\n\nLast updated 9 November 2020\nThe brain probably stores around 10-100TB of data.\nSupport\nAccording to Forrest Wickman, computational neuroscientists generally believe the brain stores 10-100 terabytes of data.1 He suggests that these estimates are produced by assuming that information is largely stored in synapses, and that each synapse stores around 1 byte. The number of bytes is then simply the number of synapses.\nThese assumptions are simplistic (as he points out). In particular:\n\nsynapses may store more or less than one byte of information on average\nsome information may be stored outside of synapses\nnot all synapses appear to store information\nsynapses do not appear to be entirely independent\n\nWe estimate that there are 1.8-3.2 x 10¹⁴ synapses in the human brain, so according to the procedure Wickman outlines, this suggests that the brain stores around 180-320TB of data. It is unclear from his article whether the variation in the views of computational neuroscientists is due to different opinions on the assumptions stated above, or on the number of synapses in the brain. 
This makes it hard to adjust our estimate well, so our best guess for now is that the brain can store around 10-100TB of data, based on this being the common view among computational neuroscientists.\n", "url": "https://aiimpacts.org/information-storage-in-the-brain/", "title": "Information storage in the brain", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-07-23T16:26:18+00:00", "paged_url": "https://aiimpacts.org/feed?paged=19", "authors": ["Katja Grace"], "id": "d9a64ca33b43b859c10e85f38b9370e8", "summary": []} {"text": "Event: Exercises in Economic Futurism\n\nBy Katja Grace, 15 July 2015\nOn Thursday July 30th Robin Hanson is visiting again, and this time we will be holding an informal workshop on how to usefully answer questions about the future, with an emphasis on economic approaches. We will pick roughly three concrete futurism questions, then think about how to go about answering them together. We hope both to make progress on the questions at hand, and to equip attendees with a wider range of tools for effective futurism.\nTopic suggestions are welcome in the comments, whether or not you hope to come.\nAfternoon tea will be provided.\nDetails summary\nDate: 30 July 2015\nLocation: Berkeley, near College Ave and Ashby (ask for more detail)\nTimetable:\n2pm: Afternoon tea and chatting (it is best to show up somewhere in this hour)\n3pm: Exercises\n7pm: End (and transition into a party at the same location—attendees welcome to stay on)\nWe hope to keep the event to around twenty people, so RSVP required. If you would like to come, write to katja@intelligence.org.\n\nImage: La Sortie de l’opéra en l’an 2000", "url": "https://aiimpacts.org/event-exercises-in-economic-futurism/", "title": "Event: Exercises in Economic Futurism", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-07-15T19:27:03+00:00", "paged_url": "https://aiimpacts.org/feed?paged=19", "authors": ["Katja Grace"], "id": "84979626284c8f2fb5ac1bc56a8126c7", "summary": []} {"text": "Steve Potter on neuroscience and AI\n\nBy Katja Grace, 13 July 2015\nDr Steve Potter\nProf. Steve Potter works at the Laboratory of Neuroengineering in Atlanta, Georgia. I wrote to him after coming across his old article, ‘What can AI get from Neuroscience?’ I wanted to know how neuroscience might contribute to AI in the future: for instance will ‘reverse engineering the brain‘ be a substantial contributor of software for general AI? To shed light on this, I talked to Prof. Potter about how neuroscience has helped AI in the past, how the fields interact now, and what he expects in the future. 
Summary notes on the conversation are here.", "url": "https://aiimpacts.org/steve-potter-on-neuroscience-and-ai/", "title": "Steve Potter on neuroscience and AI", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-07-14T01:50:37+00:00", "paged_url": "https://aiimpacts.org/feed?paged=20", "authors": ["Katja Grace"], "id": "4681f354ab39ce620e45708dbe34394e", "summary": []} {"text": "Conversation with Steve Potter\n\nPosted 13 July 2015\nParticipants\nFigure 1: Professor Steve Potter\n\nProfessor Steve Potter – Associate Professor, Laboratory of NeuroEngineering, Coulter Department of Biomedical Engineering, Georgia Institute of Technology\nKatja Grace – Machine Intelligence Research Institute (MIRI)\n\nNote: These notes were compiled by MIRI and give an overview of the major points made by Professor Steve Potter.\nSummary\nKatja Grace spoke with Professor Steve Potter of Georgia Institute of Technology as part of AI Impacts’ investigation into the implications of neuroscience for artificial intelligence (AI). Conversation topics included how neuroscience now contributes to AI and how it might contribute in the future.\nHow has neuroscience helped AI in the past?\nProfessor Potter found it difficult to think of examples where neuroscience has helped with higher level ideas in AI. Some elements of cognitive science have been implemented in AI, but these may not be biologically based. He described two broad instances of neuroscience-inspired projects.\nSubsumption architecture\nPast work in AI has focused on disembodied computers with little work in robotics. Researchers now understand that AI does not need to be centralized; it can also take on physical form. Subsumption architecture is one way that robotics has advanced. This involves the coupling of sensory information to action selection. For example, Professor Rodney Brooks at MIT has developed robotic legs that respond to certain sensory signals. These legs also send messages to one another to control their movement. Professor Potter believes that this work could have been based on neuroscience, but it is not clear how much Professor Brooks was inspired by neuroscience while working on this project; the idea may have come to him independently.\nNeuromorphic engineering\nThis type of engineering employs properties of biological nervous systems in neural system AI, such as perception and motor control. One aspect of brain function can be imitated with silicon chips through pulse-coding, where analog signals are sent and received in tiny pulses. An application for this is in camera development by mimicking pulse-coded signals between the brain and the retina.\nHow is neuroscience contributing to AI today?\nAlthough neuroscience has not assisted AI development much in the past, Professor Potter has confidence that this intersection has considerable potential. This is because the brain works well in areas where AI falls short. For example, AI needs to improve how it works in real time in the real world. Self-driving cars may be improved through examining how a model organism, such as a bee, would respond to an analogous situation. Professor Potter believes it would be worthwhile research to record how humans use their brains while driving. 
Brain algorithms developed from this could be implemented into car design.\nCurrent work at the intersection of neuroscience and AI include the following:\nArtificial neural networks\nMost researchers at the intersection of AI and neuroscience are examining artificial neural networks, and might describe their work as ‘neural simulations’. These networks are a family of statistical learning models that are inspired by biological neural networks. Hardware in this discipline includes neuromorphic chips, while software includes work in pattern recognition. This includes handwriting recognition and finding military tanks in aerial photographs. The translation of these networks into useful products for both hardware and software applications has been slow.\nHybrots\nProfessor Potter has helped develop hybrots, which are hybrid living tissue interfaced with robotic machines: robots controlled by neurons. Silent Barrage was an early hybrot that drew on paper attached to pillars. Video was taken of people viewing the Silent Barrage hybrots. This data was transmitted back to Prof. Potter’s lab, where it was used to trigger electrical stimulation in the living brain of the system. This was a petri dish interfaced to a culture of rat cortical neurons. This work is currently being expanded to include more types of hybrots. In one the control will be by living neurons, while the other will be controlled by a simulated neural network.\nMeart (MultiElectrode Array Art) was an earlier hybrot. Controlled by a brain composed of rat neuron cells, it used robotic arms to draw on paper. It never progressed past the toddler stage of scribbling.\nHow is neuroscience likely to help AI in the future?\nA particular line of research in neuroscience that is likely to help with AI is the concept of delays. Computer design is often optimized to reduce the amount of time between command and execution. The brain though may take milliseconds longer to respond. However delays in the brain were evolved to respond to the timing of the real world and are a useful part of the brain’s learning process.\nNeuroscience probably also has potential to help AI in searching databases. It appears that the brain has methods for this that are completely unlike those used in computers, though we do not yet know what the brain’s methods are. One example given of the brain’s impressive abilities here is that Professor Potter can meet a new person and instantly be confident that he has never seen that person before.\nHow long will it take to duplicate human intelligence?\nIt will be hard to say when this has been achieved; success is happening at different rates for different applications. The future of neuroscience in AI will most likely involve taking elements of neuroscience and applying them to AI; it is unlikely that there will be a wait until we have a good understanding of the brain, then an export of that knowledge complete to AI.\nProfessor Potter greatly respects Ray Kurzweil, but does not think that he has an in depth knowledge of neuroscience. Professor Potter thinks the brain is much more complex than Kurzweil appears to believe, and that ‘duplicating’ human intelligence will take far longer than Kurzweil predicts. In Professor Potter’s consideration, it will take over a hundred years to develop a robot butler that can convince you that it is human.\nChallenges to progress\nLack of collaboration\nNeuroscience-inspired AI progress has been hampered because researchers across neuroscience and AI seldom collaborate with one another. 
This may be from disinterest or limited understanding of each other’s fields. Neuroscientists are not generally interested in the goal of creating human-level artificial intelligence. Professor Potter believes that of the roughly 30,000 people who attend the Society for Neuroscience, approximately 20 people want this. Most neuroscientists, for example, want to learn how something works instead of learning how it can be applied (e.g. learning how the auditory system works instead of developing a new hearing aid). If more people saw benefits in applying neuroscience to AI and in particular human-level AI, there would be greater progress. However, the scale is hard to predict. There is the potential for very much more rapid progress. For researchers to move their projects in this direction, the priorities of funding agencies would first have to move, as these effectively dictate which projects move forward.\nFunding\nFunding for work at the intersection of neuroscience and AI may be hard to find. The National Institutes of Health (NIH) funds only health-related work and has not funded AI projects. The National Science Foundation (NSF) may not think the work fits its requirement of being basic science research; it may be too applied. NSF, though, is more open-minded to funding research on AI than NIH is. The military is also interested in AI research. Outside of the U.S., the European Union (EU) funds cross-disciplinary work in neuroscience and AI.\nNational Science Foundation (NSF) funding\nNSF had a call for radical proposals, from which Professor Potter received a four-year-long grant to apply neuroscience to electrical grid systems. Collaborators included a power engineer and people studying neural networks. The group was interested in addressing the U.S.’s large and uneven power supply and usage. The electrical grid has become increasingly difficult to control because of geographically varying differences in input and output.\nProfessor Potter believes that if people in neuroscience, AI, neural networks, and computer design talked more, this would bring progress. However, there were some challenges with this collaborative electrical grid systems project that needed to be addressed. For example, the researchers needed to spend considerable time educating one another about their respective fields. It was also difficult to communicate with collaborators across the country; NSF paid for only one meeting per year, and the nuances of in-person interaction seem important for bringing together such diverse groups of people and reaping the benefits of their creative communication.\nOther people working in this field\n\nHenry Markram – Professor, École Polytechnique Fédérale de Lausanne, Laboratory of Neural Microcircuitry. Using EU funding, he creates realistic computer models of the brain, one piece at a time.\nRodney Douglas – Professor Emeritus, University of Zurich, Institute of Neuroinformatics. He is a neuromorphic engineer who worked on emulated brain function.\nCarver Mead – Gordon and Betty Moore Professor of Engineering and Applied Science Emeritus, California Institute of Technology. He was a founding father of neuromorphic engineering.\nRodney Brooks – Panasonic Professor of Robotics Emeritus, Massachusetts Institute of Technology (MIT). He was a pioneer in studying distributed intelligence and developed subsumption architecture.\nAndy Clark – Professor of Logic and Metaphysics, University of Edinburgh. 
He does work on embodiment, artificial intelligence, and philosophy.\nJose Carmena – Associate Professor of Electrical Engineering and Neuroscience, University of California-Berkeley. Co-Director of the Center of Neural Engineering and Prostheses, University of California-Berkeley, University of California-San Francisco. He has researched the impact of electrical stimulation on sensorimotor learning and control in rats.\nGuy Ben-Ary – Manager, University of Western Australia, CELLCentral in the School of Anatomy and Human Biology. He is an artist and researcher who uses biologically related technology in his work. He worked in collaboration with Professor Potter on Silent Barrage.\nWolfgang Maass – Professor of Computer Science, Graz University of Technology. He is doing research on artificial neural networks.\nThad Starner – Assistant Professor, Georgia Institute of Technology, College of Computing. He applies biological concepts into developing wearable computing devices.\nJennifer Hasler – Professor, Georgia Institute of Technology, Bioengineering and Electronic Design and Applications. She has studied neuromorphic hardware.\n", "url": "https://aiimpacts.org/conversation-with-steve-potter/", "title": "Conversation with Steve Potter", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-07-14T01:49:36+00:00", "paged_url": "https://aiimpacts.org/feed?paged=20", "authors": ["Katja Grace"], "id": "c9ef36b36e25048ebd0e7e5c94dd1d22", "summary": []} {"text": "New funding for AI Impacts\n\nBy Katja Grace, 4 July 2015\nAI Impacts has received two grants! We are grateful to the Future of Humanity Institute (FHI) for $8,700 to support work on the project until September 2015, and the Future of Life Institute (FLI) for $49,310 for another year of work after that. Together this is enough to have a part time researcher until September 2016, plus a little extra for things like workshops and running the website.\nWe are big fans of FHI and FLI, and are excited to be working alongside them.\nThe FLI grant was part of the recent contest which distributed around $7M funding from Elon Musk and the Open Philanthropy Project to projects designed to keep AI robust and beneficial. The full list of projects to be funded is here. You can see part of our proposal here.\nThis funding means that AI Impacts is no longer in urgent need of support. Further donations will likely go to additional research through contract work, guest research, short term collaborations, and outsourceable data collection.\nMany thanks to those whose support—in the form of both funding and other feedback—has brought AI Impacts this far.", "url": "https://aiimpacts.org/new-funding-for-ai-impacts/", "title": "New funding for AI Impacts", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-07-04T22:25:54+00:00", "paged_url": "https://aiimpacts.org/feed?paged=20", "authors": ["Katja Grace"], "id": "9679ec32aa8f0120c61514d2b8648ec9", "summary": []} {"text": "Update on all the AI predictions\n\nBy Katja Grace, 5 June 2015\nFor the last little while, we’ve been looking into a dataset of individual AI predictions, collected by MIRI a couple of years ago. We also previously gathered all the surveys about AI predictions that we could find. Together, these are all the public predictions of AI that we know of. 
So we just wrote up a quick summary of what we have so far.\nHere’s a picture of most of the predictions, from our summary:\nFigure 1: Predictions from the MIRI dataset (red = maxIY ≈ ‘AI more likely than not after …’, and green = minPY ≈ ‘AI less likely than not before …’) and surveys. This figure excludes one prediction of 3012 made in 2012, and the Hanson survey, which doesn’t ask directly about prediction dates.\nRecent surveys seem to pretty reliably predict AI between 2040 and 2050, as you can see. The earlier surveys which don’t fit this trend also had less uniform questions, whereas the last six surveys ask about the year in which there is a 50% chance that (something like) human-level AI will exist. The entire set of individual predictions has a median somewhere in the 2030s, depending on how you count. However for predictions made since 2000, the median is 2042 (minPY), in line with the surveys. The surveys that ask also consistently get median dates for a 10% chance of AI in the 2020s.\nThis consistency seems interesting, and these dates seem fairly soon. If we took these estimates seriously, and people really meant at least ‘AI that could replace most humans in their jobs’, the predictions of ordinary AI researchers seem pretty concerning. 2040 is not far off, and the 2020s seem too close for us to be prepared to deal with moderate chances of AI, at the current pace.\nWe are not sure what to make of these predictions. Predictions about AI are frequently distrusted, though often alongside complaints that seem weak to us. For instance that people are biased to predict AI twenty years in the future, or just before their own deaths; that AI researchers have always been very optimistic and continually proven wrong; that experts and novices make the same predictions (Edit (6/28/2016): now found to be based on an error); or that failed predictions of the past look like current predictions. There really do seem to be selection biases, from people who are optimistic about AGI working in the field for instance, and from shorter predictions being more published. However there are ways to avoid these.\nThere seem to be a few good reasons to distrust these predictions however. First, it’s not clear that people can predict these kinds of events well in any field, at least without the help of tools. Relatedly, it’s not clear what tools and other resources people used in the creation of these predictions. Did they model the situation carefully, or just report their gut reactions? My guess is near the ‘gut reaction’ end of the spectrum, based on looking for reasoning and finding only a little. Often gut reactions are reliable, but I don’t expect them to be so, on their own, in an area such as forecasting novel and revolutionary technologies.\nThirdly, phrases like, ‘human-level AI arrives’ appear to stand for different events for different people. Sometimes people are talking about almost perfect human replicas, sometimes software entities that can undercut a human at work without resembling them much at all, sometimes human-like thinking styles which are far from being able to replace us. Sometimes they are talking about human-level abilities at human cost, sometimes at any cost. Sometimes consciousness is required, sometimes poetry is, sometimes calculating ability suffices. Our impressions from talking to people are that ‘AI predictions’ mean a wide variety of things. 
So the collection of predictions is probably about different events, which we might reasonably expect to happen at fairly different times. Before trusting experts here, it seems key to check we know what they are talking about.\nGiven all of these things, I don’t trust these predictions a huge amount. However I expect they are somewhat informative, and there are not a lot of good sources to trust at present.\nThe next things I’d like to know in this area:\n\nWhat do experts actually believe about human-level AI timelines, if you check fairly thoroughly that they are talking about what you think they are talking about, and aren’t making obviously different assumptions about other matters?\nHow reliable are similar predictions? For instance, predictions of novel technologies, predictions of economic upheaval, predictions of disaster?\nWhy do the results of the Hanson survey conflict with the other surveys?\nHow do people make the predictions they make? (e.g. How often are they thinking of hardware trends? Using intuition? Following the consensus of others?)\nWhy are AGI researchers so much more optimistic than AI researchers, and are AI researchers so much more optimistic than others?\nWhat disagreements between AI researchers produce their different predictions?\nWhat do AI researchers know that informs their predictions that people outside the field (like me) do not know? (What do they know that doesn’t inform their predictions, but should?)\n\nHopefully we’ll be looking more into some of these things soon.\n ", "url": "https://aiimpacts.org/update-on-all-the-ai-predictions/", "title": "Update on all the AI predictions", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-06-06T05:27:59+00:00", "paged_url": "https://aiimpacts.org/feed?paged=20", "authors": ["Katja Grace"], "id": "d393167b12ec47796602becc18a8a50f", "summary": []} {"text": "Predictions of Human-Level AI Timelines\n\nNote: This page is out of date. See an up-to-date version of this page on our wiki.\nUpdated 5 June 2015\nWe know of around 1,300 public predictions of when human-level AI will arrive, of varying levels of quality. These include predictions from individual statements and larger surveys. Median predictions tend to be between 2030 and 2055 for predictions made since 2000, across different subgroups of predictors.\nDetails\nThe landscape of AI predictions\nPredictions of when human-level AI will be achieved exist in the form of surveys and public statements (e.g. in articles, books or interviews). Some statements backed by analysis are discussed here. Many more statements have been collected by MIRI. Figure 1 illustrates almost all of the predictions we know about, though most are aggregated there into survey medians. Altogether, we know of around 1,300 public predictions of when human-level AI will arrive, though 888 are from a single informal online poll. We know of ten surveys that address this question directly (plus a set of interviews which we sometimes treat as a survey but here count here as individual statements, and a survey which asks about progress so far as a fraction of what is required for human-level AI). Only 65 predictions that we know of are not part of surveys.\nSummary of findings\nFigure 1: Predictions from the MIRI dataset (red = maxIY ≈ ‘AI more likely than not after …’, and green = minPY ≈ ‘AI less likely than not before …’) and surveys. 
This figure excludes one prediction of 3012 made in 2012, and the Hanson survey, which doesn’t ask directly about prediction dates.\nRecent surveys tend to have median dates between 2040 and 2050. All six of the surveys which ask for the year in which human-level AI will have arrived with 50% probability produce medians in this range (not including Kruel’s interviews, which have a median of 2035, and are counted in the statements here). The median prediction in statements is 2042, though predictions of AGI researchers and futurists have medians in the early 2030s. Surveys give median estimates for a 10% chance of human-level AI in the 2020s. We have not attempted to adjust these figures for biases.\nImplications\nExpert predictions about AI timelines are often considered uninformative. Evidence that predictions are less informative than in other messy fields appears to be weak. We have not evaluated baseline prediction accuracy in such fields however. We expect survey results and predictions from those further from AGI are more accurate than other sources, due to selection biases. The differences between these sources appear to be a small number of decades.", "url": "https://aiimpacts.org/predictions-of-human-level-ai-timelines/", "title": "Predictions of Human-Level AI Timelines", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-06-05T15:36:50+00:00", "paged_url": "https://aiimpacts.org/feed?paged=20", "authors": ["Katja Grace"], "id": "57078ba8df8e862e44a6767c6c7297df", "summary": []} {"text": "Accuracy of AI Predictions\n\nUpdated 4 June 2015\nIt is unclear how informative we should expect expert predictions about AI timelines to be. Individual predictions are undoubtedly often off by many decades, since they disagree with each other. However their aggregate may still be quite informative. The main potential reason we know of to doubt the accuracy of expert predictions is that experts are generally poor predictors in many areas, and AI looks likely to be one of them. However we have not investigated how accurate ‘poor’ is, or whether AI really is such a case.\nPredictions of AI timelines are likely to be biased toward optimism by roughly decades, especially if they are voluntary statements rather than surveys, and especially if they are from populations selected for optimism. We expect these factors account for less than a decade and around two decades’ difference in median predictions respectively.\nSupport\nConsiderations regarding accuracy\nA number of reasons have been suggested for distrusting predictions about AI timelines:\n\nModels of areas where people predict well\nResearch has produced a characterization of situations where experts predict well and where they do not. See table 1 here. AI appears to fall into several classes that go with worse predictions. However we have not investigated this evidence in depth, or the extent to which these factors purportedly influence prediction quality.\nExpert predictions are generally poor\nExperts are notoriously poor predictors. However our impression is that this is because of their disappointing inability to predict some things well, rather than across the board failure. For instance, experts can predict the Higgs boson’s existence, outcomes of chemical reactions, and astronomical phenomena. 
So the question falls back to where AI falls in the spectrum of expert predictability, discussed in the last point.\nDisparate predictions\nOne sign that AI predictions are not very accurate is that they differ over a range of a century or so. This strongly suggests that many individual predictions are inaccurate, though not that the aggregate distribution is uninformative.\nSimilarity of old and new predictions\nOlder predictions seem to form a fairly similar distribution to more recent predictions, except for very old predictions. This is weak evidence that new predictions are not strongly affected by evidence, and are therefore more likely to be inaccurate.\nSimilarity of expert and lay opinions\nArmstrong and Sotala found that expert and non-expert predictions look very similar.1 This finding is in doubt at the time of writing, due to errors in the analysis. If it were true, this would be weak evidence against experts having relevant expertise, since if they did, this might cause a difference with the opinions of lay-people. Note that it may also not, if the laypeople go to experts for information.\nPredictions are about different things and often misinterpreted\nComments made around predictions of human-level AI suggest that predictors are sometimes thinking about different events as ‘AI arriving’.2 Even when they are predictions about the same event, ‘prediction’ can mean different things. One person might ‘predict’ the year when they think human-level AI is more likely than not, while another ‘predicts’ the year that AI seems almost certain.\n\nThis list is not necessarily complete.\nPurported biases\nA number of biases have been posited to affect predictions of human-level AI:\n\nSelection biases from optimistic experts\nBecoming an expert is probably correlated with independent optimism about the field, and experts make most of the credible predictions. We expect this to push median estimates earlier by less than a few decades.\nBiases from short-term predictions being recorded\nThere are a few reasons to expect recorded public predictions to be biased toward shorter timescales. Overall these probably make public statements less than a decade more optimistic.\nMaes-Garreau law\nThe Maes-Garreau law is a posited tendency for people to predict important technologies not long before their own likely death. It probably doesn’t afflict predictions of human-level AI substantially.\nFixed period bias\nThere is a stereotype that people tend to predict AI in 20-30 years. There is weak evidence of such a tendency around 20 years, though little evidence that this is due to a bias (that we know of).\n\nConclusions\nAI appears to exhibit several qualities characteristic of areas that people are not good at predicting. Individual AI predictions appear to be inaccurate by many decades in virtue of their disagreement. Other grounds for particularly distrusting AI predictions seem to offer weak evidence against them, if any. Our current guess is that AI predictions are less reliable than many kinds of prediction, though still potentially fairly informative.\nBiases toward early estimates appear to exist, as a result of optimistic people becoming experts, and optimistic predictions being more likely to be published for various reasons. 
These are the only plausible substantial biases we know of.", "url": "https://aiimpacts.org/accuracy-of-ai-predictions/", "title": "Accuracy of AI Predictions", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-06-04T08:47:48+00:00", "paged_url": "https://aiimpacts.org/feed?paged=20", "authors": ["Katja Grace"], "id": "6662a6f6aa618183b30708443c62c94d", "summary": []} {"text": "Publication biases toward shorter predictions\n\nWe expect predictions that human-level AI will come sooner to be recorded publicly more often, for a few reasons. Public statements are probably more optimistic than surveys because of such effects. The difference appears to be less than a decade, for median predictions.\nSupport\nPlausible biases\nBelow we outline five reasons for expecting earlier predictions to be stated and publicized more than later ones. We do not know of compelling reasons to expect longer term predictions to be publicized more, unless they are so distant as to also fit under the first bias discussed below.\nBias from not stating the obvious\nIn many circumstances, people are disproportionately likely to state beliefs that they think others do not hold. For example, “homeopathy works” gets more Google hits than “homeopathy doesn’t work”, though this probably doesn’t reflect popular beliefs on the matter. Making public predictions seems likely to be a circumstance with this character. Predictions are often made in books and articles which are intended to be interesting and surprising, rather than by people whose job it is to report on AI forecasts regardless of how far away they are. Thus we expect people with unusual positions on AI timelines to be more likely to state them. This should produce a bias toward both very short and very long predictions being published.\nBias from the near future being more concerning\nArtificial intelligence will arguably be hugely important, whether as a positive or negative influence on the world. Consequently, people are motivated to talk about its social implications. The degree of concern motivated by impending events tends to increase sharply with proximity to the event. Thus people who expect human-level AI in a decade will tend to be more concerned about it than people who expect human-level AI to take a century, and so will talk about it more. Similarly, publishers are probably more interested in producing books and articles making more concerning claims.\nBias from ignoring reverse predictions\nIf you search for people predicting AI by a given date, you can get downwardly biased estimates by taking predictions from sources where people are asked about certain specific dates, and respond that AI will or will not have arrived by that date. If people respond ‘AI will arrive by X’ and ‘AI will not arrive by X’ as appropriate, the former can look like ‘predictions’ while the latter do not.\nThis bias affected some data in the MIRI dataset, though we have tried to minimize it now. For example, this bet (“By 2029 no computer – or “machine intelligence” – will have passed the Turing Test.”) is interpreted in the above collection as Kurzweil making a prediction, but not as Kapor making a prediction. It also contained several estimates of 70 years, taken from a group who appear to have been asked whether AI would come within 70 years, much later, or never. The ‘within 70 years’ estimates are recorded as predictions, while the others ignored, producing ’70 years’ estimates, almost regardless of the overall opinions of the group surveyed. 
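To see how strong this effect can be, here is a toy simulation (a sketch with invented numbers, not the MIRI data): everyone in a hypothetical population holds some true estimate of years until human-level AI, each person is asked only whether AI will arrive within a given horizon, and only the 'yes' answers are written down, each as a prediction of that horizon.

```python
import random
import statistics

random.seed(0)

# Hypothetical population: each person's true median estimate of years until
# human-level AI, spread widely between 10 and 200 years.
true_estimates = [random.uniform(10, 200) for _ in range(1000)]

def recorded_predictions(horizon):
    """Record only the people who answer 'yes, AI within `horizon` years',
    writing each of them down as a prediction of `horizon` years."""
    return [horizon for estimate in true_estimates if estimate <= horizon]

for horizon in (30, 70, 150):
    recorded = recorded_predictions(horizon)
    print(f"asked about {horizon} years: {len(recorded)} 'predictions' recorded, "
          f"median recorded prediction = {statistics.median(recorded):.0f} years; "
          f"true population median = {statistics.median(true_estimates):.0f} years")
```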
In a population of people with a range of beliefs, this method of recording predictions would produce ‘predictions’ largely determined by which year was asked about.\nBias from unavoidably ignoring reverse predictions\nThe aforementioned bias arises from an error that can be avoided in recording data, where predictions and reverse predictions are available. However similar types of bias may exist more subtly. Such bias could arise where people informally volunteer opinions in a discussion about some period in the future. People with shorter estimates who can make a positive statement might feel more as though they have something to say, while those who believe there will not be AI at that time do not. For instance, suppose ten people write books about the year 2050, and each predicts AI in a different decade in the 21st Century. Those who predict it prior to 2050 will mention it, and be registered as a prediction of before 2050. Those who predict it after 2050 will not mention it, and not be registered as making a prediction. This could also be hard to avoid if predictions reach you through a filter of others registering them as predictions.\nSelection bias from optimistic experts\nMain article: Selection bias from optimistic experts\nSome factors that cause people to make predictions about AI are likely to correlate with expectations of human-level AI arriving sooner. Experts are better positioned to make credible predictions about their field of expertise than more distant observers are. However since people are more likely to join a field if they are more optimistic about progress there, we might expect their testimony to be biased toward optimism.\nMeasuring these biases\nThese forms of bias (except the last) seem to us as if they should be much weaker in survey data than voluntary statements, for the following reasons:\n\nSurveys come with a default of answering questions, so one does not need a strong reason or social justification for doing so (e.g. having a surprising claim, or wanting to elicit concern).\nOne can assess whether a survey ignores reverse predictions, and there appears to be little risk of invisible reverse predictions.\nParticipation in surveys is mostly determined before the questions are viewed, for a large number of questions at once. This allows less opportunity for views on the question to affect participation.\nParticipation in surveys is relatively cheap, so people who care little about expressing any particular view are likely to participate for reasons of orthogonal incentives, whereas costly communications (such as writing a book) are likely to be sensible only for those with a strong interest in promoting a specific message.\nParticipation in surveys is usually anonymous, so relatively unsatisfactory for people who particularly want to associate with a specific view, further aligning the incentives of those who want to communicate with those who don’t care.\nMuch larger fractions of people participate in surveys when requested than volunteer predictions in highly publicized arenas, which lessens the possibility for selection bias.\n\nWe think publication biases such as those described here are reasonably likely on theoretical grounds. We are also not aware of other reasons to expect surveys and statements to differ in their optimism about AI timelines. Thus we can compare the predictions of statements and surveys to estimate the size of these biases. 
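The comparison itself is straightforward; here is a minimal sketch of the calculation, using invented prediction dates rather than the actual survey and statement data:

```python
import statistics

# Invented minPY-style prediction years, standing in for the real datasets.
statement_predictions = [2030, 2033, 2037, 2045, 2050, 2062, 2100]
survey_medians = [2040, 2042, 2050, 2050, 2056, 2062]

statement_median = statistics.median(statement_predictions)
survey_median = statistics.median(survey_medians)

# A positive gap means statements are earlier (more optimistic) than surveys,
# which is the direction a publication bias would push them.
print(f"median statement prediction: {statement_median}")
print(f"median survey prediction:    {survey_median}")
print(f"estimated bias (survey minus statement): {survey_median - statement_median} years")
```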
Survey data appears to produce median predictions of human-level AI somewhat later than similar public statements do: less than a decade, at a very rough estimate. Thus we think some combination of these biases probably exist, and introduce less than a decade of error to median estimates.\nImplications\nAccuracy of AI predictions: AI predictions made in statements are probably biased toward being early, by less than a decade. This suggests both that predictions overall are probably slightly earlier than they would be otherwise, and surveys should be trusted more relative to statements (though there may be other considerations there).\nCollecting data: When collecting data about AI predictions, it is important to avoid introducing bias by recording opinions that AI is before some date while ignoring opinions that it is after that date.\nMIRI dataset: The earlier version of the MIRI dataset is somewhat biased due to ignoring reverse predictions, however this has been at least partially resolved.", "url": "https://aiimpacts.org/short-prediction-publication-biases/", "title": "Publication biases toward shorter predictions", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-05-29T21:46:50+00:00", "paged_url": "https://aiimpacts.org/feed?paged=20", "authors": ["Katja Grace"], "id": "65e5cea398b4cde3458e3ce8e82b29b7", "summary": []} {"text": "Selection bias from optimistic experts\n\nExperts on AI probably systematically underestimate time to human-level AI, due to a selection bias. The same is more strongly true of AGI experts. The scale of such biases appears to be decades. Most public AI predictions are from AI and AGI researchers, so this bias is relevant to interpreting these predictions.\nDetails\nWhy we expect bias\nWe can model a person’s views on AI timelines as being influenced both by their knowledge of AI and other somewhat independent factors, such as their general optimism and their understanding of technological history. People who are initially more optimistic about progress in AI seem more likely to enter the field of AI than those who are less so. Thus we might expect experts in AI to be selected for being optimistic, for reasons independent of their expertise. Similarly, AI researchers presumably enter the subfield of AGI more if they are optimistic about human-level intelligence being feasible soon.\nThis means expert predictions should tend to be more optimistic than they would if they were made by random people who became well informed, and thus are probably overall too optimistic (setting aside any other biases we haven’t considered).\nThis reason to expect bias only applies to the extent that predictions are made based on personal judgments, rather than explicit procedures that can be verified to avoid such biases. However predictions in AI appear to be very dependent on such judgments. Thus we expect some bias toward earlier predictions from AI experts, and more so from AGI experts. How large such biases might be is unclear however.\nEmpirical evidence for bias\nAnalysis of the MIRI dataset supports a selection bias existing. Median people working in AGI are around two decades more optimistic than median AI researchers from outside AGI. Those in AI are more optimistic again than ‘others’, and futurists are slightly more optimistic than even AGI researchers, though these are less clear due to small and ambiguous samples. 
In sum, the groups do make different predictions in the directions that we would expect as a result of such bias.\nHowever it is hard to exclude expertise as an explanation for these differences, so this does not strongly imply that there are biases. There could also be biases that are not caused by selection effects, such as wishful thinking, planning fallacy, or self-serving bias. There may also be other plausible explanations we haven’t considered.\nSince there are several plausible reasons for the differences we see here, and few salient reasons to expect effects in the opposite direction (expertise could go either way), the size of the selection biases in question are probably at most as large as the gaps between the predictions of the groups. That is, roughly two decades between AI and AGI researchers, and another several decades between AI researchers and others. Part of this span should be a bias of the remaining group toward being too pessimistic, but in both cases the remaining groups are much larger than the selected group, so most of the bias should be in the selected group.\nEffects of group biases on predictions\nPeople being selected into groups such as ‘AGI researchers’ based on their optimism does not in itself introduce a bias. The problem arises when people from different groups start making different numbers of predictions. In practice, they do. Among the predictions we know of, most are from AI researchers, and a large fraction of those are from AGI researchers. Of surveys we have recorded, 80% target AI or AGI researchers, and around half of them target AGI researchers in particular. Statements in the MIRI dataset since 2000 include 13 from AGI researchers, 16 from AI researchers, 6 from futurists, and 6 from others. This suggests we should expect aggregated predictions from surveys and statements to be optimistic, by roughly decades.\nConclusions\nIt seems likely that AI and AGI researchers’ predictions exhibit a selection bias toward being early, based on reason to expect such a bias, the large disparity between AI and AGI researchers’ predictions (while AI researchers seem likely to be optimistic if anything), and the consistency between the distributions we see and those we would expect under the selection bias explanation for disagreement. Since AI and AGI researchers are heavily represented in prediction data, predictions are likely to be biased toward optimism, by roughly decades.\n \nRelevance\nAccuracy of AI predictions: many AI timeline predictions come from AI researchers and AGI researchers, and people interested in futurism. If we want to use these predictions to estimate AI timelines, it is valuable to know how biased they are, so we can correct for such biases.\nDetecting relevant expertise: if the difference between AI and AGI researcher predictions is not due to bias, then it suggests one group had additional information. Such information would be worth investigating.", "url": "https://aiimpacts.org/bias-from-optimistic-predictors/", "title": "Selection bias from optimistic experts", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-05-29T18:50:48+00:00", "paged_url": "https://aiimpacts.org/feed?paged=20", "authors": ["Katja Grace"], "id": "622175fc15407588477aa1113b771e15", "summary": []} {"text": "Why do AGI researchers expect AI so soon?\n\nBy Katja Grace, 24 May 2015\nPeople have been predicting when human-level AI will appear for many decades. 
A few years ago, MIRI made a big, organized collection of such predictions, along with helpful metadata. We are grateful, and just put up a page about this dataset, including some analysis. Some of you saw an earlier version of it on an earlier version of our site.\nThere are lots of interesting things to say about the collected predictions. One interesting thing you might say is ‘wow, the median predictor thinks human-level AI will arrive in the 2030s—that’s kind of alarmingly soon’. While this is true, another interesting thing is that different groups have fairly different predictions. This means the overall median date is especially sensitive to who is in the sample.\nIn this particular dataset, who is in the sample depends a lot on who bothers to make public predictions. And another interesting fact is that people who bother to make public predictions have shorter AI timelines than people who are surveyed more randomly. This means the predictions you see here are probably biased in the somewhat early direction. We’ll talk about that another time. For now, I’d like to show you some of the interesting differences between groups of people.\nWe divided the people who made predictions into those in AI, those in AGI, futurists and others. This was a quick and imprecise procedure mostly based on Paul’s knowledge of the fields and the people, and some Googling. Paul doesn’t think he looked at the prediction dates before categorizing, though he probably basically knew some already. For each person in the dataset, we also interpreted their statement as a loose claim about when human-level AI was less likely than not to have arrived and when it was more likely than not to have arrived.\nBelow is what some of the different groups’ predictions look like, for predictions made since 2000. At each date, the line shows what fraction of predictors in that group think AI will already have happened by then, more likely than not. Note that they may also think AI will have happened before then: statements were not necessarily about the first year on which AI would arrive.\nFigure 1: Cumulative distributions of predictions made since 2000 by different groups of people\nThe groups’ predictions look pretty different, and mostly in ways you might expect: futurists and AGI researchers are more optimistic than other AI researchers, who are more optimistic than ‘others’. The median years given by different groups span seventy years, though this is mostly due to ‘other’, which is a small group. Medians for AI and AGI are eighteen years apart.\nThe ‘futurist’ and ‘other’ categories are twelve people together, and the line between being a futurist and merely pronouncing on the future sometimes seems blurry. It is interesting that the futurists here look very different from the ‘others’, but I wouldn’t read that much into it. It may just be that Paul’s perception of who is a futurist depends on degree of confidence about futuristic technology.\nMost of the predictors are in the AI or AGI categories. These groups have markedly different expectations. About 85% of AGI researchers are more optimistic than the median AI researcher. This is particularly important because ‘expert predictions’ about AI usually come from some combination of AI and AGI researchers, and it looks like what the combination is may alter the median date by around two decades.\nWhy would AGI researchers be systematically more optimistic than other AI researchers?
There are perhaps too many plausible explanations for the discrepancy.\nMaybe AGI researchers are—like many—overoptimistic about their own project. Planning fallacy is ubiquitous, and planning fallacy about building AGI naturally shortens overall AGI timelines.\nAnother possibility is expertise: perhaps human-level AI really will arrive soon, and the AGI researchers are close enough to the action to see this, while it takes time for the information to percolate to others. The AI researchers are also somewhat informed, so their predictions are partway between those of the AGI researchers, and those of the public.\nAnother reason is selection bias. AI researchers who are more optimistic about AGI will tend to enter the subfield of AGI more often than those who think human-level AI is a long way off. Naturally then, AGI researchers will always be more optimistic about AGI than AI researchers are, even if they are all reasonable and equally well informed. It seems hard to imagine some of the effect not being caused by this.\nIt matters which explanations are true: expertise means we should listen to AGI researchers above others. Planning fallacy and selection bias suggest we should not listen to them so much, or at least not directly. If we want to listen to them in those cases, we might want to make different adjustments to account for biases.\nHow can we tell which explanations are true? The shapes of the curves could give some evidence. What would we expect the curves to look like if the different explanations were true? Planning fallacy might look like the entire AI curve being shifted fractionally to the left to produce the AGI curve – e.g. so all of the times are halved. Selection bias would make the AGI curve look like the bottom of the AI curve, or the AI curve with its earlier parts heavily weighted. Expertise could look like dates that everyone in the know just doesn’t predict. Or the predictions might just form a narrower, more accurate, band. In fact all of these would lead to pretty similar looking graphs, and seem to roughly fit the data. So I don’t think we can infer much this way.\nDo you favor any of the hypotheses I mentioned? Or others? How do you distinguish between them?\n \n\nOur page about demographic differences in AI predictions is here. \nOur page about the MIRI AI predictions dataset is here.", "url": "https://aiimpacts.org/why-do-agi-researchers-expect-ai-so-soon/", "title": "Why do AGI researchers expect AI so soon?", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-05-25T00:03:56+00:00", "paged_url": "https://aiimpacts.org/feed?paged=20", "authors": ["Katja Grace"], "id": "fdb4af42b25d314d531470b91d1f3315", "summary": []} {"text": "Group Differences in AI Predictions\n\nUpdated 9 November 2020\nIn 2015 AGI researchers appeared to expect human-level AI substantially sooner than other AI researchers. The difference ranges from about five years to at least about sixty years as we move from highest percentiles of optimism to the lowest. Futurists appear to be around as optimistic as AGI researchers. Other people appear to be substantially more pessimistic than AI researchers.\nDetails\nMIRI dataset\nWe categorized predictors in the MIRI dataset as AI researchers, AGI (artificial general intelligence) researchers, Futurists and Other. 
We also interpreted their statements into a common format, roughly corresponding to the first year in which the person appeared to be suggesting that human-level AI was more likely than not (see ‘minPY’ described here).\nRecent (since 2000) predictions are shown in the figure below. Those made by people working on AGI specifically tended to be decades more optimistic than those at the same percentile of optimism working in other areas of AI. The difference ranged from around five years to at least around sixty years as we move from the soonest predictions to the latest. Those who worked in AI broadly tended to be at least a decade more optimistic than ‘others’, at any percentile of optimism within their group. Futurists were about as optimistic as AGI researchers.\nNote that these predictions were made over a period of at least 12 years, rather than at the same time.\nFigure 1: Cumulative probability of AI being predicted (minPY), for various groups, for predictions made after 2000. See here.\nMedian predictions are shown below (these are also minPY predictions as defined on the MIRI dataset page, calculated from ‘cumulative distributions’ sheet in updated dataset spreadsheet also available there).\n\n\n\n Median AI predictions\n AGI\n AI\n Futurist\n Other\n All\n\n\n Early (pre-2000) (warning: noisy)\n\n 1988\n 2031\n 2036\n 2025\n\n\n Late (since 2000)\n 2033\n 2051\n 2031\n 2101\n 2042\n\n\n\nFHI survey data\nThe FHI survey results suggest that people’s views are not very different if they work in computer science or other parts of academia. We have not investigated this evidence in more detail.\nImplications\nBiases from optimistic predictors and information asymmetries: Differences of opinion among groups who predict AI suggest that either some groups have more information, or that biases exist between predictions made by the groups (e.g. even among unbiased but noisy forecasters, if only people most optimistic about a field enter it, then the views of those in the field will be biased toward optimism) . Either of these is valuable to know about, so that we can either look into the additional information, or try to correct for the biases.", "url": "https://aiimpacts.org/group-differences-in-ai-predictions/", "title": "Group Differences in AI Predictions", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-05-24T20:37:27+00:00", "paged_url": "https://aiimpacts.org/feed?paged=20", "authors": ["Katja Grace"], "id": "b01fde5043c39310a18468c7c9225742", "summary": []} {"text": "Supporting AI Impacts\n\n\nBy Katja Grace, 21 May 2015\nWe now have a donations page. If you like what we are doing as much as anything else you can think of to spend marginal dollars on, I encourage you to support this project! Money will go to more of the kind of thing you see, including AI Impacts’ existence.\nBriefly, I think AI Impacts is worth supporting because AI is a really big deal, improving our forecasts of AI is a neglected leg of AI preparations, and there are cheap, tractable projects which could improve our forecasts. I hope to elaborate on these claims more quantitatively in the future.\nIf you like what we are doing enough to want to hear about it sometimes, but not enough to want to pay for it, you might want to follow us on Facebook or Twitter or RSSs (blog, featured articles). 
If you don’t like what we are doing even that much, and you think we could do better, we’d always love to hear about it.\n(Image: Rosario Fiore)", "url": "https://aiimpacts.org/supporting-ai-impacts/", "title": "Supporting AI Impacts", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-05-22T05:27:11+00:00", "paged_url": "https://aiimpacts.org/feed?paged=21", "authors": ["Katja Grace"], "id": "7914573268c400eebcc71cac684f5dad", "summary": []} {"text": "The Maes-Garreau Law\n\nThe Maes-Garreau law posits that people tend to predict exciting future technologies toward the end of their lifetimes. It probably does not hold for predictions of human-level AI.\nClarification\nFrom Wikipedia:\nThe Maes–Garreau law is the statement that “most favorable predictions about future technology will fall within the Maes–Garreau point”, defined as “the latest possible date a prediction can come true and still remain in the lifetime of the person making it”. Specifically, it relates to predictions of a technological singularity or other radical future technologies.\nThe law was posited by Kevin Kelly, here.\nEvidence\nIn the MIRI dataset, age and predicted time to AI are very weakly anti-correlated, with a correlation of -0.017. That is, older people expect AI very slightly sooner than others. This suggests that if the Maes-Garreau law applies to human-level AI predictions, it is very weak, or is being masked by some other effect. Armstrong and Sotala also interpret an earlier version of the same dataset as evidence against the Maes-Garreau law substantially applying, using a different method of analysis.\nEarlier, smaller, informal analyses find evidence of the law, but in different settings. According to Rodney Brooks (according to Kevin Kelly), Pattie Maes observed this effect strongly in a survey of public predictions of human uploading:\n[Maes] took as many people as she could find who had publicly predicted downloading of consciousness into silicon, and plotted the dates of their predictions, along with when they themselves would turn seventy years old. Not too surprisingly, the years matched up for each of them. Three score and ten years from their individual births, technology would be ripe for them to download their consciousnesses into a computer. Just in the nick of time! They were each, in their own minds, going to be remarkably lucky, to be in just the right place at the right time.\nHowever, according to Kelly, the data was not kept.\nKelly did another small search for predictions of the singularity, which appears to only support a very weakened version of the law: many people predict AI within their lifetime.\nThe hypothesized reason for this relationship is that people would like to believe they will personally avoid death. If this is true, we might expect the relation to apply much more strongly to predictions of events which might fairly directly save a person from death. Human uploading and the singularity are such events, while human-level AI does not appear to be. Thus it is plausible that this law does apply to some technological predictions, but not human-level AI.\nImplications\nEvidence about wishful thinking: the Maes-Garreau law is a relatively easy to check instance of a larger class of hypotheses to do with AI predictions being directed by wishful thinking. If wishful thinking were a large factor in AI predictions, this would undermine accuracy because it is not related to when human-level AI will appear. 
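For reference, the check behind the correlation figure quoted above is just a correlation between predictors' ages and the number of years between their predictions and their predicted dates. A minimal sketch, with invented numbers standing in for the MIRI dataset:

```python
import statistics  # statistics.correlation requires Python 3.10+

# Invented example rows: (predictor's age when predicting, years until their predicted AI date).
ages = [35, 42, 50, 58, 63, 70]
years_to_ai = [25, 40, 15, 45, 20, 35]

# Pearson correlation. A strong Maes-Garreau effect would show up as a clearly
# negative value: older predictors have fewer years left, so they would predict
# AI sooner. A value near zero, as in the real data, is evidence against the law.
r = statistics.correlation(ages, years_to_ai)
print(f"correlation between age and predicted years to AI: {r:.3f}")
```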
That the Maes-Garreau law doesn’t seem to hold is evidence against wishful thinking being a strong determinant of AI predictions. Further evidence might be obtained by observing the correlation between belief that human-level AI will be positive for society and belief that it will come soon.", "url": "https://aiimpacts.org/the-maes-garreau-law/", "title": "The Maes-Garreau Law", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-05-20T11:18:50+00:00", "paged_url": "https://aiimpacts.org/feed?paged=21", "authors": ["Katja Grace"], "id": "1433c5411b66756d8b5a68ef107d15d9", "summary": []} {"text": "AI Timeline predictions in surveys and statements\n\nSurveys seem to produce median estimates of time to human-level AI which are roughly a decade later than those produced from voluntary public statements.\nDetails\nWe compared several surveys to predictions made by similar groups of people in the MIRI AI predictions dataset, and found that predictions made in surveys were roughly 0-2 decade later. This was a rough and non-rigorous comparison, and we made no effort to control for most variables.\nStuart Armstrong and Kaj Sotala make a similar comparison here, and also find survey data to give later predictions. However they are comparing non-survey data largely from recent decades with survey data entirely from 1973, which we think makes the groups too different in circumstance to infer much about surveys and statements in particular. Though in the MIRI dataset (that they used), very early predictions tend to be more optimistic than later predictions, if anything, so if they had limited themselves to predictions from similar times there would have been a larger difference (though with a very small sample of statements).\nRelevance\nAccuracy of AI predictions: some biases which probably exist in public statements about AI predictions are likely to be smaller or not apply in survey data. For instance, public statements are probably more likely to be made by people who believe they have surprising or interesting views, whereas this should much less influence answers to a survey question once someone is taking a survey. Thus comparing data from surveys and voluntary statements can tell us about the strength of such biases. Given that median survey predictions are rarely more than a decade later than similar statements, and survey predictions seem unlikely to be strongly biased in this way, median statements are probably less than a decade early as a result of this bias.", "url": "https://aiimpacts.org/ai-timeline-predictions-in-surveys-and-statements/", "title": "AI Timeline predictions in surveys and statements", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-05-20T11:01:48+00:00", "paged_url": "https://aiimpacts.org/feed?paged=21", "authors": ["Katja Grace"], "id": "3ab2dcf1651749aa391a29b15fe9cfe0", "summary": []} {"text": "MIRI AI Predictions Dataset\n\nThe MIRI AI predictions dataset is a collection of public predictions about human-level AI timelines. We edited the original dataset, as described below. 
Our dataset is available here, and the original here.\nInteresting features of the dataset include:\n\nThe median dates at which people’s predictions suggest AI is less likely than not and more likely than not are 2033 and 2037 respectively.\nPredictions made before 2000 and after 2000 are distributed similarly, in terms of time remaining when the prediction is made\nSix predictions made before 1980 were probably systematically sooner than predictions made later.\nAGI researchers appear to be more optimistic than AI researchers.\nPeople predicting AI in public statements (in the MIRI dataset) predict earlier dates than demographically similar survey takers do.\nAge and predicted time to AI are almost entirely uncorrelated: r = -.017.\n\nDetails\nHistory of the dataset\nWe got the original MIRI dataset from here. According to the accompanying post, the Machine Intelligence Research Institute (MIRI) commissioned Jonathan Wang and Brian Potter to gather the data. Kaj Sotala and Stuart Armstrong analyzed and categorized it (their categories are available in both versions of the dataset). It was used in the papers Armstrong and Sotala 2012 and Armstrong and Sotala 2014. We modified the dataset, as described below. Our version is here.\nOur changes to the dataset\nThese are changes we made to the dataset:\n\nThere were a few instances of summary results from large surveys included as single predictions – we removed these because survey medians and individual public predictions seem to us sufficiently different to warrant considering separately.\nWe removed entries which appeared to be duplications of the same data, from different sources.\nWe removed predictions made by the same individual within less than ten years.\nWe removed some data which appeared to have been collected in a biased fashion, where we could not correct the bias.\nWe removed some entries that did not seem to be predictions about general artificial intelligence\nWe may have removed some entries for other similar reasons\nWe added some predictions we knew of which were not in the data.\nWe fixed some small typographic errors.\n\nDeleted entries can be seen in the last sheet of our version of the dataset. Most have explanations in one of the last few columns.\nWe continue to change the dataset as we find predictions it is missing, or errors in it. The current dataset may not exactly match the descriptions on this page.\nHow did our changes matter?\nImplications of the above changes:\n\nThe dataset originally had 95 predictions; our version has 65 at last count.\nArmstrong and Sotala transformed each statement into a ‘median’ prediction. In the original dataset, the mean ‘median’ was 2040 and the median ‘median’ 2030. After our changes, the mean ‘median’ is 2046 and the median ‘median’ remains at 2030. The means are highly influenced by extreme outliers.\nWe have not evaluated Armstrong and Sotala’s findings in the updated dataset. One reason is that their findings are mostly qualitative. For instance, it is a matter of judgment whether there is still ‘a visible difference’ between expert and non-expert performance. Our judgment may differ from those authors anyway, so it would be unclear whether the change in data changed their findings. We address some of the same questions by different methods.\n\nminPY and maxIY predictions\nPeople say many slightly different things about when human-level AI will arrive. 
We interpreted predictions into a common format: one or both of a claim about when human-level AI would be less likely than not, and a claim about when human-level AI would be more likely than not. Most people didn’t explicitly use such language, so we interpreted things roughly, as closely as we could. For instance, if someone said ‘AI will not be here by 2080’ we would interpret this as AI being less likely to exist than not by that date.\nThroughout this page, we use ‘minimum probable year’ (minPY) to refer to the minimum time when a person is interpreted as stating that AI is more likely than not. We use ‘maximum improbable year’ (maxIY) to refer to the maximum time when a person is interpreted as stating that AI is less likely than not. To be clear, these are not necessarily the earliest and latest times that a person holds the requisite belief – just the earliest and latest times that is implied by their statement. For instance, if a person says ‘I disagree that we will have human-level AI in 2050’, then we interpret this as a maxIY prediction of 2050, though they may well also believe AI is less likely than not in 2065 also. We would not interpret this statement as implying any minPY. We interpreted predictions like ‘AI will arrive in about 2045’ as 2045 being the date at which AI would become more likely than not, so both minPY and a maxIY of 2045.\nThis is different to the ‘median’ interpretation Armstrong and Sotala provided. Which is not necessarily to disagree with their measure: as Armstrong points out, it is useful to have independent interpretations of the predictions. Both our measure and theirs could mislead in different circumstances. People who say ‘AI will come in about 100 years’ and ‘AI will come within about 100 years’ probably don’t mean to point to estimates 50 years apart (as they might be seen to in Armstrong and Sotala’s measure). On the other hand, if a person says ‘AI will obviously exist before 3000AD’ we will record it as ‘AI is more likely than not from 3000AD’ and it may be easy to forget that in the context this was far from the earliest date at which they thought AI was more likely than not.\n\n\n\n\n Original A&S ‘median’\n Updated A&S ‘median’\nminPY\n maxIY\n\n\n Mean\n2040\n2046\n2067\n2067\n\n\n Median\n2030\n2030\n2037\n2033\n\n\n\nTable 1: Summary of mean and median AI predictions under different interpretations\nAs shown in Table 1, our median dates are a few years later than Armstrong & Sotala’s original or updated dates, and only four years from one another.\nCategories used in our analysis\nTiming\n‘Early’ throughout refers to before 2000. ‘Late’ refers to 2000 onwards. We split the predictions in this way because often we are interested in recent predictions, and 2000 is a relatively natural recent cutoff. We chose this date without conscious attention to the data beyond the fact that there have been plenty of predictions since 2000.\nExpertise\nWe categorized people as ‘AGI’, ‘AI’, ‘futurist’ and ‘other’ as best we could, according to their apparent research areas and activities. These are ambiguous categories, but the ends to which we put such categorization do not require that they be very precise.\nFindings\nBasic statistics\nThe median minPY is 2037 and median maxIY is 2033 (see  ‘Basic statistics’ sheet). The mean minPY is 2067, which is the same as the mean maxIY (see ‘Basic statistics’ sheet). These means are fairly meaningless, as they are influenced greatly by a few extreme outliers. 
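As an illustration of how statements are turned into these numbers, here is a minimal sketch; the statements below are modelled on the examples above but are not entries from the dataset:

```python
import statistics

# Invented statements interpreted into (maxIY, minPY) pairs, following the rules above:
#   'AI will arrive in about YEAR'     -> maxIY = minPY = YEAR
#   'No human-level AI by YEAR'        -> maxIY = YEAR, no minPY implied
#   'AI will surely exist before YEAR' -> minPY = YEAR, no maxIY implied
interpreted = [
    {"statement": "AI will arrive in about 2045", "maxIY": 2045, "minPY": 2045},
    {"statement": "I disagree that we will have human-level AI in 2050", "maxIY": 2050, "minPY": None},
    {"statement": "AI will obviously exist before 3000", "maxIY": None, "minPY": 3000},
    {"statement": "AI will arrive in about 2030", "maxIY": 2030, "minPY": 2030},
]

minPYs = [p["minPY"] for p in interpreted if p["minPY"] is not None]
maxIYs = [p["maxIY"] for p in interpreted if p["maxIY"] is not None]

# The median is far more robust than the mean to outliers like the year-3000 statement.
print("median minPY:", statistics.median(minPYs), " mean minPY:", round(statistics.mean(minPYs), 1))
print("median maxIY:", statistics.median(maxIYs), " mean maxIY:", round(statistics.mean(maxIYs), 1))
```

The gap between the medians and the outlier-driven means in this toy example mirrors the pattern in the real data.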
Figure 1 shows the distribution of most of the predictions.\nFigure 1: minPY (‘AI after’) and maxIY (‘No AI till’) predictions(from ‘Basic statistics’ sheet)\nThe following figures shows the fraction of predictors over time who claimed that human-level AI is more likely to have arrived by that time than not (i.e. minPY predictions). The first is for all predictions, and the second for predictions since 2000. The first graph is hard to meaningfully interpret, because the predictions were made in very different volumes at very different times. For instance, the small bump on the left is from a small number of early predictions. However it gives a rough picture of the data.\nFigure 2: Fraction of all minPY predictions which say AI will have arrived, over time (From ‘Cumulative distributions’ sheet).\nFigure 3: Fraction of late minPY predictions (made since 2000) which say AI will have arrived, over time (From ‘Cumulative distributions’ sheet).\nRemember that these are dates from which people claimed something like AI being more likely than not. Such dates are influenced not only by what people believe, but also by what they are asked. If a person believes that AI is more likely than not by 2020, and they are asked ‘will there be AI in 2060’ they will respond ‘yes’ and this will be recorded as a prediction of AI being more likely than not after 2060. The graph is thus an upper bound for when people predict AI is more likely than not. That is, the graph of when people really predict AI with 50 percent confidence keeps somewhere to the left of the one in figures 2 and 3.\nSimilarity of predictions over time\nIn general, early and late predictions are distributed fairly similarly over the years following them. For minPY predictions, the correlation between the date of a prediction and number of years until AI is predicted from that time is 0.13 (see ‘Basic statistics’ sheet). Figure 5 shows the cumulative probability of AI being predicted over time, by late and early predictors. At a glance, they are surprisingly similar. The largest difference between the fraction of early and of late people who predict AI by any given distance in the future is about 15% (see ‘Predictions over time 2’ sheet). A difference this large is fairly likely by chance. However most of the predictions were made within twenty years of one another, so it is not surprising if they are similar.\nThe six very early predictions do seem to be unusually optimistic. They are all below the median 30 years, which would have a 1.6% probability of occurring by chance.\nFigures 4-7 illustrate the same data in different formats.\nFigure 4: Time left until minPY predictions, by date when they were made. (From ‘Basic statistics’ sheet)\nFigure 5: Cumulative probability of AI being predicted (minPY) different distances out for early and late predictors (From ‘Predictions over time 2’ sheet)\nFigure 6: Fraction of minPY predictions at different distances in the future, for early and late predictors (From ‘Predictions over time’ sheet)\nFigure 7: Cumulative probability of AI being predicted by a given date, for early and late predictors (minPY). (From ‘Cumulative distributions’ sheet)\nGroups of participants\nAssociations with expertise and enthusiasm\nSummary\nAGI people in this dataset are generally substantially more optimistic than AI people. 
Among the small number of futurists and others, futurists were optimistic about timing, and others were pessimistic.\nDetails\nWe classified the predictors as AGI researchers, (other) AI researchers, Futurists and Other, and calculated CDFs of their minPY  predictions, both for early and late predictors. The figures below show a selection of these. Recall that ‘early’ and ‘late’ correspond to before and after 2000.\nAs we can see in figure 8, Late AGI predictors are substantially more optimistic than late AI predictors: for almost any date this century, at least 20% more AGI people predict AI by then. The median late AI researcher minPY is 18 years later than the median AGI researcher minPY. We haven’t checked whether this is partly caused by predictions by AGI researchers having been made earlier.\nThere were only 6 late futurists, and 6 late ‘other’ (compared to 13 and 16 late AGI and late AI respectively), so the data for these groups is fairly noisy. Roughly, late futurists in the sample were more optimistic than anyone, while late ‘other’ were more pessimistic than anyone.\nThere were no early AGI people, and only three early ‘other’. Among seven early AI and eight early futurists, the AI people predicted AI much earlier (70% of early AI people predict AI before any early futurists do), but this seems to be at least partly explained by the early AI people being concentrated very early, and people predicting AI similar distances in the future throughout time.\nFigure 8: Cumulative probability of AI being predicted over time, for late AI and late AGI predictors.(See ‘Cumulative distributions’ sheet)\nFigure 9: Cumulative probability of AI being predicted over time, for all late groups. (See ‘Cumulative distributions’ sheet)\n\n\n\n Median minPY predictions\n AGI\n AI\n Futurist\n Other\n All\n\n\n Early (warning: noisy)\n –\n 1988\n 2031\n 2036\n 2024\n\n\n Late\n 2033\n 2051\n 2030\n 2101\n 2042\n\n\n\nTable 2: Median minPY predictions for all groups, late and early. There were no early AGI predictors.\nStatement makers and survey takers\nSummary\nSurveys seem to produce later median estimates than similar individuals making public statements do. We compared some of the surveys we know of to the demographically similar predictors in the MIRI dataset. We expected these to differ because predictors in the MIRI dataset are mostly choosing to making public statements, while survey takers are being asked, relatively anonymously, for their opinions. Surveys seem to produce median dates on the order of a decade later than statements made by similar groups.\nDetails\nWe expect surveys and voluntary statements to be subject to different selection biases. In particular, we expect surveys to represent a more even sample of opinion, while voluntary statements to be more strongly concentrated among people with exciting things to say or strong agendas. To learn about the difference between these groups, and thus the extent of any such bias, we below compare median predictions made in surveys to median predictions made by people from similar groups in voluntary statements.\nNote that this is rough: categorizing people is hard, and we have not investigated the participants in these surveys more than cursorily. There are very few ‘other’ predictors in the MIRI dataset. 
The results in this section are intended to provide a ballpark estimate only.\nAlso note that while both sets of predictions are minPYs, the survey dates are often the actual median year that a person expects AI, whereas the statements could often be later years which the person happens to be talking about.\n\n\n\nSurvey\nPrimary participants\n Median minPY prediction in comparable statements in the MIRI data\n Median in survey\n Difference\n\n\n Kruel (AI researchers)\n AI\n 2051\n 2062\n+11\n\n\n Kruel (AGI researchers)\n AGI\n2033\n 2031\n-2\n\n\n AGI-09\n AGI\n 2033\n 2040\n+7\n\n\n FHI\n AGI/other\n 2033-2062\n 2050\nin range\n\n\n Klein\n Other/futurist\n 2030-2062\n 2050\nin range\n\n\n AI@50\n AI/Other\n 2051-2062\n 2056\nin range\n\n\n Bainbridge\n Other\n 2062\n 2085\n+23\n\n\n\nTable 3: median predictions in surveys and statements from demographically similar groups.\nNote that the Kruel interviews are somewhere between statements and surveys, and are included in both data.\nIt appears that the surveys give somewhat later dates than similar groups of people making statements voluntarily. Around half of the surveys give later answers than expected, and the other half are roughly as expected. The difference seems to be on the order of a decade. This is what one might naively expect in the presence of a bias from people advertising their more surprising views.\nRelation of predictions and lifespan\nAge and predicted time to AI are very weakly anti-correlated: r = -.017 (see Basic statistics sheet, “correlation of age and time to prediction”). This is evidence against a posited bias to predict AI within your existing lifespan, known as the Maes-Garreau Law.", "url": "https://aiimpacts.org/miri-ai-predictions-dataset/", "title": "MIRI AI Predictions Dataset", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-05-20T10:18:11+00:00", "paged_url": "https://aiimpacts.org/feed?paged=21", "authors": ["Katja Grace"], "id": "3d6af7206fbef2bf27238e7269daaabe", "summary": []} {"text": "A new approach to predicting brain-computer parity\n\nBy Katja Grace, 7 May 2015\nHow large does a computer need to be before it is ‘as powerful’ as the human brain?\nThis is a difficult question, which people have answered before, with much uncertainty.\nWe have a new answer! (Longer description here; summary in the rest of this post.) This answer is based on ‘traversed edges per second’ (TEPS), a metric which emphasizes communication within a computer, instead of computing operations (like FLOPS). That is, TEPS measures how fast information can move around.\nCommunication can be a substantial bottleneck for big computers, slowing them down in spite of their powerful computing capacity. It seems plausible that communication is also a bottleneck for the brain, which is both a big computer, and one that spends lots of resources on communication. This is one reason to measure the brain in terms of TEPS: if communication is a bottleneck, then it is especially important to know when computers will achieve similar performance to the brain there, not just on easier aspects of being a successful computer.\nThe TEPS benchmark asks the computer to simulate a graph, and then to search through it. The question is how many edges in the graph the computer can follow per second. We can’t ask the brain to run the TEPS benchmark, but the brain is already a graph of neurons, and we can measure edges being traversed in it (action potentials communicating between neurons). 
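For concreteness, here is a toy version of the computer-side measurement: a breadth-first search over a small random graph that counts the edges it follows and divides by the elapsed time. This is an illustrative sketch only, not the official Graph 500 benchmark code, and the graph is tiny by benchmark standards.

```python
import random
import time
from collections import deque

random.seed(0)

# Build a small random graph: n nodes, each with a handful of random out-neighbours.
n = 200_000
adjacency = [[random.randrange(n) for _ in range(8)] for _ in range(n)]

def bfs_teps(start):
    """Breadth-first search from `start`, returning traversed edges per second."""
    visited = [False] * n
    visited[start] = True
    queue = deque([start])
    edges_traversed = 0
    t0 = time.perf_counter()
    while queue:
        node = queue.popleft()
        for neighbour in adjacency[node]:
            edges_traversed += 1          # every edge followed counts, visited or not
            if not visited[neighbour]:
                visited[neighbour] = True
                queue.append(neighbour)
    elapsed = time.perf_counter() - t0
    return edges_traversed / elapsed

print(f"~{bfs_teps(0):.2e} traversed edges per second on this toy graph")
```

Real Graph 500 runs use far larger graphs spread across many processors, which is exactly where communication, rather than raw arithmetic, dominates the cost.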
So we can count how many edges are traversed in the brain per second, and compare this to existing computer hardware.\nThe brain seems to have around 1.8-3.2 x 10^14  synapses. We’d like to know how often these synapses convey spikes, but this has been too hard to discover. So we use neuron firing frequency as a proxy. We previously calculated that each neuron spikes around 0.1-2 times per second. Together with the number of synapses, this suggests the brain performs at around 0.18 – 6.4 * 1014 TEPS. This assumes many things, and is hazy in many ways, some of which are detailed in our longer page on the topic. The estimate could be tightened on many fronts with more work.\nThe Sequoia supercomputer is currently the best computer in the world on the TEPS benchmark. Its record is 2.3 *1013 TEPS. So the human brain seems to be somewhere between as powerful and thirty times as powerful as the best supercomputer, in terms of TEPS.\nAt current prices for TEPS, the brain’s performance should cost roughly $4,700 – $170,000/hour. Our previous fairly wild guess was that TEPS prices should improve by a factor of ten every four years. If this is true, it should take seven to fourteen years for a computer which costs $100/hour to be competitive with the human brain. At that point, if having human-level hardware in terms of TEPS were enough to have human-level AI, human-level AI should be replacing well paid humans.\nMoravec’s and Kurzweil’s estimates of computation in the brain suggest human-equivalent hardware should cost $100/hour either some time in the past or in about four years respectively, so our TEPS estimate is actually late relative to those. However they are all pretty close together. Sandberg and Bostrom’s estimates of hardware required to emulate a brain span from around then to around thirty years later, though note that emulating is different from replicating functionally. Altogether ‘human-level’ hardware seems likely to be upon us soon, if it isn’t already. The estimate from TEPS points to the near future even more strongly.\n(Featured image by MartinGrandjean)", "url": "https://aiimpacts.org/tepsbrainestimate/", "title": "A new approach to predicting brain-computer parity", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-05-08T00:36:48+00:00", "paged_url": "https://aiimpacts.org/feed?paged=21", "authors": ["Katja Grace"], "id": "0b2a741c68a024411e25a199c800f77b", "summary": []} {"text": "Brain performance in TEPS\n\nTraversed Edges Per Second (TEPS) is a benchmark for measuring a computer’s ability to communicate information internally. Given several assumptions, we can also estimate the human brain’s communication performance in terms of TEPS, and use this to meaningfully compare brains to computers. We estimate that (given these assumptions) the human brain performs around  0.18 – 6.4 * 1014 TEPS. This is within an order of magnitude more than existing supercomputers.\nAt current prices for TEPS, we estimate that it costs around $4,700 – $170,000/hour to perform at the level of the brain. Our best guess is that ‘human-level’ TEPS performance will cost less than $100/hour in seven to fourteen years, though this is highly uncertain.\nMotivation: why measure the brain in TEPS?\nWhy measure communication?\nPerformance benchmarks such as floating point operations per second (FLOPS) and millions of instructions per second (MIPS) mostly measure how fast a computer can perform individual operations. 
However a computer also needs to move information around between the various components performing operations.1 This communication takes time, space and wiring, and so can substantially affect overall performance of a computer, especially on data intensive applications. Consequently when comparing computers it is useful to have performance metrics that emphasize communication as well as ones that emphasize computation. When comparing computers to the brain, there are further reasons to be interested in communication performance, as we shall see below.\nCommunication is a plausible bottleneck for the brain\nIn modern high performance computing, communication between and within processors and memory is often a significant cost.2 3 4 5 Our impression is that in many applications it is more expensive than performing individual bit operations, making operations per second a less relevant measure of computing performance.\nWe should expect computers to become increasingly bottlenecked on communication as they grow larger, for theoretical reasons. If you scale up a computer, it requires linearly more processors, but superlinearly more connections for those processors to communicate with one another quickly. And empirically, this is what happens: the computers which prompted the creation of the TEPS benchmark were large supercomputers.\nIt’s hard to estimate the relative importance of computation and communication in the brain. But there are some indications that communication is an important expense for the human brain as well. A substantial part of the brain’s energy is used to transmit action potentials along axons rather than to do non-trivial computation.6 Our impression is also that the parts of the brain responsible for communication (e.g. axons) comprise a substantial fraction of the brain’s mass. That substantial resources are spent on communication suggests that communication is high value on the margin for the brain. Otherwise, resources would likely have been directed elsewhere during our evolutionary history.\nToday, our impression is that networks are typically implemented on single machines because communication between processors is otherwise very expensive. But the power of individual processors is not increasing as rapidly as costs are falling, and even today it would be economical to use thousands of machines if doing so could yield human-level AI. So it seems quite plausible that communication will become a very large bottleneck as neural networks scale further.\nIn sum, we suspect communication is a bottleneck for the brain for three reasons: the brain is a large computer, similar computing tasks tend to be bottlenecked in this way, and the brain uses substantial resources on communication.\nIf communication is a bottleneck for the brain, this suggests that it will also be a bottleneck for computers with similar performance to the brain. It does not strongly imply this: a different kind of architecture might be bottlenecked by different factors.\nCost-effectiveness of measuring communication costs\nIt is much easier to estimate communication within the brain than to estimate computation. This is because action potentials seem to be responsible for most of the long-distance communication7, and their information content is relatively easy to quantify. 
It is much less clear how many ‘operations’ are being done in the brain, because we don’t know in detail how the brain represents the computations it is doing.\nAnother issue that makes computing performance relatively hard to evaluate is the potential for custom hardware. If someone wants to do a lot of similar computations, it is possible to design custom hardware which computes much faster than a generic computer. This could happen with AI, making timing estimates based on generic computers too late. Communication may also be improved by appropriate hardware, but we expect the performance gains to be substantially smaller. We have not investigated this question.\nMeasuring the brain in terms of communication is especially valuable because it is a relatively independent complement to estimates of the brain’s performance based on computation. Moravec, Kurzweil and Sandberg and Bostrom have all estimated the brain’s computing performance, and used this to deduce AI timelines. We don’t know of estimates of the total communication within the brain, or the cost of programs with similar communication requirements on modern computers. These an important and complementary aspect of the cost of ‘human-level’ computing hardware.\nTEPS\nTraversed edges per second (TEPS) is a metric that was recently developed to measure communication costs, which were seen as neglected in high performance computing.8 The TEPS benchmark measures the time required to perform a breadth-first search on a large random graph, requiring propagating information across every edge of the graph (either by accessing memory locations associated with different nodes, or communicating between different processors associated with different nodes).9  You can read about the benchmark in more detail at the Graph 500 site.\nTEPS as a meaningful way to compare brains and computers\nBasic outline of how to measure a brain in TEPS\nThough a brain cannot run the TEPS benchmark, we can roughly assess the brain’s communication ability in terms of TEPS. The brain is a large network of neurons, so we can ask how many edges between the neurons (synapses) are traversed (transmit signals) every second. This is equivalent to TEPS performance in a computer in the sense that the brain is sending messages along edges in a graph. However it differs in other senses. For instance, a computer with a certain TEPS performance can represent many different graphs and transmit signals in them, whereas we at least do not know how to use the brain so flexibly. This calculation also makes various assumptions, to be discussed shortly.\nOne important interpretation of the brain’s TEPS performance calculated in this way is as a lower bound on communication ability needed to simulate a brain on a computer to a level of detail that included neural connections and firing. The computer running the simulation would need to be traversing this many edges per second in the graph that represented the brain’s network of neurons.\nAssumptions\nMost relevant communication is between neurons\nThe brain could be simulated at many levels of detail. For instance, in the brain, there is both communication between neurons and communication within neurons. We are considering only communication between neurons. 
This means we might underestimate communication taking place in the brain.\nOur impression is that essentially all long-distance communication in the brain takes place between neurons, and that such long-distance communication is a substantial fraction of the brain’s communication. The reasons for expecting communication to be a bottleneck—that the brain spends much matter and energy on it; that it is a large cost in large computers; and that algorithms which seem similar to the brain tend to suffer greatly from communication costs—also suggest that long distance communication alone is a substantial bottleneck.\nTraversing an edge is relevantly similar to spiking\nWe are assuming that a computer traversing an edge in a graph (as in the TEPS benchmark) is sufficient to functionally replicate a neuron spiking. This might not be true, for instance if the neuron spike sends more information than the edge traversal. This might happen if there were more perceptibly different times each second at which the neuron could send a signal. We could usefully refine the current estimate by measuring the information contained in neuron spikes and traversed edges.10\nDistributions of edges traversed don’t make a material difference\nThe distribution of edges traversed in the brain is presumably quite different from the one used in the TEPS benchmark. We are ignoring this, assuming that it doesn’t make a large difference to the number of edges that can be traversed. This might not be true, if for instance the ‘short’ connections in the brain are used more often. We know of no particular reason to expect this, but it would be a good thing to check in future.\nGraph characteristics are relevantly similar\nGraphs vary in how many nodes they contain, how many connections exist between nodes, and how the connections are distributed. If these parameters are quite different for the brain and the computers tested on the TEPS benchmark, we should be more wary interpreting computer TEPS performance as equivalent to what the brain does. For instance, if the brain consisted of a very large number of nodes with very few connections, and computers could perform at a certain level on much smaller graphs with many connections, then even if the computer could traverse as many edges per second, it may not be able to carry out the edge traversals that the brain is doing.\nHowever graphs with different numbers of nodes are more comparable than they might seem. Ten connected nodes with ten links each can be treated as one node with around ninety links. The links connecting the ten nodes are a small fraction of those acting as outgoing links, so whether the central ‘node’ is really ten connected nodes should make little difference to a computer’s ability to deal with the graph. The most important parameters are the number of edges and the number of times they are traversed.\nWe can compare the characteristics of brains and graphs in the TEPS benchmark. The TEPS benchmark uses graphs with up to 2 * 1012 nodes,11 while the human brain has around 1011 nodes (neurons). Thus the human brain is around twenty times smaller (in terms of nodes) than the largest graphs used in the TEPS benchmark.\nThe brain contains many more links than the TEPS benchmark graphs. TEPS graphs appear to have average degree 32 (that is, each node has 32 links on average),12 while the brain apparently has average degree around 3,600 – 6,400.13\nThe distribution of connections in the brain and the TEPS benchmark are probably different. 
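As a rough check on the node and degree figures above (a sketch only; it assumes each synapse is a single undirected edge, so that average degree is twice the number of edges divided by the number of nodes):

```python
# Rough check of the graph comparison above.
neurons = 1e11                     # nodes in the brain
synapses = (1.8e14, 3.2e14)        # edge-count range used on this page
benchmark_nodes = 2e12             # largest graphs in the TEPS benchmark

print(f"Benchmark graphs have ~{benchmark_nodes / neurons:.0f}x as many nodes as the brain")
for s in synapses:
    # Average degree of an undirected graph is 2 * edges / nodes (the assumption noted above).
    print(f"{s:.1e} synapses -> average degree ~{2 * s / neurons:,.0f}")
```

This reproduces the roughly twenty-fold difference in node count and the 3,600 – 6,400 average degree quoted above.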
Both are small-world distributions, with some highly connected nodes and some sparsely connected nodes, however we haven’t compared them in depth. The TEPS graphs are produced randomly, which should be a particularly difficult case for traversing edges in them (according to our understanding). If the brain has more local connections, traversing edges in it should be somewhat easier.\nWe expect the distribution of connections to make a small difference. In general, the time required to do a breadth first search depends linearly on the number of edges, and doesn’t depend on degree. The TEPS benchmark is essentially a breadth first search, so we should expect it basically have this character. However in a physical computer, degree probably matters somewhat. We expect that in practice that the cost scales with edges * log(edges), because the difficulty of traversing each edge should scale with log(edges) as edges become more complex to specify. A graph with more local connections and fewer long-distance connections is much like a smaller graph, so that too should not change difficulty much.\nHow many TEPS does the brain perform?\nWe can calculate TEPS performed by the brain as follows:\nTEPS = synapse-spikes/second in the brain\n= Number of synapses in the brain * Average spikes/second in synapses\n≈ Number of synapses in the brain * Average spikes/second in neurons\n= 1.8-3.2 x 10^14  *  0.1-2 \n= 0.18 – 6.4 * 10^14\nThat is, the brain performs at around 18-640 trillion TEPS.\nNote that the average firing rate of neurons is not necessarily equal to the average firing rate in synapses, even though each spike involves both a neuron and synapses. Neurons have many synapses, so if neurons that fire faster tend to have more or less synapses than slower neurons, the average rates will diverge. We are assuming here that average rates are similar. This could be investigated further.\nFor comparison, the highest TEPS performance by a computer is 2.3 * 10^13 TEPS (23 trillion TEPS)14, which according to the above figures is within the plausible range of brains (at the very lower end of the range).\nImplications\nThat the brain performs at around 18-640 trillion TEPS means that if communication is in fact a major bottleneck for brains, and also for computer hardware functionally replicating brains, then existing hardware can probably already perform at the level of a brain, or at least at one thirtieth of that level.\nCost of ‘human-level’ TEPS performance\nWe can also calculate the price of a machine equivalent to a brain in TEPS performance, given current prices for TEPS:\nPrice of brain-equivalence = TEPS performance of brain * price of TEPS\n= TEPS performance of brain/billion * price of GTEPS\n= 0.18 – 6.4 * 10^14/10^9 * $0.26/hour\n= $0.047 – 1.7 * 10^5/hour\n= $4,700 – $170,000/hour\nFor comparison, supercomputers seem to cost around $2,000-40,000/hour to run, if we amortize their costs across three years.15 So the lower end of this range is within what people pay for computing applications (naturally, since the brain appears to be around as powerful as the largest supercomputers, in terms of TEPS). The lower end of the range is still about 1.5 orders of magnitude more than what people regularly pay for labor. Though the highest paid CEOs appear to make at least $12k/hour.16\nTimespan for ‘human-level’ TEPS to arrive\nOur best guess is that TEPS/$ grows by a factor of ten every four years, roughly. 
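Putting the pieces above together, the whole chain of arithmetic — from synapse counts and firing rates to TEPS, dollars per hour, and years until a $100/hour machine — can be reproduced in a few lines. This is a sketch only; the inputs are the ranges used on this page, and the final step assumes the tenfold-every-four-years price trend just mentioned.

```python
import math

# Inputs from this page (low/high ends of the ranges).
synapses = (1.8e14, 3.2e14)          # edges in the brain's graph
spikes_per_second = (0.1, 2.0)       # average neuron firing rate, used as a proxy
price_per_gteps_hour = 0.26          # dollars, from the supercomputer estimate
target_price_per_hour = 100          # a "$100/hour human"
years_per_tenfold = 4                # assumed TEPS/$ trend

for s, r in zip(synapses, spikes_per_second):
    teps = s * r                                      # brain TEPS estimate
    cost_per_hour = teps / 1e9 * price_per_gteps_hour # convert to GTEPS, then to dollars
    years = math.log10(cost_per_hour / target_price_per_hour) * years_per_tenfold
    print(f"TEPS ~{teps:.2e}, cost ~${cost_per_hour:,.0f}/hour, "
          f"~{years:.0f} years to reach ${target_price_per_hour}/hour")
```

The low and high ends come out at roughly $4,700 and $170,000 per hour, and roughly seven and thirteen years, matching the figures in the text.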
Thus for computer hardware to compete on TEPS with a human who costs $100/hour should take about seven to thirteen years.17 We are fairly unsure of the growth rate of TEPS however.\n\n ", "url": "https://aiimpacts.org/brain-performance-in-teps/", "title": "Brain performance in TEPS", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-05-07T00:15:21+00:00", "paged_url": "https://aiimpacts.org/feed?paged=21", "authors": ["Katja Grace"], "id": "3784568adfaca4c2c073c344c0026cbf", "summary": []} {"text": "Glial Signaling\n\nThe presence of glial cells may increase the capacity for signaling in the brain by a small factor, but is unlikely to qualitatively change the nature or extent of signaling in the brain.\nSupport\nNumber of glial cells\nAzevado et al. physically count the number of cells in a human brain and find about 10¹¹ each of neurons and glial cells, suggesting that the number of glia is quite similar to the number of neurons.\nReferences to much larger numbers of glial cells appear to be common, but we could not track down the empirical research supporting these claims. For example the Wikipedia article on neuroglia states “In general, neuroglial cells are smaller than neurons and outnumber them by five to ten times,” and an article about glia in Scientific American opens “Nearly 90 percent of the brain is composed of glial cells, not neurons.” An informal blog post suggests that the factor of ten figure may be a popular myth, although that post also draws on Azevado et al. so should not be considered independent support.\nNature of glial signaling\nSandberg and Bostrom write: “…the time constants for glial calcium dynamics is generally far slower than the dynamics of action potentials (on the order of seconds or more), suggesting that the time resolution would not have to be as fine” (p. 36). This suggests that the computational role of glial cells is not too great.\nNewman and Zahs 1998 mechanically stimulate glial cells in a rat retina, and find that this stimulation results in slow-moving waves of increased calcium concentration.1 These calcium waves had an effect on neuron activity (see figure 4 in their paper, which also provides some indication concerning the characteristic timescale). For reference, these speeds are about a million times slower than action potential propagation (neuron firing). These figures support Sandberg and Bostrom’s claims, and as far as we are aware they are consistent with the broader literature on calcium dynamics.\nAstrocytes—a type of glial cell—take in information from action potentials (from neurons).2  There is some evidence that a small fraction of glia can generate action potentials, though such cells are “estimated to represent 5–10% of the cells” and so unlikely to substantially change calculations based on neurons.\nIt seems possible that further study or a more comprehensive survey of the literature would reveal other high-bandwidth signaling between glial cells, or that timescale-based estimates for the bandwidth of calcium signaling are too low, but at the moment we have little reason to suspect this.\nEnergy of glial signaling\nIf glia were performing substantially more computation than neurons, we would weakly expect them to consume more (or at least comparable) energy for a number of reasons:\n\nThe energy demands of the brain are very significant. 
If glia could perform comparable computation with much lower energy, we would expect them to predominate in terms of volume, whereas this does not seem to be the case.\nIt would be surprising if different computational elements in the brain exhibited radically different efficiency.\n\nHowever, the majority of energy in the brain is used to maintain resting potentials and propagate action potentials, for example a popularization in Scientific American summarizes “two thirds of the brain’s energy budget is used to help neurons or nerve cells “fire” or send signals.”\nAlthough we can imagine many possible designs on which glia would perform most of the information transfer in the brain while neurons provided particular kinds of special-purpose communication at great expense, this does not seem likely given our current understanding. This provides further mild evidence that the computational role of glial cells is unlikely to substantially exceed the role of neurons.", "url": "https://aiimpacts.org/glial-signaling/", "title": "Glial Signaling", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-04-16T23:29:27+00:00", "paged_url": "https://aiimpacts.org/feed?paged=21", "authors": ["Katja Grace"], "id": "746cb5fab12ed2269a8597b52ce1482b", "summary": []} {"text": "Scale of the Human Brain\n\nThe brain has about 10¹¹ neurons and 1.8-3.2 x 10¹⁴ synapses. These probably account for the majority of computationally interesting behavior.\nSupport\nNumber of neurons in the brain\nThe number of neurons in the brain is about 10¹¹. For instance, Azevado et al physically counted them and found 0.6-1 * 10¹¹. Eric Chudler has collected estimates from a range of textbooks, which estimate 1-2 x 10¹⁰ of these (10%-30%) are in the cerebral cortex.1\nNumber of synapses in the brain\nThe number of synapses in the brain is known much less precisely, but is probably about 10¹⁴. For instance Human-memory.net reports 10¹⁴-10¹⁵ (100 – 1000 trillion) synapses in the brain, with no citation or explanation. Wikipedia says the brain contains 100 billion neurons, with 7,000 synaptic connections each, for 7 x 10¹⁴ synapses in total, but this seems possibly in error.2\nNumber of synapses in the neocortex\nOne way to estimate of the number of synapses in the brain is to extrapolate from the number in the neocortex. According to stereologic studies that we have not investigated, there are around 1.4 x 10¹⁴ synapses in the neocortex.3 This is roughly consistent with Eric Chudler’s summary of textbooks, which gives estimates between 0.6-2.4 x 10¹⁴ for the number of synapses in the cerebral cortex.4\nWe are not aware of convincing estimates for synaptic density outside of the cerebral cortex, and our impression is that widely reported estimates of 10¹⁴ are derived from the assumption that the neocortex contains the great bulk of synapses in the brain. This seems plausible given the large volume of the neocortex, despite the fact that it contains a minority of the brain’s neurons. By volume, around 80% of the human brain is neocortex.5 The neocortex also consumes around 44% of the brain’s total energy, which may be another reasonable indicator of the fraction of synapses in contains.6 So our guess is that the number of synapses in the entire brain is somewhere between 1.3 and 2.3 times the number in the cerebral cortex. 
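The whole-brain synapse figure is then just a multiplication (a quick check of the arithmetic, nothing more):

```python
cortical_synapses = 1.4e14            # stereologic estimate for the neocortex, cited above
scaling = (1.3, 2.3)                  # guessed whole-brain / cortex ratio
low, high = (cortical_synapses * f for f in scaling)
print(f"Whole-brain synapses: ~{low:.1e} to ~{high:.1e}")   # ~1.8e14 to ~3.2e14
```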
From above, the cerebral cortex contains around 1.4 x 10¹⁴ synapses, so this gives us 1.8-3.2 x 10¹⁴ total synapses.\nNumber of synapses per neuron\nThe number of synapses per neuron varies considerably. According to Wikipedia, the majority of neurons are cerebellum granule cells, which have only a handful of synapses, while the statistics above suggest that the average neuron has around 1,000 synapses. Purkinje cells have up to 200,000 synapses.7\nNumber of glial cells in the brain\nMain article: Glial signaling\nAzevado et al aforementioned investigation finds about 10¹¹ glial cells (the same as the number of neurons).\nRelevance of cells other than neurons to computations in the brain\nMain article: Glial signaling\nIt seems that the timescales of glial dynamics are substantially longer than for neuron dynamics. Sandberg and Bostrom write: “However, the time constants for glial calcium dynamics is generally far slower than the dynamics of action potentials (on the order of seconds or more), suggesting that the time resolution would not have to be as fine” (p. 36). This suggests that the computational role of glial cells is not too great. References to much larger numbers of glial cells appear to be common, but we were unable to track down any empirical research supporting these claims. An informal blog post suggests that a common claim that there are ten times as many glial cells as neurons may be a popular myth.\nWe are not aware of serious suggestions that cells other than neurons or glia play a computationally significant role in the functioning of the brain.\n\n ", "url": "https://aiimpacts.org/scale-of-the-human-brain/", "title": "Scale of the Human Brain", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-04-16T23:00:46+00:00", "paged_url": "https://aiimpacts.org/feed?paged=21", "authors": ["Katja Grace"], "id": "cf1d19524cc8577d09aa77101c16deda", "summary": []} {"text": "Neuron firing rates in humans\n\nOur best guess is that an average neuron in the human brain transmits a spike about 0.1-2 times per second.\nSupport\nBias from neurons with sparse activity\nWhen researchers measure neural activity, they can fail to see neurons which rarely fire during the experiment (those with ‘sparse’ activity).1 Preferentially recording more active neurons means overestimating average rates of firing. The size of the bias seems to be around a factor of ten: it appears that around 90% of neurons are ‘silent’, so unlikely to be detected in these kinds of experiments. This suggests that many estimates should be scaled down by around a factor of around ten.\nAssorted estimates\nInformal estimates\nInformal websites and articles commonly report neurons as firing between <1 and 200 times per second.2 These sources lack references and are not very consistent, so we do not put much stock in them.\nEstimates of rate of firing in human neocortex\nBased on the energy budget of the brain, it appears that the average cortical neuron fires around 0.16 times per second. It seems unlikely that the average cortical neuron spikes much more than once per second.\nThe neocortex is a large part of the brain. It accounts for around 80% of the brain’s volume3, and uses 44% of its energy4. It appears to hold at least a third of the brain’s synapses if not many more5. Thus we might use rates of firing of cortical neurons as a reasonable proxy for normal rates of neuron firing in the brain. 
We can also do a finer calculation.\nWe might roughly expect energy used by the brain to scale in proportion both to the spiking rate of neurons and to volume. This is because the energy required for every neuron to experience a spike scales up in proportion to the surface area of the neurons involved6, which we expect to be roughly proportional to volume.\nSo we can calculate:\nenergy(cortex) = volume(cortex) * spike_rate(cortex) * c\nenergy(brain) = volume(brain) * spike_rate(brain) * c\nFor c a constant.\nThus,\nenergy(cortex)/energy(brain) = volume(cortex) * spike_rate(cortex)/volume(brain) * spike_rate(brain)\nFrom figures given above then, we can estimate:\n0.44 = 0.8 * 0.16/spike_rate(brain)\nspike_rate(brain) = 0.8 * 0.16 /0.44 = 0.29\nOr for a high estimate:\n0.44 = 0.8 * 1/spike_rate(brain)\nspike_rate(brain) = 0.8 * 1 /0.44 = 1.82\nSo based on this rough extrapolation from neocortical firing rates, we expect average firing rates across the brain to be around 0.29 per second, and probably less than 1.82 per second. This has been a very rough calculation however, and we do not have great confidence in these numbers.\nEstimates of rate of firing in non-human visual cortex\nA study of macaque and cat visual cortex found rates of neural firing averaging 3-4 spikes per second for cats in different conditions, and 14-18 spikes per second for macaques. A past study found 9 spikes per second for cats.7 It is hard to know how these estimates depend on the region being imaged and on the animal being studied, which significantly complicates extracting conclusions from these results. Furthermore, these studies appear to be subject to the bias discussed above, from only sampling visually responsive cells. Thus they probably overestimate overall neural activity by something like a factor of ten. This suggests figures in the 0.3-1.8 range, consistent with estimates from the neocortex. Note that the visual cortex is part of the neocortex, so this increases our confidence in our estimates for that, without reducing our uncertainty about the rest of the brain.\nMaximum neural firing rates\nThe ‘refractory period’ for a neuron is the time after it fires during which it either can’t fire again (‘absolute refractory period’) or needs an especially large stimulus to fire again (‘relative refractory period’). According to physiologyweb.com, absolute refractory periods tend to be 1-2ms and relative refractory periods tend to be 3-4ms.8 This implies than neurons are generally not capable of firing at more than 250-1000 Hz. This is suggestive, however the site does not say anything about the distribution of maximum firing rates for different types of neurons, so the mean firing rate could in principle be much higher.\nConclusions\nInformal estimates place neural firing rates in the <1-200Hz range. Estimates from energy use in the neocortex suggests a firing rate of 0.16Hz in the neocortex, which suggests around 0.29Hz in the entire brain, and probably less than 1.8Hz, though we are not very confident in our estimation methodology here. We saw animal visual cortex firing rates in the 3-18Hz range, but these are probably an order of magnitude too high due to bias from recording active neurons, suggesting real figures of 0.3-1.8 Hz, which is consistent with the estimates from the neocortex previously discussed. Neuron refractory periods (recovery times) suggest 1000Hz is around as fast as a normal neuron can possibly fire. 
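That ceiling is just the reciprocal of the refractory period (a minimal check, using the 1-4 ms figures quoted above):

```python
# Maximum sustained firing rate implied by the refractory period.
for refractory_ms in (1, 2, 3, 4):   # absolute: 1-2 ms, relative: 3-4 ms, as quoted above
    print(f"{refractory_ms} ms refractory period -> at most "
          f"{1000 / refractory_ms:.0f} spikes per second")
```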
Combined with the observation that 90% of neurons rarely fire, this suggests 100Hz as a high upper bound on the average firing rate. However this does not tell us about unusual neurons, of which there might be many.\nSo we have two relatively weak lines of reasoning suggesting average firing rates of around 0.1Hz-2Hz. These estimates are low compared to the range of informal claims. However the informal claims appear to be unreliable, especially given that two are higher than our upper bound on neural firing rates (though these are also unreliable). 0.1-2Hz is also low compared to these upper bounds, as it should be. Thus our best guess is that neurons fire at 0.1-2Hz on average.\nNotes", "url": "https://aiimpacts.org/rate-of-neuron-firing/", "title": "Neuron firing rates in humans", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-04-14T18:57:50+00:00", "paged_url": "https://aiimpacts.org/feed?paged=21", "authors": ["Katja Grace"], "id": "92e7612cbeb40598b848f1da05d2398e", "summary": []} {"text": "Metabolic Estimates of Rate of Cortical Firing\n\nCortical neurons are estimated to spike around 0.16 times per second, based on the amount of energy consumed by the human neocortex.1 They seem unlikely to spike much more than once per second on average, based on this analysis.\nSupport\nEnergy spent on spiking\nLennie 2003 estimates the rate of neuron firing in the cortex based on estimates for energy spent on Na/K ion pumps during spikes, and the energy required by Na/K ion pumps per spike.\nLennie produces estimates for energy consumed in three parts:\n\nEstimates for adenosine triphosphate (ATP) molecules consumed by the neocortex: According to brain scans, glucose is metabolized at a rate of about 0.40 micro mol/g/min. Each glucose molecule yields around 30 molecules of ATP. This suggests that the entire cortex consumes 3.4 * 10²¹ molecules of ATP per minute.2 Note that ATP’s function is as energy source, so this is a measure of how much energy the neocortex uses.\nEstimates for the fraction of this ATP used to maintain ion balances: If you inactivate Na/K ion pumps with the drug ouabain, this reduces energy consumption by 50%, suggesting that these ion pumps use about half of the cortex’s energy. 3 This gives us 1.7 * 10²¹ molecules of ATP per minute being used to maintain ion balances.\nEstimates for the fraction of ion balancing ATP used in spikes: Maintaining resting potentials (not part of spiking) in all neurons costs 1.3 x 10²¹ ATP molecules per minute. This leaves 3.9 * 10²⁰ ATP per minute for spiking.4\n\nHowever, other authors report higher fractions of cortical energy are spent on spiking. Laughlin 2001 writes that spiking accounts for 80% of total energy consumption in mammalian cortex.5 Other work by Laughlin and Attwell, which is a primary source for Lennie’s estimates, reports that spiking consumes around 47% of energy.6\nOur understanding is that the difference can be attributed to differences between the rodent brain and the human brain, and the scaling estimates from one to the other. We are not particularly confident in this methodology.\nEnergy per spike\nAccording to Lenny, each spike consumes around 2.4 * 10⁹ molecules of ATP.7 This estimate is produced by scaling up estimates for the rat brain.8 The estimates for the rat brain were inferred from ‘anatomic and physiologic data’, which we have not scrutinized.9  We are not particularly confident in this scaling methodology. 
These estimates appear to be produced by counting ion channels and applying detailed knowledge of the mechanics of ion channels (which consume a roughly fixed amount of ATP per transported molecule).\nSpikes per neuron per second\nWe saw above that the cortex uses 3.9 * 10²⁰ ATP/minute for spiking, and that each spike consumes around 2.4 * 10⁹ molecules of ATP. So the cortex overall has around 2.7 * 10⁹ spikes per second.  There are 1.9 *10¹⁰ neurons in the cortex, so together we can calculate that these neurons produce around 0.16 spikes per second on average.10\nEven assuming that essentially all of the energy in the brain is spent on signaling, this would introduce a bias of only a factor of 8 in Lennie’s estimates. On page S1 Lennie presents an analysis of other possible sources of error, and overall it seems unlikely to us that the estimate is too low by more than an order of magnitude or so.\n \n\n ", "url": "https://aiimpacts.org/metabolic-estimates-of-rate-of-cortical-firing/", "title": "Metabolic Estimates of Rate of Cortical Firing", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-04-10T17:47:11+00:00", "paged_url": "https://aiimpacts.org/feed?paged=21", "authors": ["Katja Grace"], "id": "98b3553f910e0f7151d55ecbb4f6469a", "summary": []} {"text": "Preliminary prices for human-level hardware\n\nBy Katja Grace, 4 April 2015\nComputer hardware has been getting cheap now for about seventy five years. Relatedly, large computing projects can afford to be increasingly large. If you think the human brain is something like a really impressive computer1, then a natural question is ‘what happens when big projects can afford to use as much computation as the brain?’\nOne possibility is that we get something like human-level AI then and there. Maybe the brain just uses easy algorithmic ideas, and as soon as someone gets enough hardware together and tries a bit on the brain-like software, the creation will be about as good as a human brain.\nAnother possibility is that sufficient hardware to simulate a brain is basically irrelevant. On this story, the brain has big algorithmic secrets we don’t know about, and it would take unimaginable warehouses of hardware to replace these insights with brute force computation.\nMost possibilities lie somewhere between these two, where hardware and software are both somewhat helpful. In these cases, knowing when we will have enough hardware to run a brain isn’t everything, but is informative. For more discussion of how to model this situation, see our older post.\nThere doesn’t seem to be consensus on how important hardware is. But given the range of possibilities, whenever ‘human-level hardware’ arrives seems like a disproportionately likely time for human-level AI to arrive.\nFor this reason, we want to know when this human-level hardware point is. Which means we want to know the price of hardware, how fast that price is falling, how much hardware you need to do what the human brain does, and how much anyone is likely to pay for that. Ideally, we would like a few relatively independent estimates of several of these things.\nWe’ve just been checking the prices, and price trends, so this seems like a good time to pause and see what they imply when combined with some existing estimates of the brain’s computational requirements. 
Later we will explore these estimates in more detail, and hopefully add at least one based on measuring TEPS.\nMoravec estimates that the brain performs around 100 million MIPS.2 MIPS are not directly comparable to MFLOPS (millions of FLOPS), and have deficiencies as a measure, but the empirical relationship in computers is something like MFLOPS = 2.3 x MIPS^0.89, according to Sandberg and Bostrom (2008).3 This suggests Moravec’s estimate coincides with around 3.0 x 10¹³ FLOPS. Given that an order of magnitude increase in computing power per dollar corresponds to about four years of time, knowing that MFLOPS and MIPS are roughly comparable is plenty of precision.\nWe estimated FLOPShours cost around $10^-13 each. Thus if we want to run a brain, putting these figures together, it would cost us around $3/hour!\nThat is, if Moravec’s estimate was right, and my conversion of it to FLOPS was basically right, and our prices were right, and hardware mattered a lot more than software, we would already be in the robot revolution.4 How informative!\nOur prices are pretty consistent with the latest extrapolation in Moravec’s 1997 graph (see Figure 1).5 However his graph doesn’t reach human-equivalence until around 2020, and he predicts such computers appear around then.6 This difference from our reading of his figures is because his graph is of what can be bought with $1,000, whereas we expect someone would build an AI by the time it cost around $1M at the latest.7 So our threshold of ‘affordable’ is three orders of magnitude more expensive than his, and thus a bit over a decade earlier. This seems to be a disagreement about the price at which it becomes economically viable to replace a human, but we do not understand his position8 well. A ‘brain’ that costs $1000, and runs for a few years, let’s say doing useful work for only 40h/week, is costing $0.16/hour! Yet it can earn at least several hundred times more than that. There are other costs to laboring than having a virtual brain, but not that many.\nFigure 1: The growth of cheap MIPS, from Moravec.\nIncidentally, by 2009, Moravec lengthened his prediction to 20-30 years to ‘close the gap’, seemingly due to cost reductions abating since 1990.9 We don’t know of evidence of this slowing however, and as mentioned, it seems recent hardware prices are in line with his earlier estimates.\nMoravec’s is not the only estimate of computation done by the brain. Sandberg and Bostrom (2008) project the processing required to emulate a human brain at different levels of detail.10 For the three levels that their workshop participants considered most plausible, their estimates are 10¹⁸, 10²², and 10²⁵ FLOPS. These would presently cost around $100K/hour, $1bn/hour and $1T/hour.\nWe estimated that FLOPS are coming down in price by roughly an order of magnitude roughly every four years, so if that is about right and continues, these figures suggest that an AI might compete with a $100/hour human in 12 years, 28 years or 40 years respectively: between 2027 and 2055.\nIncidentally, Sandberg and Bostrom predicted a $1M project could purchase enough hardware for a brain between 2019 and 2044 or between 2042 and 2087, depending on whether one is extrapolating retail computing or supercomputing price trends. So our extrapolation of their figures based on recent data is basically in line with theirs. 
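For reference, the conversions and timelines above can be reproduced as follows. This is a sketch; it uses the ~$10^-13/FLOPShour price and the ten-fold-every-four-years trend described above, and the MFLOPS = 2.3 x MIPS^0.89 fit attributed to Sandberg and Bostrom.

```python
import math

price_per_flops_hour = 1e-13          # ~$10^-13 per FLOPShour, the estimate used above
years_per_tenfold = 4                 # assumed FLOPS/$ trend
target = 100                          # dollars/hour, the "$100/hour human"

# Moravec: ~100 million MIPS, converted with the empirical fit MFLOPS = 2.3 * MIPS^0.89.
moravec_flops = 2.3 * (1e8 ** 0.89) * 1e6
print(f"Moravec: ~{moravec_flops:.1e} FLOPS -> "
      f"~${moravec_flops * price_per_flops_hour:.0f}/hour today")

# Sandberg and Bostrom's three most-favoured emulation levels.
for flops in (1e18, 1e22, 1e25):
    cost = flops * price_per_flops_hour
    years = math.log10(cost / target) * years_per_tenfold
    print(f"{flops:.0e} FLOPS -> ~${cost:,.0f}/hour now, ~{years:.0f} years to ${target}/hour")
```

This gives roughly $3/hour for Moravec's figure, and roughly 12, 28 and 40 years for the three emulation levels, i.e. the 2027-2055 range above.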
Figure 2 shows roughly where our 2015 FLOPS/$ figures fall on their graph of MFLOPS.\nFigure 2: Growth in MFLOPS/$ according to Sandberg and Bostrom (p88), annotated with our recent estimate of cheap FLOPS in 2015.\nAnother estimate of human-level hardware comes from Kurzweil. In The Singularity is Near, he claimed that a human brain required 10¹⁶ calculations per second, which appears to be roughly equivalent to 10¹⁶ FLOPS.11 This would put the current price of human-level hardware at $1000/hour, beginning to be competitive with some highly expensive people, and reaching the $100/hour people in about four years.12\nSo it seems human-level hardware presently costs between $3/hour and $1T/hour. And the date when we will reach it lies somewhere in a half-century long haze. We have already stepped into the haze: of the five dates we saw, we have passed one, and are reaching another. This is important, because the era of maybe-human-level hardware has a heightened chance of bringing human-level AI.\nIt is also important to dispel the haze. If we knew there was a substantially heightened chance of human-level AI in the next ten years, we might act differently from how we would if we knew hardware would be lacking for forty years. Also, if we were sure we had uneventfully passed human-level hardware, then we could exclude the possibility that hardware was overwhelmingly important, and start on excluding nearby possibilities (e.g. that hardware is pretty important). Knowing better how much difference hardware makes would change our other expectations about the arrival of human-level AI. Though human-level hardware coming sooner would tend to shorten our AI timelines, if we knew that we had already thoroughly passed it we might expect longer timelines.\nThis has been a preview of our investigation into these things. The next steps will hopefully resolve some ambiguity around how much computation a brain does, or at least add some more to be resolved later.\n***\n(Featured image: Pyramidal hippocampal neuron 40x by MethoxyRoxy)", "url": "https://aiimpacts.org/preliminary-prices-for-human-level-hardware/", "title": "Preliminary prices for human-level hardware", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-04-04T10:09:37+00:00", "paged_url": "https://aiimpacts.org/feed?paged=22", "authors": ["Katja Grace"], "id": "af846be154c907126d8474d904adb4ff", "summary": []} {"text": "Current FLOPS prices\n\nIn November 2017, we estimate the price for one GFLOPS to be between $0.03 and $3 for single or double precision performance, using GPUs (therefore excluding some applications). Amortized over three years, this is around $1.1 x 10^-6 – $1.1 x 10^-4/GFLOPShour.\nBackground\nWe have written about long term trends and short term trends in the costs of computing hardware. We are interested in evaluating the current prices more thoroughly, both to validate the trend data, and because current hardware prices are particularly important to know about.\nDetails\nWe separately investigated CPUs, GPUs, computing as a service, and supercomputers. We used somewhat different methods to estimate the price in these categories, based on the data available. We did not find any definitive source on the most cost-effective option in any category, or in general, so our examples are probably not the very cheapest. Nevertheless, these figures give a crude sense for the cost of computation in the contemporary market. 
Our data is here.\nIncluded costs\nFor CPUs and GPUs, we include only the original recommended retail price of the CPU or GPU, and not other computer components (i.e. we do not even include the cost of CPUs in the price of GPUs). In 2015 we compared prices between one complete rack server and the set of four processors inside it, and found the complete server was around 36% more expensive ($30,000 vs. $22,000). We expect this is representative at this scale, but diminishes with scale.\nFor computing services, we list the cheapest price for renting the instance for a long period, with no additional features. We do not include spot prices.\nFor supercomputers, we list costs cited where we could find them, which don’t tend to come with elaboration. We expect that they only include upfront costs, and that most of the costs are for hardware.\nWe have not included the costs of energy or other ongoing expenses in any prices. Non-energy costs are hard to find, and we suspect a relatively small and consistent fraction of costs. In 2015 we estimated energy costs to be around 10% of hardware costs.1\nFLOPS measurements\nWe are interested in empirical performance figures from benchmark tests, but often the only data we could find was for theoretical maximums. We try to use figures for LINPACK and sometimes for DGEMM benchmarks, depending on which are available. LINPACK relies heavily on DGEMM, suggesting DGEMM is fairly comparable.2\nPrices\nGraphics processing units (GPUs) and Xeon Phi machines\nWe collected performance and price figures from Wikipedia3, which are available here (see ‘Wikipedia GeForce, Radeon, Phi simplified’). These are theoretical performance figures, which we understand to generally be between somewhat optimistic and ten times too high. So this data suggests real prices of around $0.03-$0.3/GFLOPS. We collected both single and double precision figures, but the cheapest were similar.\nNote that GPUs are typically significantly restricted in the kinds of applications they can run efficiently; this performance is achieved for highly regular computations that can be carried out in parallel throughout a GPU (of the sort that are required for rendering scenes, but which have also proved useful in scientific computing). Xeon Phi units are similar to GPUs, and have broader application,4 but in this dataset were not among the cheapest machines.\nCentral processing units (CPUs)\nWe looked at a small number of popular CPUs on Geekbench from the past five years, and found the cheapest to be around $0.71/GFLOPS.5 However there appear to be 5x disparities between different versions of Geekbench, so we do not trust these numbers a great deal (these figures are from the version we have seen to give relatively high performance figures, and thus low implied prices).\nWe did not investigate these numbers in great depth, or search far for cheaper CPUs, because CPUs seem to be expensive relative to GPUs, and this minimal investigation, plus our previous investigation in 2015, support this.\nComputing as service\nAnother way to purchase FLOPS is via virtual computers.\nAmazon Elastic Cloud Compute (EC2) is a major seller of virtual computing. 
Based on their current pricing, as of October 5th, 2017, renting a c4.8xlarge instance costs $0.621 per hour (if you purchase it for three years, and pay upfront).\nAccording to a Geekbench report  from 2015, a c4.8xlarge instance delivers around 97.5 GFLOPS.6 We do not know if ‘c4.8xlarge’ referred to the same computing hardware in 2015, and we do know that the current version of Geekbench gives substantially different answers to the one in use here. However we estimate that the hardware should be less than twice as good as it was, and Geekbench seems unlikely to underestimate performance by more than an order of magnitude.\nThis implies that a GFLOPShour costs $6.3 x 10-3 , or optimistically as little as $3.2 x 10-4 . This is much higher than a GPU, at $3.4 x 10-6 for a GFLOPShour, if we suppose the hardware is used over around three years. Amazon is probably not the cheapest provider of cloud computing, however the difference seems to be something like a factor of two,7 which is not enough to make cloud computing competitive with GPUs.\nIn sum, virtual computing appears to cost two to three orders of magnitude more than GPUs. This high price is presumably partly because there are non-hardware costs which we have not accounted for in the prices of buying hardware, but are naturally included in the cost of renting it. However it is unlikely that these additional costs make up a factor of one hundred to one thousand, so cloud computing does not seem competitive.\nSupercomputing\nA top supercomputer can perform a GFLOPS for around $3, in 2017. (See Price performance trend in top supercomputers)\nTensor processing units (TPUs)\nTensor processing units appear to perform a GFLOPS for around $1, in February 2018. However it is unclear how this GFLOPS is measured, which makes it somewhat harder to compare (e.g. whether it is single precision or double precision). Such a high price is also at odds with rumors we have heard that TPUs are an especially cheap source of computing, so possibly TPUs are more efficient for a particular set of applications other than the ones where most of these machines have been measured.\nFurther considerations\nIn 2015, we estimated GPUs to cost around $3/GFLOPS, i.e. 10-100 times more than we would currently estimate. We do not believe that there has been nearly that much improvement in the past two years, so this discrepancy must be due to error and noise. We remain uncertain about the source of all of the difference, so until we resolve that question, it is plausible that our current GPU estimate errs. 
If so, the price should still be no higher than $3/GFLOPS (our previous estimate, and our current estimate for supercomputer prices).\nSummary\nThe lowest estimated GFLOPS prices we know of are $0.03-$3/GFLOPS, for GPUs and TPUs.\nThis is a summary of all of the prices we found:\n\nType of computer | Source | Type of performance | Current price ($/GFLOPS) | Comments\nGPUs and Xeon Phi (single precision) | Wikipedia | Theoretical peak | 0.03-0.3 | $0.03/GFLOPS is given, but is an underestimate\nGPUs and Xeon Phi (double precision) | Wikipedia | Theoretical peak | 0.3-0.8 | Upward sloping; probably not optimized for (in GPUs)\nCloud | Amazon EC2 and Geekbench | Empirical | 158 | Expensive so less relevant; shallow investigation\nSupercomputing | Top500 and misc prices | Empirical | 2.94 | Expensive, so less relevant; shallow investigation\nCPUs | Geekbench and misc prices | Empirical | 0.71 | Unreliable, 5x disagreements between Geekbench versions\nTPUs | Google Cloud Platform Blog | Unclear | 0.95 |\n\nNotes", "url": "https://aiimpacts.org/current-flops-prices/", "title": "Current FLOPS prices", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-04-02T05:16:13+00:00", "paged_url": "https://aiimpacts.org/feed?paged=22", "authors": ["Katja Grace"], "id": "30d374c6542b27412c7e9ca66a06d2c1", "summary": []} {"text": "The cost of TEPS\n\nA billion Traversed Edges Per Second (a GTEPS) can be bought for around $0.26/hour via a powerful supercomputer, including hardware and energy costs only. We do not know if GTEPS can be bought more cheaply elsewhere.\nWe estimate that available TEPS/$ grows by a factor of ten every four years, based on the relationship between TEPS and FLOPS. TEPS have not been measured enough to see long-term trends directly.\nBackground\nTraversed edges per second (TEPS) is a measure of computer performance, similar to FLOPS or MIPS. Relative to these other metrics, TEPS emphasizes the communication capabilities of machines: the ability to move data around inside the computer. Communication is especially important in very large machines, such as supercomputers, so TEPS is particularly useful in evaluating these machines.\nThe Graph 500 is a list of computers which have been evaluated according to this metric. It is intended to complement the Top 500, which is a list of the most powerful 500 computers, measured in FLOPS. The Graph 500 began in 2010, and so far has measured 183 machines, though many of these are not supercomputers, and would presumably not rank among the best 500 TEPS scores if more supercomputers were measured.\nThe TEPS benchmark is defined as the number of graph edges traversed per second during a breadth-first search of a very large graph. The scale of the graph is tuned to grow with the size of the hardware. See the Graph500 benchmarks page for further details.\nThe brain in TEPS\nWe are interested in TEPS in part because we would like to estimate the brain’s capacity in terms of TEPS, as an input to forecasting AI timelines. One virtue of this is that it will be a relatively independent measure of how much hardware the human brain is equivalent to, which we can then compare to other estimates. It is also easier to measure information transfer in the brain than computation, making this a more accurate estimate. We also expect that at the scale of the brain, communication is a significant bottleneck (much as it is for a supercomputer), making TEPS a particularly relevant benchmark. 
The brain’s contents support this theory: much of its mass and energy appears to be used on moving information around.\nCurrent TEPS available per dollar\nWe estimate that a TEPS can currently be produced for around $0.26 per hour in a supercomputer.\nOur estimate\nTable 1 shows our calculation, and sources for price figures.\nWe recorded the TEPS scores for the top eight computers in the Graph 500 (i.e. the best TEPS-producing computers known). We searched for price estimates for these computers, and found five of them. We assume these prices are for hardware alone, though this was not generally specified. The prices are generally from second-hand sources, and so we doubt they are particularly reliable.\nEnergy costs\nWe took energy use figures for the five remaining computers from the Top 500 list. Energy use on the Graph 500 and Top 500 benchmarks are probably somewhat different, especially because computers are often scaled down for the Graph 500 benchmark. See ‘Bias from scaling down’ below for discussion of this problem. There is a Green Graph 500 list, which gives energy figures for some of the supercomputers doing similar problems to those in the Graph 500, but the computers are run at different scales there to in the Graph 500 (presumably to get better energy ratings), so the energy figures given there are also not directly applicable.\nThe cost of electricity varies by location. We are interested in how cheaply one can produce TEPS, so we suppose computation is located somewhere where power is cheap, charged at industrial rates. Prevailing energy prices in the US are around $0.20 / kilowatt hour, but in some parts of Canada it seems industrial users pay less than $0.05 / kilowatt hour. This is also low relative to industrial energy prices in various European nations (though these nations too may have small localities with cheaper power). Thus we take $0.05 to be a cheap but feasible price for energy.\nBias from scaling down\nNote that our method likely overestimates necessary hardware and energy costs, as many computers do not use all of their cores in the Graph 500 benchmark (this can be verified by comparing to cores used in the Top 500 list compiled at the same time). This means that one could get better TEPS/$ prices by just not building parts of existing computers. It also means that the energy used in the Graph 500 benchmarking (not listed) was probably less than that used in the Top 500 benchmarking.\nWe correct for this by scaling down prices according to cores used. This is probably not a perfect adjustment: the costs of building and running a supercomputer are unlikely to be linear in the number of cores it has. However this seems a reasonable approximation, and better than making no adjustment.\nThis change makes the data more consistent. The apparently more expensive sources of TEPS were using smaller fractions of their cores (if we assume they used all cores in the Graph 500), and the very expensive Tianhe-2 was using only 6% of its cores. Scaled according to the fraction of cores used in Graph 500, Tianhe-2 produces TEPShours at a similar price to Sequoia. The two apparently cheapest sources of TEPShours (Sequoia and Mira) appear to have been using all of their cores. 
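The core-fraction adjustment itself is simple (a sketch; the Tianhe-2 price is the unscaled figure reported in Table 1 below, and the 6% core figure is the one mentioned above):

```python
# Naive adjustment for machines that used only part of their cores in the Graph 500 run.
unscaled_price_per_gteps_hour = 4.75    # Tianhe-2, unscaled, from Table 1 below
fraction_of_cores_used = 0.06           # ~6% of its cores, as noted above
scaled = unscaled_price_per_gteps_hour * fraction_of_cores_used
print(f"Scaled price: ~${scaled:.2f}/GTEPShour")
# ~$0.29; Table 1 reports $0.30 using the exact core fraction, close to Sequoia's $0.26.
```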
Figure 1 shows the costs of TEPShours on the different supercomputers, next to the costs when scaled down according to the fraction of cores that were used in the Graph 500 benchmark.\nFigure 1: Cost of TEPShours using five supercomputers, and cost naively adjusted for fraction of cores used in the benchmark test.\nOther costs\nSupercomputers have many costs besides hardware and energy, such as property, staff and software. Figures for these are hard to find. This presentation suggests the total cost of a large supercomputerover several years can be more than five times the upfront hardware cost. However these figures seem surprisingly high, and we suspect they are not applicable to the problem we are interested in: running AI. High property costs are probably because supercomputers tend to be built in college campuses. Strong AI software is presumably more expensive than what is presently bought, but we do not want to price this into the estimate. Because the figures in the presentation are the only ones we have found, and appear to be inaccurate, we will not further investigate the more inclusive costs of producing TEPShours here, and focus on upfront hardware costs and ongoing energy costs.\nSupercomputer lifespans\nWe assume a supercomputer lasts for five years. This was the age of Roadrunner when decommissioned in 2013, and is consistent with the ages of the computers whose prices we are calculating here — they were all built between 2011 and 2013. ASCI Red lasted for nine years, but was apparently considered ‘supercomputing’s high-water mark in longevity‘. We did not find other examples of large decommissioned supercomputers with known lifespans.\nCalculation\nFrom all of this, we calculate the price of a GTEPShour in each of these systems, as shown in table 1.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nName\nGTeps\nEstimated Price (million)\nHardware cost/hour (5 year life)\nEnergy (kW)\nHourly energy cost (at 5c/kWh)\nTotal $/hour\n(including hardware and energy)\n$/GTEPShours\n(including hardware and energy)\n$/GTEPShours scaled by cores used\nCost sources\n\n\nDOE/NNSA/LLNL Sequoia (IBM – BlueGene/Q, Power BQC 16C 1.60 GHz)\n23751\n$250\n$5,704\n7,890.00\n$394.50\n6,098.36\n$0.26\n $0.26\n1\n\n\nK computer (Fujitsu – Custom supercomputer)\n19585.2\n$1,000\n$22,815\n12,659.89\n$632.99\n23,448.42\n$1.20\n $1.13\n2\n\n\nDOE/SC/Argonne National Laboratory Mira (IBM – BlueGene/Q, Power BQC 16C 1.60 GHz)\n14982\n$50\n$1,141\n3,945.00\n$197.25\n1,338.02\n$0.09\n$0.09\n3\n\n\nTianhe-2 (MilkyWay-2) (National University of Defense Technology – MPP)\n2061.48\n$390\n$8,898\n17,808.00\n$890.40\n9,788.42\n$4.75\n$0.30\n4\n\n\nBlue Joule (IBM – BlueGene/Q, Power BQC 16C 1.60 GHz)\n1427\n$55.3\n$1,262\n657.00\n$32.85\n1,294.54\n$0.91\n $0.46\n5\n\n\n\nTable 1: Calculation of costs of a TEPS over one hour in five supercomputers.\nSequoia as representative of cheap TEPShours\nMira and then Sequoia produce the cheapest TEPShours of the supercomputers investigated here, and are also the only ones which used all of their cores in the benchmark, making their costs less ambiguous. Mira’s costs are ambiguous nonetheless, because the $50M price estimate we have was projected by an unknown source, ahead of time. Mira is also known to have been bought using some part of a $180M grant. If Mira cost most of that, it would be more expensive than Sequoia. 
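For concreteness, the Sequoia row of Table 1 can be reproduced as follows (a sketch using the same assumptions as the table: a five-year hardware life and $0.05/kWh electricity):

```python
# Sequoia, per Table 1.
hardware_cost = 250e6                 # dollars
lifetime_hours = 5 * 365.25 * 24      # five-year life assumed above
power_kw = 7890.0
electricity = 0.05                    # dollars per kWh
gteps = 23751

hourly_hardware = hardware_cost / lifetime_hours      # ~$5,704/hour
hourly_energy = power_kw * electricity                # ~$394.50/hour
price_per_gteps_hour = (hourly_hardware + hourly_energy) / gteps
print(f"~${price_per_gteps_hour:.2f} per GTEPShour")  # ~$0.26
```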
Sequoia’s price was given by the laboratory that bought it, after the fact, so is more likely to be reliable.\nThus while Sequoia does not appear to be the cheapest source of TEPS, it does appear to be the second cheapest, and its estimate seems substantially more reliable. Sequoia is also a likely candidate to be especially cheap, since it is ranked first in the Graph 500, and is the largest of the IBM Blue Gene/Qs, which dominate the top of the Graph 500 list. This somewhat supports the validity of its apparent good price performance here.\nSequoia is also not much cheaper than the more expensive supercomputers in our list, once they are scaled down according to the number of cores they used on the benchmark (see Table 1), further supporting this price estimate.\nThus we estimate that GTEPShours can be produced for around $0.26 on current supercomputers. This corresponds to around $11,000/GTEP to buy the hardware alone.\nPrice of TEPShours in lower performance computing\nWe have only looked at the price of TEPS in top supercomputers. While these produce the most TEPS, they might not be the part of the range which produces TEPS most cheaply. However because we are interested in the application to AI, and thus to systems roughly as large as the brain, price performance near the top of the range is particularly relevant to us. Even if a laptop could produce a TEPS more cheaply than Sequoia, it produces too few of them to run a brain efficiently. Nonetheless, we plan to investigate TEPS/$ in lower performing computers in future.\nFor now, we checked the efficiency of an iPad 3, since one was listed near the bottom of the Graph 500. These are sold for $349.99, and apparently produce 0.0304 GTEPS. Over five years, this comes out at exactly the same price as the Sequoia: $0.26/GTEPShour. This suggests both that cheaper computers may be more efficient than large supercomputers (the iPad is not known for its cheap computing power) and that the differences in price are probably not large across the performance spectrum.\nTrends in TEPS available per dollar\nThe long-term trend of TEPS is not well known, as the benchmark is new. This makes it hard to calculate a TEPS/$ trend. Figure 2 is from a powerpoint Announcing the 9th Graph500 List! from the Top 500 website. One thing it shows is top performance in the Graph 500 list since the list began in 2010. Top performance grew very fast (3.5 orders of magnitude in two years), before completely flattening, then growing slowly. The powerpoint attributes this pattern to ‘maturation of the benchmark’, suggesting that the steep slope was probably not reflective of real progress.\nOne reason to expect this pattern is that during the period of fast growth, pre-existing high performance computers were being tested for the first time. This appears to account for some of it. However we note that in June 2012, Sequoia (which tops the list at present) and Mira (#3) had both already been tested, and merely had lower performance than they do now, suggesting at least one other factor is at play. 
One possibility is that in the early years of using the benchmark, people develop good software for the problem, or in other ways adjust how they use particular computers on the benchmark.\nFigure 2: Performance of the top supercomputer on Graph 500 each year since it has existed (along with the 8th best, and an unspecified sum).\n \nRelationship between TEPS and FLOPS\nThe top eight computers in the Graph 500 are also in the Top 500, so we can compare their TEPS and FLOPS ratings. Because many computers did not use all of their cores in the Graph 500, we scale down the FLOPS measured in the Top 500 by the fraction of cores used in the Graph 500 relative to the Top 500 (this is discussed further in ‘Bias from scaling down’ above). We have not checked thoroughly whether FLOPS scales linearly with cores, but this appears to be a reasonable approximation, based on the first page of the Top 500 list.\nThe supercomputers measured here consistently achieve around 1-2 GTEPS per scaled TFLOPS (see Figure 3). The median ratio is 1.9 GTEPS/TFLOP, the mean is 1.7 GTEPS/TFLOP, and the variance 0.14 GTEPS/TFLOP. Figure 4 shows GTEPS and TFLOPS plotted against one another.\nThe ratio of GTEPS to TFLOPS may vary across the range of computing power. Our figures may may also be slightly biased by selecting machines from the top of the Graph 500 to check against the Top 500. However the current comparison gives us a rough sense, and the figures are consistent.\nThis presentation (slide 23) reports that a Kepler GPU produces 109 TEPS, as compared to 1012 FLOPS reported here (assuming that both are top end models), suggesting a similar ratio holds for less powerful computers.\nFigure 3: GTEPS/scaled TFLOPS, based on Graph 500 and Top 500.\nFigure 4: GTEPS and scaled TFLOPS achieved by the top 8 machines on Graph 500. See text for scaling description.\n\nProjecting TEPS based on FLOPS\nSince the conversion rate between FLOPS and TEPS is approximately consistent, we can project growth in TEPS/$ based on the better understood growth of FLOPS/$. In the last quarter of a century, FLOPS/$ has grown by a factor of ten roughly every four years. This suggests that TEPS/$ also grows by a factor of ten every four years.\n \n\n ", "url": "https://aiimpacts.org/cost-of-teps/", "title": "The cost of TEPS", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-03-21T22:53:27+00:00", "paged_url": "https://aiimpacts.org/feed?paged=22", "authors": ["Katja Grace"], "id": "98f307b9c42edfc6ca9b0d11e71989eb", "summary": []} {"text": "Allen, The Singularity Isn’t Near\n\nThe Singularity Isn’t Near is an article in MIT Technology Review by Paul Allen which argues that a singularity brought about by super-human-level AI will not arrive by 2045 (as is predicted by Kurzweil).\nThe summarized argument\nWe will not have human-level AI by 2045:\n1. To reach human-level AI, we need software as well as hardware.\n2. To get this software, we need one of the following:\n\na detailed scientific understanding of the brain\na way to ‘duplicate’ brains\ncreation of something equivalent to a brain from scratch\n\n3. A detailed scientific understanding of the brain is unlikely by 2045:\n\nTo have enough understanding by 2045, we would need a massive acceleration of scientific progress:\n\nWe are just scraping the surface of understanding the foundations of human cognition.\n\n\nA massive acceleration of progress in brain science is unlikely\n\nScience progresses irregularly:\n\ne.g. 
The discovery of long-term potentiation, the columnar organization of cortical areas, neuroplasticity.\n\n\nScience doesn’t seem to be exponentially accelerating\nThere is a ‘complexity break’: the more we understand, the more complicated the next level to understand is\n\n\n\n4. ‘Duplicating’ brains is unlikely by 2045:\n\nEven if we have good scans of brains, we need good understanding of how the parts behave to complete the model\nWe have little such understanding\nSuch understanding is not exponentially increasing\n\n5. Creation of something equivalent to a brain from scratch is unlikely by 2045:\n\nArtificial intelligence research appears to be far from providing this\nArtificial intelligence research is unlikely to improve fast:\n\nArtificial intelligence research does not appear to be exponentially improving\nThe ‘complexity break’ (see above) also operates here\nThis is the kind of area where progress is not a reliable exponential\n\n\n\nComments\nThe controversial parts of this argument appear to be the parallel claims that progress is insufficiently fast (or accelerating) to reach an adequate understanding of the brain or of artificial intelligence algorithms by 2045. Allen’s argument does not present enough support to evaluate them from this alone. Others with at least as much expertise disagree with these claims, so they appear to be open questions.\nTo evaluate them, it appears we would need more comparable measures of accomplishments and rates of progress in brain science and AI. With only the qualitative style of Allen’s claims, it is hard to know whether progress being slow, and needing to go far, implies that it won’t get to a specific place by a specific date.", "url": "https://aiimpacts.org/allen-the-singularity-isnt-near/", "title": "Allen, The Singularity Isn’t Near", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-03-13T09:04:48+00:00", "paged_url": "https://aiimpacts.org/feed?paged=22", "authors": ["Katja Grace"], "id": "45057700ee9c85002440c5641865fb77", "summary": []} {"text": "Kurzweil, The Singularity is Near\n\nThe Singularity Is Near is a book by Ray Kurzweil. It argues that a technological singularity will occur in around 2045. This appears to be largely based on extrapolation from hardware in combination with a guess for how much machine computation is needed to produce a large disruption to human society. The book relatedly claims that a machine will be able to pass a Turing test by 2029.\nDetails\nCalculation of the date of the singularity\nThe following is our reconstruction of an argument Kurzweil makes in the book, for expecting the Singularity in 2045.\n\nIn the early 2030s one thousand dollars’ worth of computation will buy about 10¹⁷ computations per second (p119)\nToday we spend more than $10¹¹/year on computation, which will conservatively rise to $10¹²/year by 2030 (p119-20).\nTherefore in the early 2030s we will be producing about 10²⁶-10²⁹ computations per second of nonbiological computation per year, and by the mid 2040s, we will produce 10²⁶ cps with $1000 (p120)\nThe sum of all living biological human intelligence operates at around 10²⁶ computations per second (p113)\nThus in the early 2030s we will produce new computing power roughly equivalent to the capacity of all living biological human intelligence, every year. In the mid 2040s the total computing capacity we produce each year will be a billion times more powerful than all of human intelligence today. 
(p120)\nNon-biological intelligence will be better than our own brains because machines have some added advantages, such as accuracy and ability to run at peak capacity. (p120)\nThe early 2030s will not be a singularity, because its events do not yet correspond to a sufficiently profound expansion of our intelligence. (p120)\nIn the 1940s, when the computing capacity we produce each year is a billion times more powerful than all human intelligence today, these events will represent a profound and disruptive transformation in human capability, i.e. a singularity. (p120)\n\nRelevance of software\nWhile he doesn’t mention it in the prediction explained above, Kurzweil appears elsewhere to agree that substantial software progress is needed alongside hardware progress for human-level intelligence. He says, “The hardware computational capacity is necessary but not sufficient. Understanding the organization and content of these resources—the software of intelligence—is even more critical and is the objective of the brain reverse-engineering undertaking.” (p126).\nHis argument that the necessary understanding for producing human-level software will come in time with the hardware appears to be as follows:\n\nUnderstanding of the  brain is reasonably good; researchers rapidly turn data from studies into effective working models (p147)\nUnderstanding of the brain is growing exponentially:\n\nOur ability to observe the brain is growing exponentially: ‘Scanning and sensing tools are doubling their overall spatial and temporal resolution each year’. (p163)\n‘Databases of brain-scanning information and model building are also doubling in size about once per year.’ (p163)\nOur ability to model the brain follows closely behind our acquisition of the requisite tools and data (p163) and so is also growing exponentially in some sense.\n\n\n\nHuman-level AI\nAccording to Kurzweil, ‘With both the hardware and software needed to fully emulate human intelligence, we can expect computers to pass the Turing test, indicating intelligence indistinguishable from that of biological humans, by the end of the 2020s.’\nThe claims that hardware and software will be human-level by 2029 appear to share their justification with the above claims about the timing of the Singularity.\nKurzweil bet that by 2029 a computer would pass the turing test, and wrote an article explaining his optimism about the bet here.\nComments\nIf the ‘singularity’ is meant to refer to some particular event, it is unclear why this event would occur when the hardware produced is a billion times more powerful than all human intelligence today. This number might make some sense as an upper bound on when something disruptive should have happened. However it is unclear why the events predicted in the early 2030s would not cause a profound and disruptive transformation, while those in the mid 2040s would.\nKurzweil’s calculation of the date of the Singularity appears to have other minor gaps:\n\nThe argument is about flows of hardware, where it wants to make a conclusion in terms of stocks of hardware. Kurzweil wants to compare total biological and non-biological computation. However he calculates the computing hardware produced per year, instead of the total available that year, or the computation done in that year. 
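A rough sketch of how much this distinction matters, under the illustrative assumptions that annual production grows tenfold every four years and that hardware lasts about five years:

import math

# With exponential growth in annual production, the installed stock is
# only a small multiple of a single year's flow.
growth_per_year = 10 ** (1 / 4)   # assumed: tenfold growth every four years
lifespan_years = 5                # assumed hardware lifespan

stock_to_flow = sum(growth_per_year ** -k for k in range(lifespan_years))
print(stock_to_flow)              # ~2.2: stock is about twice one year's production

# On a tenfold-every-four-years trend, a factor of ~2.2 shifts any
# threshold date by only a year or so.
print(4 * math.log10(stock_to_flow))  # ~1.3 years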
These numbers are probably fairly similar in practice, if we suppose that hardware lasts a small number of years.\nThat non-biological machines appear to have some advantages over humans does not imply that some given non-biological machines have advantages overall.\nThe argument suggests software will develop ‘fast’ in some sense, but this isn’t actually compared to hardware progress or measured in years, so it is unclear whether it would be developed in time.\n\nA key disagreement with other commentators appears to be over the rate of progress of understanding relevant to producing software. In particular, Kurzweil believes that such understanding is growing exponentially, and that it will be be sufficient for producing machines as intelligent as humans in line with the hardware. Allen, for instance, has argued with this. Resolving this disagreement would require better measures of neuroscience progress, as well as a better understanding of its relevance.", "url": "https://aiimpacts.org/kurzweil-the-singularity-is-near/", "title": "Kurzweil, The Singularity is Near", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-03-12T12:15:11+00:00", "paged_url": "https://aiimpacts.org/feed?paged=22", "authors": ["Katja Grace"], "id": "009cb648ac586e2440cf3fee1e9415be", "summary": []} {"text": "Wikipedia history of GFLOPS costs\n\nThis is a list from Wikipedia, showing hardware configurations that authors claim perform efficiently, along with their prices per GFLOPS at different times in recent history.\nIn it, prices generally fall at around an order of magnitude every five years, and have continued to do so recently.\nNotes\nThis list is from November 5 2017 (archive version). It is not necessarily credible. We had trouble verifying at least one datapoint, of the few we tried. Performance numbers appear to be a mixture of theoretical peak performance and empirical performance. It is not clear to what extent one should expect the included systems to be especially cost-effective, or why these particular systems were chosen.\nThe last point is in October 2017, and appears to be roughly in line with the rest of the trend. The last order of magnitude  took around 4.5 years. The overall rate in the figure appears to be very roughly an order of magnitude every five years.\nList\n\n\n\nDate\nApproximate cost per GFLOPS\nApproximate cost per GFLOPS inflation adjusted to 2013 US dollars[54]\nPlatform providing the lowest cost per GFLOPS\nComments\n\n\n1961\nUS$18,672,000,000 ($18.7 billion)\nUS$145.5 billion\nAbout 2400 IBM 7030 Stretch supercomputers costing $7.78 million each\nThe IBM 7030 Stretch performs one floating-point multiply every 2.4 microseconds.[55]\n\n\n1984\n$18,750,000\n$42,780,000\nCray X-MP/48\n$15,000,000 / 0.8 GFLOPS\n\n\n1997\n$30,000\n$42,000\nTwo 16-processor Beowulfclusters with Pentium Promicroprocessors[56]\n\n\n\nApril 2000\n$1,000\n$1,300\nBunyip Beowulf cluster\nBunyip was the first sub-US$1/MFLOPS computing technology. 
It won the Gordon Bell Prize in 2000.\n\n\nMay 2000\n$640\n$836\nKLAT2\nKLAT2 was the first computing technology which scaled to large applications while staying under US-$1/MFLOPS.[57]\n\n\nAugust 2003\n$82\n$100\nKASY0\nKASY0 was the first sub-US$100/GFLOPS computing technology.[58]\n\n\nAugust 2007\n$48\n$52\nMicrowulf\nAs of August 2007, this 26.25 GFLOPS “personal” Beowulf cluster can be built for $1256.[59]\n\n\nMarch 2011\n$1.80\n$1.80\nHPU4Science\nThis $30,000 cluster was built using only commercially available “gamer” grade hardware.[60]\n\n\nAugust 2012\n$0.75\n$0.73\nQuad AMD Radeon 7970 GHz System\nA quad AMD Radeon 7970 desktop computer reaching 16 TFLOPS of single-precision, 4 TFLOPS of double-precision computing performance. Total system cost was $3000; Built using only commercially available hardware.[61]\n\n\nJune 2013\n$0.22\n$0.22\nSony PlayStation 4\nThe Sony PlayStation 4 is listed as having a peak performance of 1.84 TFLOPS, at a price of $400[62]\n\n\nNovember 2013\n$0.16\n$0.16\nAMD Sempron 145 & GeForce GTX 760 System\nBuilt using commercially available parts, a system using one AMD Sempron 145 and three Nvidia GeForce GTX 760 reaches a total of 6.771 TFLOPS for a total cost of $1090.66.[63]\n\n\nDecember 2013\n$0.12\n$0.12\nPentium G550 & Radeon R9 290 System\nBuilt using commercially available parts. Intel Pentium G550 and AMD Radeon R9 290 tops out at 4.848 TFLOPS grand total of US$681.84.[64]\n\n\nJanuary 2015\n$0.08\n$0.08\nCeleron G1830 & Radeon R9 295X2 System\nBuilt using commercially available parts. Intel Celeron G1830 and AMD Radeon R9 295X2tops out at over 11.5 TFLOPS at a grand total of US$902.57.[65][66]\n\n\nJune 2017\n$0.06\n$0.06\nAMD Ryzen 7 1700 & AMD Radeon Vega Frontier Edition\nBuilt using commercially available parts. AMD Ryzen 7 1700 CPU combined with AMD Radeon Vega FE cards in CrossFire tops out at over 50 TFLOPS at just under US$3,000for the complete system.[67]\n\n\nOctober 2017\n$0.03\n$0.03\nIntel Celeron G3930 & AMD RX Vega 64\nBuilt using commercially available parts. Three AMD RX Vega 64 graphics cards provide just over 75 TFLOPS half precision (38 TFLOPS SP or 2.6 TFLOPS DP when combined with the CPU) at ~$2,050 for the complete system.[68]\n\n\n\n \nThe following is a figure we made, of the above list.\n\nFurther discussion\nTrends in the cost of computing", "url": "https://aiimpacts.org/wikipedia-history-of-gflops-costs/", "title": "Wikipedia history of GFLOPS costs", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-03-11T01:58:46+00:00", "paged_url": "https://aiimpacts.org/feed?paged=22", "authors": ["Katja Grace"], "id": "88595916d583034c11ca17392b92f590", "summary": []} {"text": "Trends in the cost of computing\n\nPosted 10 Mar 2015\nComputing power available per dollar has probably increased by a factor of ten roughly every four years over the last quarter of a century (measured in FLOPS or MIPS).\nOver the past 6-8 years, the rate has been slower: around an order of magnitude every 10-16 years, measured in single precision theoretical peak FLOPS or Passmark’s benchmark scores.\nSince the 1940s, MIPS/$ have grown by a factor of ten roughly every five years, and FLOPS/$ roughly every 7.7 years.\nEvidence\nNordhaus\nNordhaus (2001) analyzes the cost of computing over the past century and a half, and produces Figure 1 (though the scale on the vertical axis appears to be off by many orders of magnitude). Much of his data comes from Moravec’s Mind Children (an updated version of the data is here). 
He converts all data points to ‘million standard operations per second’ (MSOPS), where a standard operation is a weighted mixture of multiplications and additions. He says it is approximately equivalent to 1 MIPS under the Dhrystone metric.\nHe calculates that performance improved at an average rate of 55% per year since 1940. That is, an order of magnitude roughly every five years. However he finds that the average growth rate in different decades differed markedly, with growth since 1980 (until writing in 2001) at around 80% per year, and growth in the 60s and 70s at less than 30% (see figure 2). This would correspond to improving by an order of magnitude every four years in the 80s and 90s.\nFigure 1: “The progress of computing measured in cost per million standardized operations per second (MSOPS) deflated by the consumer price index.” Note that the vertical axis appears to be mislabeled—the scale is around seven orders of magnitude different from other sources, such as Moravec. (From Figure 1, Nordhaus, 2001, p38)\nFigure 2: From Nordhaus p42,”Rate of Growth of Computer Power by Epoch…Real computer power is the inverse of the decline of real computation costs…”\nSandberg and Bostrom\nSandberg and Bostrom (2008) investigate hardware performance trends in their Whole Brain Emulation Roadmap (Appendix B). They plot price performance in MIPS/$ and FLOPS/$, as shown in Figures 3 and 4. They find MIPS/$ grows by a factor of ten every 5.6 years (with a bootstrap 95% confidence interval of 5.3-5.9), and FLOPs/$ grows by a factor of ten every 7.7 years (with a bootstrap confidence interval of 6.5‐9.2 years).\nThey find that growth in MIPS/$ slowed in the 70s and 80s, then accelerated again (most recently gaining an order of magnitude every 3.5 years), which is close to what Nordhaus found.\nSandberg and Bostrom’s data is from John McCallum’s CPU price performance dataset, which does not appear to draw directly from Moravec’s data.\nFigure 3: Processing power available per dollar over time, measured in MIPS and 2007 US dollars.\nFigure 4: Processing power available per dollar over time, measured in FLOPS using the LINPACK benchmark and in 2007 US dollars\nRieber and Muehlhauser\nMuehlhauser and Rieber (2014) extended Koh and Magee’s data on MIPS available per dollar to 2014 (data [not currently] available here). Koh and Magee’s data largely comes from Moravec (like Nordhaus’ above), though they too extended it some. Muehlhauser and Rieber produced Figure 5.\nIn this data, performance since 1940 appears to be growing by a factor of ten roughly every 5 years (14.2 orders of magnitude in 74 years). In the first fourteen years of this century, log(MIPS/$) grew from roughly -0.7 to 2.8, which corresponds to one order of magnitude every four years (or 77% growth per year).\nFigure 5: Rieber and Muehlhauser’s MIPS/$ data (modified to fix typo).\nWikipedia\nWikipedia has a small list of hardware configurations that authors claim produce gigaFLOPS efficiently, along with their prices at different times in recent history. Their data does not appear to cite other sources mentioned above.\nHere is their table, as of March 2 2015. Figure 6 shows inflation adjusted costs of gigaFLOPS over time, taken from the table. The examples in the table were apparently selected as follows:\nThe “cost per GFLOPS” is the cost for a set of hardware that would theoretically operate at one billion floating-point operations per second. 
During the era when no single computing platform was able to achieve one GFLOPS, this table lists the total cost for multiple instances of a fast computing platform which speed sums to one GFLOPS. Otherwise, the least expensive computing platform able to achieve one GFLOPS is listed.\nWe find this table dubious. It lacks many citations, and the citations it has frequently lack detail. For instance, the claims that the collections of hardware specified produce a GFLOPS are often unsubstantiated. We spent around thirty minutes trying to substantiate the 2015 figure, to no avail. The figure is more than an order of magnitude cheaper than current FLOPS prices we found.\nIn this data, the price of a gigaFLOPS falls by an order of magnitude roughly every four years (14 orders of magnitude in 54 years is 3.9 years per order of magnitude). Since 1997, each order of magnitude only took three years (5.7 orders of magnitude in 18 years). Note that there is very little data before 1997.\nFigure 6: Price of GFLOPS in different years according to Wikipedia, adjusted to 2013 US dollars.\nShort term trends\nMain article: Recent trends in the cost of computing\nThe cheapest hardware prices (for single precision FLOPS/$) are on track to fall by around an order of magnitude every 10-16 years, based on data from around 2011-2017. There was no particular sign of slowing between 2011 and 2017.\nSummary\nWe have looked at four efforts to measure long term hardware price performance trajectories. Two of them are based on Moravec’s earlier effort, while the other two appear to be more independent (though we suspect still draw on similar sources). Two investigations measured (G)FLOPS, two measured MIPS, and one measured MSOPS.\nResults seem fairly consistent in recent decades, and for MIPS/$ in the longer run. There is insufficient data on FLOPS in the long run to check consistency. All four estimates of growth later than the 1990s produce 3.5-4 years as the time for price performance to to grow an order of magnitude (we did not include an estimate for recent years from Sandberg and Bostrom’s FLOPS data, since they did not make one and it was not straightforward to make one ourselves).1 Though note that these measures are from different spans within that period, and use different benchmarks (two were MIPS, one FLOPS, one MSOPS). Only Rieber and Muehlhauser and Wikipedia have data after 2002. Though they give similar recent growth figures, it is not clear how consistent they are: Rieber and Muehlhauser’s data appears to decline sharply in the last few years, and appears to only use CPUs, while the Wikipedia data is fairly even, and moves to GPUs in later years.\nIf we take an MSOPS to be more or less equivalent to a MIPS (as Nordhaus claims), then growth in MIPS since the 1940s is fairly consistent across studies, gaining an order of magnitude roughly every 5 years (Nordhaus), 5 years (Rieber and Muehlhauser) or 5.6 years (Sandberg and Bostrom). Note that the former two draw on similar data.\nOur two estimates of long run growth in FLOPS/$ differ substantially: we have gained an order of magnitude either every 4 years or every 7.7 years. However the four year estimate comes from Wikipedia, which only has two entries prior to 1990, while Sandberg and Bostrom have on the order of hundreds of entries from that period. Thus we rely on Sanberg and Bostrom here, and estimate FLOPS grow by an order of magnitude every 7.7 years.\nPrior to the 1940s, growth appears to be ambiguous and small. 
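Since the estimates above are quoted sometimes as annual growth rates and sometimes as years per order of magnitude, it may help to make the conversion explicit; here is a small sketch using figures already quoted:

import math

def years_per_order_of_magnitude(annual_growth):
    # e.g. annual_growth = 0.55 means price performance grows 55% per year
    return 1 / math.log10(1 + annual_growth)

print(years_per_order_of_magnitude(0.55))  # ~5.3 years (Nordhaus's post-1940 average)
print(years_per_order_of_magnitude(0.80))  # ~3.9 years (Nordhaus's 1980s-90s rate)

# In the other direction, an order of magnitude every four years is
# roughly 78% growth per year.
print(10 ** (1 / 4) - 1)

# Spans of data convert the same way: 14 orders of magnitude in 54 years
# is about 3.9 years per order of magnitude.
print(54 / 14)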
It looks like 2.4 orders of magnitude over forty eight years in Rieber and Muehlhauser’s figure, for an order of magnitude every 20 years. Nordhaus measures it as negative.\nFurther work\nFurther work on this subject might:\n\nCheck Moravec’s data, as it appears to be widely cited and reused (perhaps just check consistency between the fraction of data from Moravec and that added later from another source in existing datasets).\nSeparate different types of computers (e.g. treat desktop CPUs, supercomputers, and GPUs separately)\nFind other datasets and analyses\nCombine all of the datasets into one\nProduce more relevant data\nConstruct and measure a more relevant benchmark\n", "url": "https://aiimpacts.org/trends-in-the-cost-of-computing/", "title": "Trends in the cost of computing", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-03-10T22:15:38+00:00", "paged_url": "https://aiimpacts.org/feed?paged=22", "authors": ["Katja Grace"], "id": "fb66f73da5ff7805904c909eb425d1f8", "summary": []} {"text": "What’s up with nuclear weapons?\n\nBy Katja Grace, 27 February 2015\nWhen nuclear weapons were first built, the explosive power you could extract from a tonne of explosive skyrocketed. But why?\nHere’s a guess. Until nuclear weapons, explosives were based on chemical reactions. Whereas nuclear weapons are based on nuclear reactions. As you can see from the below table of specific energies and energy densities I got (and innocuously shortened) from Wikipedia, the characteristic scale of nuclear energy stored in things is about a hundred thousand times higher than that of chemical energy stored in things (by mass). And in particular, there are an empty three orders of magnitude between the most chemical energy packed into a thing and the least nuclear energy packed into a thing. 
This is perhaps to do with the fact that chemical reactions exploit the electromagnetic force, while nuclear reactions exploit the strong fundamental force.\n\n\n\nStorage material\nEnergy type\nSpecific energy (MJ/kg)\nEnergy density (MJ/L)\nDirect uses\n\n\n\n\nUranium (in breeder)\nNuclear fission\n80,620,000[2]\n1,539,842,000\nElectric power plants (nuclear reactors), industrial process heat (to drive chemical reactions, water desalination, etc.)\n\n\nThorium (in breeder)\nNuclear fission\n79,420,000[2]\n929,214,000\nElectric power plants (nuclear reactors), industrial process heat\n\n\nTritium\nNuclear decay\n583,529\n ?\nElectric power plants (nuclear reactors), industrial process heat\n\n\nHydrogen (compressed)\nChemical\n142\n5.6\nRocket engines, automotive engines, grid storage & conversion\n\n\nmethane or natural gas\nChemical\n55.5\n0.0364\nCooking, home heating, automotive engines, lighter fluid\n\n\nDiesel / Fuel oil\nChemical\n48\n35.8\nAutomotive engines, power plants[3]\n\n\nLPG (including Propane/ Butane)\nChemical\n46.4\n26\nCooking, home heating, automotive engines, lighter fluid\n\n\nJet fuel\nChemical\n46\n37.4\nAircraft\n\n\nGasoline (petrol)\nChemical\n44.4\n32.4\nAutomotive engines, power plants\n\n\nFat (animal/vegetable)\nChemical\n37\n34\nHuman/animal nutrition\n\n\nEthanol fuel (E100)\nChemical\n26.4\n20.9\nFlex-fuel, racing, stoves, lighting\n\n\nCoal\nChemical\n24\n\nElectric power plants, home heating\n\n\nMethanol fuel (M100)\nChemical\n19.7\n15.6\nRacing, model engines, safety\n\n\nCarbohydrates(including sugars)\nChemical\n17\n\nHuman/animal nutrition\n\n\nProtein\nChemical\n16.8\n\nHuman/animal nutrition\n\n\nWood\nChemical\n16.2\n\nHeating, outdoor cooking\n\n\nTNT\nChemical\n4.6\n\nExplosives\n\n\nGunpowder\nChemical\n3\n\nExplosives\n\n\n\n\n\n\n\n\n\n\nThus it seems very natural that the first, lousiest, nuclear weapons that anyone could invent would be much more explosive than any chemical weapon ever known. The power of explosives is mostly a matter of physics, and physics contains discontinuities, for some reason.\nBut this doesn’t quite explain it. Consider cars. Turbojet propelled cars seem just fundamentally capable of greater speeds than internal combustion engine propelled cars. But the first turbojet cars that were faster than internal combustion cars were not much faster—it looks like they just had a steeper trajectory, which passed other cars and kept climbing. I’m not sure what caused this pattern in the car case specifically, but I hear it’s common. Maybe people basically know what current technology is capable of, and introduce new things as soon as they can be done at all, rather than as soon as they can be done well.\nAnyway, we could imagine the same thing happening with nuclear weapons: even if nuclear power was fundamentally very powerful, the first nukes could have made use of it very badly, exploding like a weak chemical explosive the first times, but being quickly improved.\nBut that isn’t how nuclear weapons work. For a nuclear weapon to be less explosive per mass it would need to contain less fissile material, be smaller (so the outside casing is more of the mass, and so that fewer neutrons hit other atoms), or be less well contained (so fewer neutrons hit other atoms). But to get a nuclear explosion going at all, you need to get enough neutrons to hit other atoms that the chain reaction starts. Nuclear weapons have a ‘critical mass‘. 
I’m not sure how much less powerful the first nuclear weapons could easily have been than they were, but measly inexplosive nuclear weapons were basically out.\nSo the first nuclear weapons had to be much more explosive than the chemical explosives they replaced, because they were based on much more powerful reactions, and primitive nuclear weapons weren’t an option.\nSo nuclear weapons were basically guaranteed to revolutionize explosives in a single hop: even if humanity had known about nuclear reactions for hundreds of years, and put a tiny amount of effort into nuclear weapons research each year, humanity would never have seen feeble, not-much-better-than-TNT type nuclear weapons. There would just have been no nuclear weapons, and then at some point there would have been powerful nuclear weapons.\nIt is somewhat interesting that this is not what happened. Physicists mostly came to believe nuclear weapons were plausible from about 1939, and within a few years America spent nominal $19Bn (roughly 1% of 1943 GDP, but spread over a few years) on nuclear weapons, and built some. So our story is that progress in explosives was very slow, and then America spent a huge pile of money on it, and then it was very fast, but the progress was independent of the massive influx of funding.\nThat sounds surprising. But perhaps the influx of funding was because of the large natural discontinuity visible in the distance? Why would you ever spend small amounts of money every year, if it was clear at the outset that you had to spend a gajillion dollars to get anywhere? If there wasn’t much requirement for serial activities, probably you would just save it up and spend it in one go. America didn’t save it up though—they tried to build nuclear weapons basically as soon as they realized it was feasible at all. So it looks like nuclear weapons were just discovered after it was cost-effective to build them.\nBut if it was immediately cost-effective to build nuclear weapons thousands of times more powerful than other bombs, then isn’t the requirement that nuclear weapons be fairly powerful irrelevant to the spending? If it was worth building powerful bombs immediately, then what does it matter if it is possible to build lesser weapons? Not really, because cost-effectiveness is relative. If is only possible to buy toothpaste in a large bucket, you will probably pay for it, and it will have been a good deal. However if it’s also available in small tubes, then the same bucket is probably a bad deal.\nSimilarly, if nuclear weapons must be powerful, then there’s a decent chance that as soon as they are discovered it will be cost-effective to spend a lot on them and make them so. However if they can come in many lower levels of quality, the same large amount of spending may not be cost effective, because it will often be better to spend an intermediate amount.\nSo a requirement that nuclear weapons be very explosive when they are first built could at least partly explain the huge amount of spending. And the inherently large amounts of energy available from nuclear reactions still seems relevant: any given amount of development will be cost-effective when it is more costly, if it is more effective compared to the alternative.\nThis also appears to fit in with an explanation of the further coincidence that there happened to be a huge war at the time. 
That is, the war made all military technologies more cost-effective, and thus made it more likely that when nuclear weapons became feasible to develop, they would already be cost-effective. However the war also makes it more likely that high quality weapons would already be cost-effective compared to cheaper counterparts, thus partly undermining the proposal that the large expenditure was due in part to nuclear weapons requiring a minimal level of quality.\nHere’s another plausible explanation for the large expense: because of their extreme explosiveness, nuclear weapons were very cost-effective at the time they were first considered. That is, they could have been produced a lot more cheaply than they were. However, due to the war, America was willing to pay a lot to make them come faster. In particular, America was willing to keep paying to make them come faster up until the point when they were roughly as cost-effective as older weapons, taking into account the upfront cost of making them come faster. This would explain the large amount of spending, and perhaps also why it aligned so well with what America could barely afford. It also explains why nuclear weapons appear to have been very roughly as cost-effective as older weapons. However on its own, it seems to leave the large amount of spending and the large amount of progress as coincidences.\nIn other ways, this story is in line with what I know about the development of nuclear weapons. For instance, that enriching uranium via several different methods in parallel was around half of the cost of the Manhattan project, and that the project was a lot more expensive than other countries’ later nuclear weapons projects.\nPerhaps the inherent explosiveness of nuclear weapons made them very cost-effective, and thus able to be sped up a lot and still be cost-effective? (Thus connecting the expense with the explosiveness) But if nuclear weapons had been too expensive already to speed up much, it seems we would have seen a similar amount of spending (or more) over a somewhat longer time. So on this story it seems the heavy spending didn’t cause the high explosiveness, and the high explosiveness (and thus cheapness) didn’t seem to cause the steep spending.\nIt seems there was probably one coincidence however: a physics discovery leading to weapons of unprecedented power was made just before the largest war in history, and it’s hard to see how the war and the discovery were related, unless history was choreographed to make Leo Szilard’s life interesting. Perhaps the weapons and the war are related because nuclear weapons caused us to think of WWII as a large war by averting later wars? But the World Wars really were quite large compared to wars in slightly earlier history, rather than just the last in a trend of growing conflicts. If there is at least one coincidence anyway, perhaps it doesn’t matter whether the massive expense is explained by the unique qualities of nuclear weapons or merely by the war inspiring haste.\nIn sum, my guesses: nuclear weapons represented abrupt progress in explosiveness because of a discontinuity inherent in physics, and because ineffective nuclear weapons weren’t feasible.  Coincidentally, at the same time as nuclear weapons were discovered, there was a large war. America spent a lot on nuclear weapons for a combination of reasons. Nuclear explosions were inherently unusually powerful, and so could be cost-effective while still being very expensive. 
They also required investment on a large scale, so were probably invested in at an unusually large scale. America probably also spent a lot more on nuclear weapons to get them quickly, because they were so cost-effective under the circumstances.\nMy guesses are pretty speculative however, and I’m not an expert here. Further speculation, or well-grounded theorizing, is welcome.\n(image: Oak Ridge Y-12 Alpha Track)", "url": "https://aiimpacts.org/whats-up-with-nuclear-weapons/", "title": "What’s up with nuclear weapons?", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-02-27T08:07:44+00:00", "paged_url": "https://aiimpacts.org/feed?paged=22", "authors": ["Katja Grace"], "id": "f2bfcc1b0e5c3341bc4723def12e908f", "summary": []} {"text": "Possible Empirical Investigations\n\nIn the course of our work, we have noticed a number of empirical questions which bear on our forecasts and might be (relatively) cheap to resolve. In the future we hope to address some of these.\n\nOur partial list of investigations into forecasting AI timelines\nOur list of investigations that bear on ‘multipolar’ AI scenarios\nLook at the work of ancient or enlightenment mathematicians and control for possible selection effects in this analysis of historical mathematical conjectures.\nLook for historical characterizations of the AI problem, and try to obtain unbiased (though uninformed) breakdowns of the problem which could be used to gauge progress.\nIdentify previous examples of technological projects with clear long-term goals, and then produce estimates of the time required to achieve those goals to varying degrees.\nAnalyze the performance of different versions of software for benchmark problems, like SAT solving or chess, and determine the extent to which hardware and software progress facilitated improvement.\nObtain a clearer picture of the extent to which historical developments in neuroscience have played a meaningful role in historical progress in AI. Our impression is that this influence has been minimal, but this judgment might be attributable to hindsight bias.\nIn the field of AI, estimate the ratio of spending on hardware to spending on researchers.\nEstimate the change in inputs in mathematicians, scientists, or engineers, as a complement to estimates for rates of progress in those fields.\nEstimate the historical and present size of the AI field, ideally with plausible adjustments for quality (for example performing in-depth investigations for a small number of random samples, perhaps invoking expert opinion) and using these as a basis for quality-adjustments.\nLuke Muehlhauser and the Future of Life Institute‘s section on forecasting both list further projects.\n\nUnfortunately this is an incomplete list (even of the ideas which have struck as promising during this project). We are beginning to flesh it out further in our aforementioned list of projects bearing on AI timelines.", "url": "https://aiimpacts.org/possible-investigations/", "title": "Possible Empirical Investigations", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-02-26T00:02:11+00:00", "paged_url": "https://aiimpacts.org/feed?paged=22", "authors": ["Katja Grace"], "id": "64e0e887faff7c024e4f23f1e8cb922b", "summary": []} {"text": "Research topic: Hardware, software and AI\n\nThis is the first in a sequence of articles outlining research which could help forecast AI development.\n\nInterpretation\nConcrete research projects are in boxes. 
∑5 ∆8  means we guess the project will take (very) roughly five hours, and we rate its value (very) roughly 8/10.\nMost projects could be done to very different degrees of depth, or at very different scales. Our time cost estimates correspond to a size that we would be likely to intend if we were to do the project. Value estimates are merely ordinal indicators of worth, based on our intuitive sense, and unworthy of being taken very seriously.\n\n \n1. How does AI progress depend on hardware and software?\nAt a high level, AI improves when people make better software, when they can run it on better hardware, when they gather bigger, better training sets, etc. This makes present-day hardware and software progress a natural place to look for evidence about when advanced AI will arrive. In order to interpret any such data however, it is important to know how these pieces fit together. For instance, is the progress we see now mostly driven by hardware progress, or software progress? Can the same level of performance usually be achieved by widely varying mixtures of hardware and software? Does progress on software depend on progress on hardware?\nIt is important to understand the relationship between hardware, software and AI for several reasons. If hardware progress is the main driver of AI progress, then quite different evidence would tell us about AI timelines than if software is the main driver. Thus different research is valuable, and different timelines are likely. Many people base their AI predictions on hardware progress, while others decline to, so it would be broadly useful to know whether one should. We also expect understanding here to be generally useful.\nSo we think research in this direction seems valuable. We also think several projects seem tractable. Yet little appears to have been done in this direction. Thus this topic seems a high priority.\n1.1 How does AI progress depend qualitatively on hardware and software progress?\nFor instance, will human-level AI appear when we have both a certain amount of hardware, and certain developments in software? Or can hardware and software substitute for one another? Substitution seems a natural model of the relationship between hardware and software, since anecdotally many tasks can be done by low quality software and lots of hardware, or by high quality software and less hardware. However the extent of this is unclear. This kind of model is also not commonly used in estimating AI timelines, so judging whether it should be might be a useful contribution. Having a good model would also bear on the priority of other research directions. As far as we know, this issue has received almost no attention. It seems moderately tractable.\n1.1.A Evaluate qualitative models of the relationships between hardware, software and AI ∑30 ∆5\nOne way to approach the question of qualitative relationships is to assume some model, and work on projects such as those in 1.2 that measure quantitative details of the model, then revise the model if the measurements don’t make sense in it. Before that step, we might spend a short time detailing plausible models, and examining empirical and theoretical evidence we might already have, or could cheaply find. If we were going to follow up with empirical research, we would think about what evidence we would expect the research to reveal, given alternative models.\n~\nFor instance, we find the hardware-software indifference curve model described briefly above (and outlined better in a blog post) plausible. 
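To make that model concrete enough to evaluate, here is a minimal sketch of one functional form it could take; the Cobb-Douglas-style form and its exponents are illustrative assumptions rather than anything we have measured:

# Toy indifference-curve model: performance depends on a hardware level H
# and a software level S, which can substitute for one another.
def performance(hardware, software, a=0.5, b=0.5):
    return (hardware ** a) * (software ** b)

# Very different mixtures of hardware and software reach the same performance:
print(performance(hardware=100, software=1))   # 10.0
print(performance(hardware=1, software=100))   # 10.0
print(performance(hardware=10, software=10))   # 10.0

# An indifference curve is the set of (H, S) pairs giving equal performance;
# with these exponents, S = target**2 / H for a given target level.
target = 10
for h in [1, 10, 100]:
    print(h, target ** 2 / h)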
Here are some ways it might be inadequate, that we might consider in evaluating it:\n\n‘Hardware’ and ‘software’ are not sufficiently measurable entities for a ‘level’ of each in some domain to produce a stable level of performance.\nPerformance depends strongly on other factors, e.g. exactly what kind of hardware and software progress you make, unique details of the software being developed, training data available.\nDifferent problem types, and different performance metrics on them have different kinds of behavior\nThere are ‘indifference curves’ in a sense but they are not sufficiently consistent to be worth reasoning about.\nHumanity’s technological progress is not well characterized by an expanding rectangle of feasible hardware and software levels, but more as a complicated region of feasible combinations.\n\n\n1.2 How much do marginal hardware and software improvements alter AI performance?\nAs mentioned above, this question is key to determining which other investigations are worthwhile. Naturally, it could also change our timelines substantially. Thus this question seems thus important to resolve. We think the projects here are particularly tractable, though not particularly cheap. For all of these projects, we would probably choose a specific set of benchmarks on particular problems to focus on. We might do multiple of these projects on the same set of benchmarks, to trace a more complete picture.\n1.2.A Search for natural experiments combining modern hardware and early software approaches or vice versa. ∑80 ∆7\nFor instance, we might find early projects with very large hardware budgets, or recent projects with intentionally restricted hardware. Where these were tested on commonly used benchmarks, we can use them to map out the broad contributions of hardware and software to progress. For instance, if very small chess programs today run better than old chess programs which used similar (but then normal) amounts of hardware, then the difference between them can be attributed to improving software, roughly.\n1.2.B Apply a modern understanding of software to early hardware ∑2,000 ∆9\nChoose a benchmark problem that people worked on in the past, e.g. in the 1980s. Use a modern understanding of AI to solve the problem again, still using 1980’s hardware. Compare this to how researchers did in the 1980’s. This project requires substantial time from at least one AI researcher. Ideally they would spend a similar amount of effort as the past researchers did, so it may be worth choosing a problem where it is known that an achievable level of effort was applied in the past.\n1.2.C Apply early software understanding to modern hardware ∑2,000 ∆8\nUsing contemporary hardware and a 1970’s or 1980’s understanding of connectionism, observe the extent to which a modern AI researcher (or student) could replicate contemporary performance on benchmark AI problems. This project is relatively expensive, among those we are describing. It requires substantial time from collaborators with a historically accurate minimal understanding of AI. Students may satisfy this role well, if their education is incomplete in the right ways. One might compare to the work of similar students who had also learned about modern methods.\n1.2.D Measure marginal effects of hardware and software in existing performance trends  ∑100 ∆8\nOften the same software can be used with modest changes in hardware, so changes in performance from hardware over small margins can be measured. 
Improved software is also often written to be run on the same hardware as earlier software, so changes in performance from software alone can be measured over moderate margins. Thus we can often estimate these marginal changes from looking at existing performance measurements.\n~\nWe can also look at overall progress over time on some applications, and factor out what we know about hardware or software change, assuming it is close to the marginal values measured by the above methods. For instance, we can see how much individual Go programs improve with more hardware, and then we can look at longer term improvements in computer Go, and guess how much of that improvement came from hardware, given our earlier estimate of marginal improvement from hardware. In general these estimates will be less valid over larger distances, as the impacts of hardware and software diverge from their marginal impacts, and because arbitrary combinations of hardware and software can't generally be combined without designing the software to make use of the hardware. Grace 2013 includes some work on this project.\n1.2.E Interview AI researchers on the relative importance of hardware and software in driving the progress they have seen. ∑20 ∆7\nAI researchers likely have firsthand experience regarding how hardware and software contribute to overall progress within the vicinity of their own work. This project will probably give relatively noisy estimates, but is very cheap compared to others described here. One could just ask for views on this question, and supporting anecdotes, or devise a more structured questionnaire beforehand.\n1.3 How do hardware and software progress interact?\nDo hardware and software progress relatively independently, or for instance do advances in hardware encourage advances in software? This might change how we generally expect software progress to proceed, and what combinations of hardware and software we expect to first produce human-level AI. We are likely to get some information about this from other projects looking at historical performance data, e.g. 1.2.D. For instance, if overall progress is generally proportional to hardware progress, even as hardware progress varies, then this would be suggestive. Below are further possibilities.\n1.3.A Find natural experiments ∑80 ∆4\nSearch for performance data from cases where hardware being used for an application was largely constant then shifted upward at some point. Such cases are probably hard to find, and hard to interpret when found. However, a short search for them may be worthwhile.\n1.3.B Interview researchers ∑20 ∆7\nIf hardware tends to affect software research, it is likely that researchers notice this, and can talk about it. This seems a cheap and effective method of learning qualitatively about the topic. This project should probably be combined with 1.2.E.\n1.3.C Consider plausible models ∑10 ∆5\nThis is a short theoretical project that would benefit from being done in concert with 1.3.B (interview researchers), since researchers probably have a relatively good understanding of which models are plausible, and we are likely to ask better questions of them if we have thought about the topic.
This project should probably be combined with 1.1.A.", "url": "https://aiimpacts.org/research-topic-hardware-software-and-ai/", "title": "Research topic: Hardware, software and AI", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-02-20T05:10:27+00:00", "paged_url": "https://aiimpacts.org/feed?paged=22", "authors": ["Katja Grace"], "id": "8fc54ce65a61a55d2ba37c4044d3dbcb", "summary": []} {"text": "Multipolar research questions\n\nBy Katja Grace, 11 February 2015\nThe Multipolar AI workshop we ran a fortnight ago went well, and we just put up a list of research projects from it. I hope this is helpful inspiration to those of you thinking about applying to the new FLI grants in the coming weeks.\nThanks to the many participants who contributed ideas!\n (Image by Evan Amos)", "url": "https://aiimpacts.org/multipolar-research-questions/", "title": "Multipolar research questions", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-02-11T19:29:02+00:00", "paged_url": "https://aiimpacts.org/feed?paged=23", "authors": ["Katja Grace"], "id": "0a39f8139fc0dae36234551ad0345d07", "summary": []} {"text": "List of multipolar research projects\n\nThis list currently consists of research projects suggested at the Multipolar AI workshop we held on January 26 2015.\nRelatively concrete projects are marked [concrete]. These are more likely to already include specific questions to answer and feasible methods to answer them with. Other ‘projects’ are more like open questions, or broad directions for inquiry.\nProjects are divided into three sections:\n\nPaths to multipolar scenarios\nWhat would happen in a multipolar scenario?\nSafety in a multipolar scenario\n\nOrder is not otherwise relevant. The list is an inclusive collection of the topics suggested at the workshop, rather than a prioritized selection from a larger list.\nLuke Muehlhauser’s list of ‘superintelligence strategy’ research questions contains further suggestions.\nList\nPaths to multipolar scenarios\n1.1 If we assume that AI software is similar to other software, what can we infer from observing contemporary software development? [concrete] For instance, is progress in software performance generally smooth or jumpy? What is the distribution? What are typical degrees of concentration among developers? What are typical modes of competition? How far ahead does the leading team tend to be to their competitors? How often does the lead change? How much does a lead in a subsystem produce a lead overall? How much do non-software factors influence who has the lead? How likely is a large player like Google—with its pre-existing infrastructure—to be the frontrunner in a random new area that they decide to compete in?\nA large part of this project would be collecting what is known about contemporary software development. This information would provide one view on how AI progress might plausibly unfold. Combined with several such views, this might inform predictions on issues like abruptness, competition and involved players.\n1.2 If the military is involved in AI development, how would that affect our predictions? [concrete] This is a variation on 1.1, and would similarly involve a large component of reviewing the nature of contemporary military projects.\n1.3 If industry were to be largely responsible for AI development, how would that affect our predictions? 
[concrete] This is a variation on 1.2, and would similarly involve a large component of reviewing the nature of contemporary industrial projects.\n1.4 If academia were to be largely responsible for AI development, how would that affect our predictions? [concrete] This is a variation on 1.2, and would similarly involve a large component of reviewing the nature of contemporary academic projects.\n1.5 Survey AI experts on the likelihood of AI emerging in the military, business or academia, and on the likely size of a successful AI project.  [concrete]\n1.6 Identify considerations that might tip us between multipolar and unipolar scenarios. \n1.7 To what extent will AGI progress be driven by developing significantly new ideas? 1.1 may bear on this. It could be approached in other ways, for instance asking AI researchers what they expect.\n1.8 Run prediction markets on near-term questions, such as rates of AI progress, which inform our long-run expectations. [concrete] \n1.9 Collect past records of ‘lumpiness’ of AI success. [concrete] That is, variation in progress over time. This would inform expectations of future lumpiness, and thus potential for single projects to gain a substantial advantage.\nWhat would happen in a multipolar scenario?\n2.1 To what extent do values prevalent in the near-term affect the long run, in a competitive scenario? One could consider the role of values over history so far, or examine the ways in which the role of values may change in the future. One could consider the degree of instrumental convergence between actors (e.g. firms) today, and ask how that affects long-term outcomes. One might also consider whether non-values mental features might become locked in in a way that produces similar outcomes to particular values being influential. e.g. priors or epistemological methods that make a particular religion more likely\n2.2 What other factors in an initial scenario are likely to have long-lasting effects? For instance social institutions, standards, and locations for cities.\n2.3 What would AI’s value in a multipolar scenario? We can consider a range of factors that might influence AI values:\n\nThe nature of the transition to AI\nPrevailing institutions\nThe extentto which AI values become static, as compared to changing human values\nWhat values do humans want AI’s to have\nCompetitive dynamics\n\nThere is a common view that a multipolar scenario would be better in the long run than a hegemonic ‘unfriendly AI’. This project would inform that comparison.\n2.4 What are the prospects for human capital-holders? In a simple model, humans who own capital might become very wealthy during a transition to AI. On a classical economic picture, this would be a critical way for humans to influence the future. Is this picture plausible? Evaluate the considerations.\n\nWhat are the implications of capital holders doing no intellectual work themselves?\n[concrete] What does the existing literature on principal-agent problems suggest about multipolar AI scenarios?\n[concrete] Could humans maintain investments for significant periods of their lives, if during that time aeons of subjective time passes for faster moving populations? (i.e. is it plausible to expect to hold assets through millions of years of human history?) Investigate this via data on past expropriations\n\n2.5 Identify risks distinctive to a multipolar scenario, or which are much more serious in a multipolar scenario. 
\nFor instance:\n\nEvolutionary dynamics bring an outcome that nobody desired initially\nThe AIs are not well integrated into human society, and consequently cause or allow destruction to human society\nThe AIs—integrated or not—have different values, and most of the resources end up being devoted to those values\n\n2.6 Choose a specific multipolar scenario and try to predict its features in detail. [concrete] Base this on the basic changes we know would occur (e.g. minds could be copied like software), and our best understanding of social science.\nSpecific instances:\n\nBrain emulations (Robin Hanson is working on this in an upcoming book)\nBrain emulations, without the assumption that software minds are opaque\nOne can buy maximally efficient software for anything you want; everything else is the same\nAI is much like contemporary software (see 1.1).\n\n2.7 How would multipolar AI change the nature and severity of violent conflict? For instance, conflict between states.\n2.8 Investigate the potential for AI-enforced rights. Think about how to enforce property rights in a multipolar scenario, given advanced artificial intelligence to do it with, and the opportunity to prepare ahead of time. Can you create programs that just enforce deals between two parties, but do nothing else? If you create AI with this stable motivational structure, possessed by many parties, how does this change the way that agents that interact? How could such a system be designed?\n2.9 What is the future of democracy in such a scenario? In a world where resources can rapidly and cheaply be turned into agents, the existing assignment of a vote per person may be destructive and unstable.\n2.10 How does the lumpiness of economic outcomes vary as a function of the lumpiness of origins? For instance, if one team creates brain emulations years before others, would that group have and retain extreme influence?\n2.11 What externalities can we foresee, in computer security? That is, will people invest less (or more) in security than is socially optimal?\n2.12 What externalities can we foresee in AI safety generally?\n2.13 To what extent can artificial agents make more effective commitments, or more effectively monitor commitments, than humans? How does this change competitive dynamics? What proofs of properties of one’s source code may be available in the future?\nSafety in a multipolar scenario\n3.1 Assess the applicability of general AI safety insights to multipolar scenarios. [concrete] How useful are capability control methods, such as boxing, stunting, incentives, or tripwires in a multi-polar scenario? How useful are motivation selection methods, such as direct specification, domesticity, indirect normatively, augmentation in a multipolar scenario?\n3.2 Would selective pressures strongly favor the existence of goal-directed agents, in a multipolar scenario where a variety of AI designs are feasible?\n3.3 Develop a good model for the existing computer security phenomenon where nobody builds secure systems, though they can. [concrete] Model the long-run costs of secure and insecure systems, given distributions of attacker sophistication and possibility for incremental system improvement. Determine the likely situation various future scenarios, especially where computer security is particularly important.\n3.4 Do paradigms developed for nuclear security and biological weapons apply to AI in a multi-polar scenario? 
[concrete] For instance, could similar control and detection systems be used?\n3.5 What do the features of computer security systems tell us about how multipolar agents might compete?\n3.8 What policies could help create more secure computer systems? For instance, the onus being on owners of systems to secure them, rather than on potential attackers to avoid attacking.\n3.9 What innovations (either in AI or coinciding technologies) might reduce principal-agent problems? \n3.10 Apply ‘reliability theory’ to the problem of manufacturing trustworthy hardware. \n3.11 How can we transition in an economically viable way to hardware that we can trust is uncorrupted? At present, we must assume that the hardware is uncorrupted upon purchase, but this may not be sufficient in the long run.\n ", "url": "https://aiimpacts.org/multipolar-research-projects/", "title": "List of multipolar research projects", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-02-11T18:54:51+00:00", "paged_url": "https://aiimpacts.org/feed?paged=23", "authors": ["Katja Grace"], "id": "dc71cdc2b879774de0349702e87f36f3", "summary": []} {"text": "How AI timelines are estimated\n\nBy Katja Grace, 9 February 2015\nA natural approach to informing oneself about when human-level AI will arrive is to check what experts who have already investigated the question say about it. So we made this list of analyses that we could find.\nIt’s a short list, though the bar for ‘analysis’ was low. Blame for the brevity should probably be divided between our neglect of worthy entries and the world’s neglect of worthy research. Nonetheless we can say interesting things about the list.\nAbout half of the estimates are based on extrapolating hardware, usually to something like ‘human-equivalence’. A stylized estimate along these lines might run as follows:\n\nCalculate how much computation the brain does.\nExtrapolate the future costs for computing hardware (it goes downward, fast)\nFind the point in the computing hardware cost trajectory where brain-equivalent hardware (1) becomes pretty cheap, for some value of ‘pretty cheap’.\nGuestimate how long software will take once we have enough hardware; add this to the date produced in (3).\nThe date produced in (4) is your estimate for human-level AI.\n\nHow solid is this kind of estimate? Let us consider it in a bit of detail.\nHow much computation is a brain worth?\nIt is not trivial to estimate how much computation a brain does. A basic philosophical problem is that we don’t actually know what the brain is doing much, so it’s not obvious what part of its behavior is contributing to computation in any particular way. For instance (implausibly) if some of the neuron firing was doing computation, and the rest was just keeping the neurons prepared, we wouldn’t know. We don’t know how much detail of the neurons and their contents and surrounds is relevant to the information processing we are interested in.\nMoravec (2009) estimates how much computation the brain does by extrapolation from the retina. He estimates how much computing hardware would be needed for a computer to achieve the basic image processing that parts of the retina do, then multiplies this by how much heavier the brain is than parts of the retina. As he admits, this is a coarse estimate. I don’t actually have much idea how accurate you would expect this to be. 
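To make the structure of this style of estimate explicit, here is a minimal sketch of the stylized calculation above. Every number in it (the retina compute figure, the brain-to-retina scaling factor, the price-performance figures, the cost threshold, and the software lag) is an illustrative assumption, not a figure taken from Moravec or any of the analyses listed above.
```python
# A minimal sketch of the stylized hardware-extrapolation estimate described above.
# All inputs are illustrative assumptions, not endorsed figures.
import math

# (1) Estimate brain computation, Moravec-style, by scaling up from the retina.
retina_flops = 1e9             # assumed compute needed to match the retina's image processing
brain_to_retina_ratio = 1e5    # assumed brain mass relative to the relevant retinal tissue
brain_flops = retina_flops * brain_to_retina_ratio

# (2) Extrapolate hardware price-performance.
flops_per_dollar_now = 1e9     # assumed current price-performance
doubling_time_years = 1.5      # assumed doubling time of price-performance

# (3) Find when brain-equivalent hardware becomes 'pretty cheap'.
budget_dollars = 1e3           # the chosen 'pretty cheap' threshold
shortfall = brain_flops / (flops_per_dollar_now * budget_dollars)
years_to_hardware = max(0.0, math.log2(shortfall) * doubling_time_years)

# (4) Guess how long software takes once hardware suffices, and (5) add it on.
software_lag_years = 10
print(f"Brain-equivalent hardware at this budget in ~{years_to_hardware:.0f} years; "
      f"estimated date for human-level AI ~{years_to_hardware + software_lag_years:.0f} years out.")
```
Writing it out this way at least makes it obvious which inputs the final date is sensitive to, and by how much.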
Some obvious possible sources of inaccuracy are the retina being unrepresentative of the brain (as it appears to be for multiple reasons), the retina being capable of more than the processing being replicated by a computer, and mass being poorly correlated with capacity for computation (especially across tissue which is in different parts of the body).\nOne might straightforwardly improve upon this estimate by extrapolating from other parts of the brain in a similar way, or from calculating how much information could feasibly be communicated in patterns of neuron firing (assuming these were the main components contributing to relevant computation).\nThe relationship between hardware and software\nSuppose that you have accurately estimated how much computation the brain does. The argument above treats this as a lower bound on when human-level intelligence will arrive. This appears to rest on a model in which there is a certain level of hardware and a certain level of software that you need, and when you have them both you will have human-level AI.\nIn reality, the same behavior can often be achieved with different combinations of hardware and software. For instance (as shown in figure 1), you can achieve the same Elo in Go using top of the range software (MoGo) and not much hardware (enough for 64 simulations) or weak software (FatMan) and much more hardware (enough for 1024 simulations, which probably take less hardware each than those used for the sophisticated program). The horizontal axis is doublings of hardware, but FatMan begins with much more hardware.\nFigure 1: Performance of strong Mogo and weak FatMan with successive doublings in hardware. Mogo starts out doing 64 simulations, and FatMan 1024. The horizontal axis is doublings of hardware, and the vertical axis is performance measured in Elo.\nIn Go we thus have a picture of indifference curves something like this:\n\n \nGo is not much like general intelligence, but the claim is that software in general has this character. If this is true, it suggests that the first human-level AI designed by human engineers might easily use much more or much less hardware than the human brain. This is illustrated in figure 3. Our trajectory of software and hardware progress could run into the frontier of human-level ability above or below human-level. If our software engineering is more sophisticated than that of evolution at the point where we hit the frontier, we would reach human-level AI with much less than ‘human-equivalent’ hardware.\nFigure 3: two possible trajectories for human hardware and software progress, which achieve human-level intelligence (the curved line) with far more and far less hardware than humans require.\nAs an aside, if we view the human brain as a chess playing machine, an analog to the argument outlined earlier in the post suggests that we should achieve human-level chess playing at human-equivalent hardware. We in fact achieved it much earlier, because indeed humans can program a chess player more efficiently than evolution did when it programmed humans. This is obviously in part because the human brain was not designed to play chess, and is mostly for other things. However, it’s not obvious that the human brain was largely designed for artificial intelligence research either, suggesting economic dynamite such as this might also arrive without ‘human-level’ hardware.\nI don’t really know how good current human software engineering is compared to evolution, when they set their minds to the same tasks. 
I don’t think I have particularly strong reason to think they are about the same. Consequently, it seems I don’t seem to have strong reason to expect hardware equivalent to the brain is a particularly important benchmark (though if I’m truly indifferent between expecting human engineers to be better or worse, human-level hardware is indeed my median estimate).\nHuman equivalent hardware might be more important however: I said nothing about how hardware trades off against software. If the frontier of human-level hardware/software combinations is more like that in figure 4 below than figure 3, a very large range of software sophistication corresponds to human-level AI occurring at roughly similar levels of hardware, which means at roughly similar times. If this is so, then the advent of human-level hardware is a good estimate for when AI will arrive, because AI would arrive around then for a large range of  levels of software sophistication.\nFigure 4: If it takes a lot of software sophistication to replace a small amount of hardware, the amount of hardware that is equivalent to a human brain may be roughly as much as is needed for many plausible designs.\nThe curve could also look opposite however, with the level of software sophistication being much more important than available hardware. I don’t know of strong evidence either way, so for now the probability of human-level AI at around the time we hit human-level hardware only seems moderately elevated.\nThe shape of the hardware/software frontier we have been discussing could be straightforwardly examined for a variety of software, using similar data to that presented for Go above. Or we might find that this ‘human-level frontier’ picture is not a useful model. The general nature of such frontiers seem highly informative about the frontier for advanced AI. I have not seen such data for anything other than parts of it for chess and Go. If anyone else is aware of such a thing, I would be interested to see it.\nCosts\nWill there be human-level AI when sufficient hardware (and software) is available at the cost of a supercomputer? At the cost of a human? At the cost of a laptop?\nPrice estimates used in this kind of calculation often seem to be chosen to be conservative—low prices so that the audience can be confident that an AI would surely be built if it were that cheap. For instance, when will human-level hardware be available for $1,000? While this is helpful for establishing an upper bound on the date, it does not seem plausible as a middling estimate. If human-level AI could be built for a million dollars instead of a thousand, it would still be done in a flash, and this corresponds to a difference of around fifteen years.\nThis is perhaps the easiest part of such an estimate to improve, with a review of how much organizations are generally willing to spend on large, valuable, risky projects.\n***\nIn sum, this line of reasoning seems to be a reasonable start. As it stands, it probably produces wildly inaccurate estimates, and appears to be misguided in its implicit model of how hardware and software relate to each other. However it is a good beginning that could be incrementally improved into a fairly well informed estimate, with relatively modest research efforts. 
Which is just the kind of thing one wants to find among previous efforts to answer one’s question.\n \n \n \n ", "url": "https://aiimpacts.org/how-ai-timelines-are-estimated/", "title": "How AI timelines are estimated", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-02-09T15:00:32+00:00", "paged_url": "https://aiimpacts.org/feed?paged=23", "authors": ["Katja Grace"], "id": "63e3446f5174bac0cb9f8f66770fafe2", "summary": []} {"text": "At-least-human-level-at-human-cost AI\n\nBy Katja Grace, 7 February 2015\nOften, when people are asked ‘when will human-level AI arrive?’ they suggest that it is a meaningless or misleading term. I think they have a point. Or several, though probably not as many as they think they have.\nOne problem is that if the skills of an AI are developing independently at different rates, then at the point that an AI eventually has the full kit of human skills, they also have a bunch of skills that are way past human-level. For instance, if a ‘human-level’ AI were developed now, it would be much better than human-level at arithmetic.\nThus the term ‘human-level’ is misleading because it invites an image of an AI which competes on even footing with the humans, rather than one that is at least as skilled as a human in every way, and thus what we would usually think of as extremely superhuman.\nAnother problem is that the term is used to mean multiple things, which then get confused with each other. One such thing is a machine which replicates human cognitive behavior, at any cost. Another is a machine which replicates human cognitive behavior at the price of a human. The former could plausibly be built years before the latter, and should arguably not be nearly as economically exciting. Yet often people imagine the two events coinciding, seemingly for lack of conceptual distinction.\nShould we use different terms for human level at human cost, and human level at any cost? Should we have a different term altogether, which evokes at-least-human capabilities? I’ll leave these questions to you. For now we just made a disambiguation page.\nAn illustration I made for the Superintelligence Reading Group\n ", "url": "https://aiimpacts.org/at-least-human-level-at-human-cost-ai/", "title": "At-least-human-level-at-human-cost AI", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-02-07T14:00:15+00:00", "paged_url": "https://aiimpacts.org/feed?paged=23", "authors": ["Katja Grace"], "id": "53a90b915c801d5f9e36dedace4521bc", "summary": []} {"text": "Penicillin and syphilis\n\nBy Katja Grace, 2 February 2015\nPenicillin was a hugely important discovery. But was it a discontinuity in the normal progression of research, or just an excellent discovery which followed a slightly less excellent discovery, and so on? \nThere are several senses in which penicillin might have represented a discontinuity. Perhaps its discovery was a huge leap in effectiveness, saving lives at a rate nobody thought possible. Or it might have been dramatically cheaper than expected. Or it could have heralded a step up in life expectancy, or a step down in disease prevalence.\nWe investigated this, to add to our list of cases of discontinuous technological progress, or alternatively to our new list of things we have investigated to potentially add to that list, but didn’t.\nTo make life easier, we focused on penicillin as used to remedy syphilis in particular. Our only conscious reason for choosing syphilis was that penicillin is used to cure it. 
It is also a disease against which penicillin was considered an important step forward.\nFirst, was penicillin a huge step forward in effectiveness, compared to the usual progress? In the field of syphilis prevention, the recent competition was actually tough. A treatment for syphilis developed thirty years earlier was literally nicknamed ‘magic bullet’ and spawned a nobel prize as well as a frightening looking movie. More quantitatively, existing treatments at the time of penicillin’s introduction in the early 40s were apparently ‘successful’ for about 90% of patients who took them, and it doesn’t look like penicillin did better initially. So probably penicillin didn’t jump ahead on effectiveness.\nHowever a big difference appears to be hidden within ‘patients who took them’. The ‘magic bullet’ was an arsenic compound, unstable in air, needing frequent injections for weeks, and producing risk to life and limb. According to the only paper I found with figures on this, about three quarter of patients ‘defected’ before receiving a ‘minimum curative dose’ of an arsenic and bismuth treatment, which suggests that while earlier cures were not obviously worse than dying of syphilis, they were also not obviously better (or else perhaps people dropped out when the severity of their side effects prohibited them from continuing). Penicillin allowed virtually everyone to get a minimum curative dose. So it’s possible that penicillin represented a discontinuity in this more inclusive success measure, but alas we don’t have the data to check. If penicillin really treated four times as many patients as recent precursors, and both cured most of them, and many would die otherwise, and little progress had happened on this measure since salvarsan, then penicillin would be worth at least thirty years of progress, and perhaps much more. \n(I hear you wondering, if this magic bullet was so much better than its precursors, what on earth were the precursors? Breathing mercury was one. From before the 16th century until the discovery of salvarsan, ‘mercury treatments’ were common. Early mercury treatments included rubbing mercury into one’s body, and inhaling the mercury fumes, while later ones involved more sophisticated injections of mercury compounds).\nI should point out that I’m confused by these apparently high defection rates. Syphilis has an untreated mortality rate of 8-58%, so defection from treatment would appear to be living pretty dangerously. This suggests I’m wrong about something, just so you know. Nonetheless, being right about everything doesn’t seem cost-effective here.\nSo far it seems like penicillin’s main advantage was in being less costly (including costs such as suffering and complications). Was this an abrupt change? It is hard to get figures for the inclusive costs of penicillin and its precursors, let alone a longer term trend to compare to. We can say that penicillin took around eight days at the very start, while other treatments took more than twenty. And penicillin seems to have been a lot less dangerous. However qualitatively speaking, salvarsan also sounds a lot less dangerous than mercury treatments. There were also further improvements in safety prior to penicillin, such as in neosalvarsan. 
So guessing qualitatively, penicillin might have been a few decades worth of previous progress in reducing costs, but probably not much more.\nEven at the height of syphilis, the disease was not common enough that resolving all of it in one year would produce a visible change to life expectancy, so we shan’t look at that. \nDid syphilis rates decline abruptly? Both having syphilis and dying from it became much less common during the 1940s, which is probably due in large part to antibiotics (see figures 1 and 2). Deaths from syphilis declined by 98%. However, as you can see, these were not abrupt changes. Things got modestly better every year for decades.\nFigure 1: Syphilis infection in the US\nFigure 2: syphilis deaths declined massively in the middle of last century.\nIn sum, it seems unlikely that there was abrupt progress on drug effectiveness conditional on completing the treatment, or on how many people had or died from syphilis. There may have been abrupt progress in overall costs or in drug effectiveness conditional on being offered the treatment, but these have been too hard to evaluate here. \nSo for now, we add it to the list of things we checked, but not the list of things that were abrupt. The list of things we checked is part of a larger page about this project, in case you are curious. We also looked into the Haber process recently, but didn’t think it involved much discontinuity. We might blog about it at some point.\nSome other interesting facts about syphilis and penicillin I learned:\n\nA pre-penicillin treatment for neurosyphilis was to give the patient malaria, because malaria mitigated the syphilis, and was considered more treatable than syphilis.\nEarly on, penicillin was so valuable that doctors recycled what hadn’t been metabolized by extracting it from patients’ urine.\nThere is a whole wikipedia page of previous occasions in history where people observed that mould prevented bacterial infection, more or less, but apparently didn’t follow up that much.\n", "url": "https://aiimpacts.org/penicillin-and-syphilis/", "title": "Penicillin and syphilis", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-02-02T19:21:52+00:00", "paged_url": "https://aiimpacts.org/feed?paged=23", "authors": ["Katja Grace"], "id": "336ee0980061efc8e283b646761446fd", "summary": []} {"text": "Discontinuous progress investigation\n\nPublished Feb 2, 2015; last substantially updated April 12 2020\nWe have collected cases of discontinuous technological progress to inform our understanding of whether artificial intelligence performance is likely to undergo such a discontinuity. This page details our investigation.\nWe know of ten events that produced a robust discontinuity in progress equivalent to more than a century at previous rates in at least one interesting metric and 53 events that produced smaller or less robust discontinuities.\nDetails\nMotivations\nWe are interested in learning whether artificial intelligence is likely to see discontinuous progress in the lead-up to human-level capabilities, or to produce discontinuous change in any other socially important metrics (e.g. percent of global wealth possessed by a single entity, economic value of hardware). 
We are interested because we think this informs us about the plausibility of different future scenarios and about which research and other interventions are best now, and also because it is a source of disagreement, and so perhaps fruitful for resolution.1\nWe seek to answer this question by investigating the prevalence and nature of discontinuities in other technological progress trends. The prevalence can then act as a baseline for our expectations about AI, which can be updated with any further AI-specific evidence, including that which comes from looking at the nature of other discontinuities (for instance, whether they arise in circumstances that are predicted by the arguments that are made for predicting discontinuous progress in AI). \nIn particular, we want to know:\nHow common are large discontinuities in metrics related to technological progress?Do any factors predict where such discontinuities will arise? (For instance, is it true that progress in a conceptual endeavor is more likely to proceed discontinuously? If there have been discontinuities in progress on a metric in the past, are further discontinuities more likely?) \nAs a secondary goal, we are interested in learning about the circumstances that have surrounded discontinuous technological change in the past, insofar as it may inform our expectations about the consequences of discontinuous progress in AI, should it happen.\nMethods\nMain article: methodology for discontinuous progress investigation.\nTo learn about the prevalence and nature of discontinuities in technological progress, we:\nSearched for potential examples of discontinuous progress (e.g. ‘Eli Whitney’s cotton gin’) via our own understanding, online search, and suggestions from others.2Chose specific metrics related to these potential examples (e.g. ‘cotton ginned per person per day’, ‘value of cotton ginned per cost’) and found historic data on progress on those metrics (usually in conjunction with choosing metrics, since metrics for which we can find data are much preferred). Some datasets we found already formed in one place, while others we collected ourselves from secondary sources.Defined a ‘rate of past progress’ throughout each historic dataset (e.g. if the trend is broadly flat then gets steeper, we decide whether to call this exponential progress, or two periods of linear growth.)Measured the discontinuity at each datapoint in each trend by comparing the progress at the point to the expected progress at that point based on the last datapoint and the rate of past progress (e.g. if the last datapoint five years ago was 600 units, and progress had been going at two units per year, and now a development took it to 800 units, we would calculate 800 units – 600 units = 200 units of progress = 100 years of progress in 5 years, for a 95 year discontinuity.)Noted any discontinuities of more than ten years (‘moderate discontinuities’), and more than one hundred years (‘large discontinuities’)Judged subjectively whether the discontinuity was a clear divergence from the past trend (i.e. the past trend was well-formed enough that the new point actually seemed well outside of plausible continuations of it).3Noted anything interesting about the circumstances of each discontinuity (e.g. 
the type of metric it was in, the events that appeared to lead to the discontinuity, the patterns of progress around it.)\nNote that this is not an attempt to rigorously estimate the frequency of discontinuities in arbitrary trends, since we have not attempted to select arbitrary trends. We have instead selected trends we think might contain large discontinuities. Given this, it may be used as a loose upper bound on the frequency of discontinuities in similar technological trends.\nIt is likely that there are many minor errors in this collection of data and analysis, based on the rate at which we have found and corrected them, and the unreliability of sources used. \nDefinitions\nThroughout, we use:\nDiscontinuity: abrupt progress far above what one would have expected by extrapolation, measured in terms of how many years early the progress appeared relative to its expected date.Moderate discontinuity: 10-100 years of progress at previous rates occurred on one occasionLarge discontinuity: at least 100 years of progress at previous rates occurred on one occasionSubstantial discontinuity: a moderate or large discontinuityRobust discontinuity: a discontinuity judged to involve a clear divergence from the past trend\nSummary figures\nWe collected 21 case studies of potentially discontinuous technological progress (see Case studies below) and investigated 38 trends associated with them.20 trends had a substantial discontinuity, and 15 had a large discontinuity.4 We found 88 substantial discontinuities, 39 of them large.These discontinuities were produced by 63 distinct eventsTen events produced robust large discontinuities in at least one metric.\nCase studies\nThis is a list of areas of technological progress which we have tentatively determined to either involve discontinuous technological progress, or not. Note that we largely investigate cases that looked likely to be discontinuous.\nShip size\nMain article: Historic trends in ship size\nTrends for ship tonnage (builder’s old measurement) and ship displacement for Royal Navy first rate line-of-battle ships saw eleven and six discontinuities of between ten and one hundred years respectively during the period 1637-1876, if progress is treated as linear or exponential as usual. There is a hyperbolic extrapolation of progress such that neither measurement sees any discontinuities of more than ten years.\nWe do not have long term data for ship size in general, however the SS Great Eastern seems to have produced around 400 years of discontinuity in both tonnage (BOM) and displacement if we use Royal Navy ship of the line size as a proxy, and exponential progress is expected, or 11 or 13 in the hyperbolic trend.\nFigure 1a: Record tonnages for Royal Navy ships of the line\nFigure 1b: Ship weight (displacement) over time, Royal Navy ships of the line and the Great Eastern, a discontinuously large civilian ship. 
The largest ship in the world three years prior to the Great Eastern was around 4% larger than the Ship of the Line of that time in this figure, so we know that the overall largest ship trend cannot have been much steeper than the Royal Navy ship of the line trend shown.\nImage recognition\nMain article: Effect of AlexNet on historic trends in image recognition\nAlexNet did not represent a greater than 10-year discontinuity in fraction of images labeled incorrectly, or log or inverse of this error rate, relative to progress in the past two years of competition data.\nFigure 2: Error rate (%) of ImageNet competitors from 2010 – 2012\nTransatlantic passenger travel\nMain article: Historic trends in transatlantic passenger travel\nThe speed of human travel across the Atlantic Ocean has seen at least seven discontinuities of more than ten years’ progress at past rates, two of which represented more than one hundred years’ progress at past rates: Columbus’ second journey, and the first non-stop transatlantic flight.\nFigure 3a: Historical fastest passenger travel across the Atlantic (speeds averaged over each transatlantic voyage)\n Figure 3b: Previous figure, shown since 1730\nTransatlantic message speed\nMain article: Historic trends in transatlantic message speed\nThe speed of delivering a short message across the Atlantic Ocean saw at least three discontinuities of more than ten years before 1929, all of which also were more than one thousand years: a 1465-year discontinuity from Columbus’ second voyage in 1493, a 2085-year discontinuity from the first telegraph cable in 1858, and then a 1335-year discontinuity from the second telegraph cable in 1866.\n Figure 4: Average speed for message transmission across the Atlantic. \nLong range military payload delivery\nMain article: Historic trends in long range military payload delivery\nThe speed at which a military payload could cross the Atlantic ocean contained six greater than 10-year discontinuities in 1493 and between 1841 and 1957: \nDateMode of transportKnotsDiscontinuity size(years of progress at past rate)1493Columbus’ second voyage5.814651884Oregon18.6101919WWI Bomber (first non-stop transatlantic flight)1063511938Focke-Wulf Fw 200 Condor174191945Lockheed Constellation288251957R-7 (ICBM)~10,000~500\nFigure 5: Historic speeds of sending hypothetical military payloads across the Atlantic Ocean\nBridge spans\nMain article: Historic trends in bridge span length\nWe measure eight discontinuities of over ten years in the history of longest bridge spans, four of them of over one hundred years, five of them robust as to slight changes in trend extrapolation. \nFigure 6: Record bridge span lengths for five bridge types since 1800\nLight intensity\nMain article: Historic trends in light intensity\nMaximum light intensity of artificial light sources has discontinuously increased once that we know of: argon flashes represented roughly 1000 years of progress at past rates.\nFigure 7: Light intensity trend since 1800 (longer trend available)\nBook production\nMain article: Historic trends in book production\nThe number of books produced in the previous hundred years, sampled every hundred or fifty years between 600AD to 1800AD contains five greater than 10-year discontinuities, four of them greater than 100 years. The last two follow the invention of the printing press in 1492. 
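As an aside on where the discontinuity sizes quoted throughout these case studies come from, here is a rough sketch of the calculation described in the Methods section above, for trends treated as linear and as exponential. The worked numbers are the hypothetical 600-to-800-unit example from that section plus an invented exponential case, not figures from the book production data.
```python
# Rough sketch of the discontinuity-size calculation described in Methods above.
import math

def discontinuity_linear(prev_value, new_value, rate_per_year, years_elapsed):
    """Years 'early' a new data point arrives, given a linear past rate of progress."""
    return (new_value - prev_value) / rate_per_year - years_elapsed

def discontinuity_exponential(prev_value, new_value, doubling_time, years_elapsed):
    """The same idea when past progress is treated as exponential."""
    return math.log2(new_value / prev_value) * doubling_time - years_elapsed

# The hypothetical example from the Methods section: 600 units five years ago,
# a past rate of 2 units/year, and a jump to 800 units -> a 95-year discontinuity.
print(discontinuity_linear(600, 800, rate_per_year=2, years_elapsed=5))

# An invented exponential case: a metric that had been doubling every 100 years
# jumps by a factor of 8 over a 50-year gap -> a 250-year discontinuity.
print(discontinuity_exponential(1, 8, doubling_time=100, years_elapsed=50))
```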
\nThe real price of books dropped precipitously following the invention of the printing press, but the longer term trend is sufficiently ambiguous that this may not represent a substantial discontinuity.\nThe rate of progress of book production changed shortly after the invention of the printing press, from a doubling time of 104 years to 43 years.\nFigure 8a: Total book production in Western Europe\nFigure 8b: Real price of books in England\nTelecommunications performance\nMain article: Historic trends in telecommunications performance\nThere do not appear to have been any greater than 10-year discontinuities in telecommunications performance, measured as: \nbandwidth-distance product for all technologies 1840-2015bandwidth-distance product for optical fiber 1975-2000total bandwidth across the Atlantic 1956-2018\nRadio does not seem likely to have represented a discontinuity in message speed.\nFigure 9a: Growth in bandwidth-distance product across all telecommunications during 1840-2015 from Agrawal, 20165\nFigure 9b: Bandwidth-distance product in fiber optics alone, from Agrawal, 20166 (Note: 1 Gb = 10^9 bits) \nFigure 9c: Transatlantic cable bandwidth of all types. Pre-1980 cables were copper, post-1980 cables were optical fiber.\nCotton gins\nMain article: Effect of Eli Whitney’s cotton gin on historic trends in cotton ginning\nWe estimate that Eli Whitney’s cotton gin represented a 10 to 25 year discontinuity in pounds of cotton ginned per person per day, in 1793. Two innovations in 1747 and 1788 look like discontinuities of over a thousand years each on this metric, but these could easily stem from our ignorance of such early developments. We tentatively doubt that Whitney’s gin represented a large discontinuity in the cost per value of cotton ginned, though it may have represented a moderate one.\nFigure 10: Claimed cotton gin productivity figures, 1720 to modern day, coded by credibility and being records. The last credible best point before the modern day is an improved version of Whitney’s gin, two years after the original (the original features in the two high non-credible claims slightly earlier).\nAltitude\nMain article: Historic trends in altitude\nAltitude of objects attained by man-made means has seen six discontinuities of more than ten years of progress at previous rates since 1783, shown below.\nYearHeight (m)Discontinuity (years)Entity178440001032Balloon180372801693Balloon191842,300227Paris gun194285,000120V-2 Rocket1944174,60011V-2 Rocket1957864,000,00035Pellets (after one day)\nFigure 11: Post-1750 altitudes of various objects, including many non-records. Whether we collected data for non-records is inconsistent, so this is not a complete picture of progress within object types. See image in detail here.\nSlow light\nMain article: Historic trends in slow light technology\nGroup index of light appears to have seen discontinuities of 22 years in 1995 from Coherent Population Trapping (CPT) and 37 years in 1999 from EIT (condensate). Pulse delay of light over a short distance may have had a large discontinuity in 1994 but our data is not good enough to judge. After 1994, pulse delay does not appear to have seen discontinuities of more than ten years. \n\nFigure 12: Progress in pulse delay and group index. 
“Human speed” shows the rough scale of motion familiar to humans.\nParticle accelerators\nMain article: Historic trends in particle accelerator performance\nNone of particle energy, center-of-mass energy nor Lorentz factor achievable by particle accelerators appears to have undergone a discontinuity of more than ten years of progress at previous rates. \nFigure 13a: Particle energy in eV over time\nFigure 13b: Center-of-mass energy in eV over time\nFigure 13c: Lorentz factor (gamma) over time.\nPenicillin on syphilis\nMain article: Penicillin and historic syphilis trends\nPenicillin did not precipitate a discontinuity of more than ten years in deaths from syphilis in the US. Nor were there other discontinuities in that trend between 1916 and 2015. \nThe number of syphilis cases in the US also saw steep decline but no substantial discontinuity between 1941 and 2008.\nOn brief investigation, the effectiveness of syphilis treatment and inclusive costs of syphilis treatment do not appear to have seen large discontinuities with penicillin, but we have not investigated either thoroughly enough to be confident.\n\nFigure 14a: Syphilis—Reported Cases by Stage of Infection, United States, 1941–2009, according to the CDC7\nFigure 14b: Syphilis and AIDS mortality rates in the US during the 20th century.8 \nNuclear weapons\nMain article: Effect of nuclear weapons on historic trends in explosives\nNuclear weapons constituted a ~7 thousand year discontinuity in energy released per weight of explosive (relative effectiveness).\nNuclear weapons do not appear to have clearly represented progress in the cost-effectiveness of explosives, though the evidence there is weak.\nFigure 15: Relative effectiveness of explosives, up to early nuclear bomb (note change to log scale) \nHigh temperature superconductors\nMain article: Historic trends in the maximum superconducting temperature\nThe maximum superconducting temperature of any material up to 1993 contained four greater than 10-year discontinuities: A 14-year discontinuity with NbN in 1941, a 26-year discontinuity with LaBaCuO4 in 1986, a 140-year discontinuity with YBa2Cu3O7 in 1987, and a 10-year discontinuity with BiCaSrCu2O9 in 1987.\nYBa2Cu3O7 superconductors seem to correspond to a marked change in the rate of progress of maximum superconducting temperature, from a rate of progress of .41 Kelvin per year to a rate of 5.7 Kelvin per year.\nFigure 16: Maximum superconducting temperate by material over time through 2015\nLand speed records\nMain article: historic trends in land speed records\nLand speed records did not see any greater-than-10-year discontinuities relative to linear progress across all records. Considered as several distinct linear trends it saw discontinuities of 12, 13, 25, and 13 years, the first two corresponding to early (but not first) jet-propelled vehicles.\nThe first jet-propelled vehicle just predated a marked change in the rate of progress of land speed records, from a recent 1.8 mph / year to 164 mph / year.\nFigure 17: Historic land speed records in mph over time. Speeds on the left are an average of the record set in mph over 1 km and over 1 mile. The red dot represents the first record in a cluster that was from a jet propelled vehicle. 
The discontinuities of more than ten years are the third and fourth turbojet points, and the last two points.\nChess AI\nMain article: Historic trends in chess AI\nThe Elo rating of the best chess program measured by the Swedish Chess Computer Association did not contain any greater than 10-year discontinuities between 1984 and 2018. \n Figure 18: Elo ratings of the best program on SSDF at the end of each year.\nFlight airspeed\nMain article: Historic trends in flight airspeed records\nFlight airspeed records between 1903 and 1976 contained one greater than 10-year discontinuity: a 19-year discontinuity corresponding to the Fairey Delta 2 flight in 1956.\nThe average annual growth in flight airspeed markedly increased with the Fairey Delta 2, from 16mph/year to 129mph/year.\nFigure 19: Flight airspeed records over time\nStructure heights\nMain article: Historic trends in structure heights\nTrends for tallest ever structure heights, tallest ever freestanding structure heights, tallest existing freestanding structure heights, and tallest ever building heights have each seen 5-8 discontinuities of more than ten years. These are:\nDjoser and Meidum pyramids (~2600BC, >1000 year discontinuities in all structure trends)Three cathedrals that were shorter than the all-time record (Beauvais Cathedral in 1569, St Nikolai in 1874, and Rouen Cathedral in 1876, all >100 year discontinuities in current freestanding structure trend)Washington Monument (1884, >100 year discontinuity in both tallest ever structure trends, but not a notable discontinuity in existing structure trend)Eiffel Tower (1889, ~10,000 year discontinuity in both tallest ever structure trends, 54 year discontinuity in existing structure trend)Two early skyscrapers: the Singer Building and the Metropolitan Life Tower (1908 and 1909, each >300 year discontinuities in building height only)Empire State Building (1931, 19 years in all structure trends, 10 years in buildings trend)KVLY-TV mast (1963, 20 year discontinuity in tallest ever structure trend)Taipei 101 (2004, 13 year discontinuity in building height only)Burj Khalifa (2009, ~30 year discontinuity in both freestanding structure trends, 90 year discontinuity in building height trend)\nFigure 20a: All-time record structure heights, long term history\nFigure 20b: All-time record structure heights, recent history\nFigure 20c: All-time record freestanding structure heights, long term history\nFigure 20d: All-time record freestanding structure heights, recent history\nFigure 20e: At-the-time record freestanding structure heights, long term history\nFigure 20f: At-the-time record freestanding structure heights, recent history\nFigure 20g: All-time record building heights, longer term history\nFigure 20h: All-time record building heights, longer term history\nBreech loading rifles\nMain article: Effects of breech loading rifles on historic trends in firearm progress\nBreech loading rifles do not appear to have represented a discontinuity in firing rate of guns, since it appears that other guns had a similar firing rate already. 
It remains possible that breech loading rifles represent a discontinuity in another related metric.\nIncomplete case studies\nThis is a list of cases we have partially investigated, but insufficiently to include in this page.\nExtended observations\nThis spreadsheet contains summary data and statistics about the entire set of case studies, including all calculations for findings that follow.\nPrevalence of discontinuities\n\nWe investigated 38 trends in around 21 broad areas9\nOf the 38 trends that we investigated, we found 20 to contain at least one substantial discontinuity, and 15 to contain at least one large discontinuity. (Note that our trends were selected for being especially likely to contain discontinuities, so this is something like an upper bound on their frequency in trends in general. However some trends we investigated for fairly limited periods, so these may have contained more discontinuities than we found.)\nTrends we investigated had in expectation 2.3 discontinuities each, including 1 large discontinuity each, and 0.37 large robust discontinuities each (that we found–we did not necessarily investigate trends for the entirety of their history).\nWe found 88 substantial discontinuities, 20 of them robust, 14 of them large and robust.\nThese discontinuities were produced by 63 distinct events, 29 of them producing large discontinuities.\nThe robust large discontinuities were produced by 10 events\n32% of trends we investigated saw at least one large, robust discontinuity (though note that trends were selected for being discontinuous, and were a very non-uniform collection of topics, so this could at best inform an upper bound on how likely an arbitrary trend is to have a large, robust discontinuity somewhere in a chunk of its history)\n53% of trends saw any discontinuity (including smaller and non-robust ones), and in expectation a trend saw more than two of these discontinuities.\nOn average, each trend had 0.001 large robust discontinuities per year, or 0.002 for those trends with at least one at some point10\nOn average 1.4% of new data points in a trend make for large robust discontinuities, or 4.9% for trends which have one.\nOn average 14% of total progress in a trend came from large robust discontinuities (or 16% of logarithmic progress), or 38% among trends which have at least one.\nAcross all years of any metric we considered, the rate of discontinuities/year was around 0.02% (though note that this is heavily influenced by how often you consider thousands of years with poor data at the start).\n\nSome fuller related data, from spreadsheet:\n\n\n\n\n\n\n\n\n\n \nAll discontinuities\nLarge\nRobust\nRobust large\n\n\nMetrics checked\n38\n38\n38\n38\n\n\nDiscontinuity count\n88\n39\n20\n14\n\n\nTrends exhibiting that type of discontinuity\n20\n15\n16\n12\n\n\nTrends with 2+  discontinuities of that type\n14\n10\n4\n2\n\n\nP(discontinuity|trend)\n0.53\n0.39\n0.42\n0.32\n\n\nE(discontinuities per trend)\n2.3\n1.0\n0.5\n0.4\n\n\nP(multiple discontinuities|trend)\n0.37\n0.26\n0.11\n0.05\n\n\nP(multiple discontinuities|trend with at least one)\n0.70\n0.67\n0.25\n0.17\n\n\nP(multiple discontinuities|trend with at least one, and enough search to find more)\n0.78\n0.77\n0.29\n0.20\n\n\n\nNature of discontinuous metrics\nWe categorized each metric as one of:\n\n‘technical’: to do with basic physical parameters (e.g. light intensity, particle energy in particle accelerators)\n‘product’: to do with usable goods or services (e.g. 
cotton ginned per person per day, size of largest ships, height of tallest structures)\n‘industry’: to do with an entire industry rather than individual items (e.g. total production of books)\n‘societal’: to do with society at large (e.g. syphilis mortality)\n\nWe also categorized each metric as one of:\n\n‘feature’: a characteristic that is good, but not close to encompassing the purpose of most related efforts (e.g. ship size, light intensity)\n‘performance proxy’: approximates the purpose of the endeavor (e.g. cotton ginned per person per day, effectiveness of syphilis treatment)\n‘value proxy’: approximates the all-things-considered value of the endeavor (e.g. real price of books, cost-effectiveness of explosives)\n\nMost metrics fell into ‘product feature’ (16) ‘technical feature’ (8) or ‘product performance proxy’ (6), with the rest (8) spread across the categories.\nHere is what these trends are like (from this spreadsheet):\n\nproduct featuretechnical featureproduct performance proxyrare categoriesAll discontinuitiesnumber of discontinuities73825number of trends16868number of trends with discontinuities13421discontinuities per trend4.61.00.30.6fraction of trends with discontinuity0.810.500.330.13Large discontinuitiesnumber of large discontinuities32304number of trends16868number of trends with large discontinuities11301large discontinuities per trend2.00.40.00.5fraction of trends with large discontinuity0.690.380.000.13\n\nPrimary authors: Katja Grace, Rick Korzekwa, Asya Bergal, Daniel Kokotajlo.\nThanks to many other researchers whose work contributed to this project. \nThanks to Stephen Jordan, Jesko Zimmermann, Bren Worth, Finan Adamson, and others for suggesting potential discontinuities for this project in response to our 2015 bounty, and to many others for suggesting potential discontinuities since, especially notably Nuño Sempere, who conducted a detailed independent investigation into discontinuities in ship size and time to circumnavigate the world11. \nNotes", "url": "https://aiimpacts.org/discontinuous-progress-investigation/", "title": "Discontinuous progress investigation", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-02-02T19:11:36+00:00", "paged_url": "https://aiimpacts.org/feed?paged=23", "authors": ["Katja Grace"], "id": "0b07019c174a4e8465cf1a01be533f58", "summary": []} {"text": "List of Analyses of Time to Human-Level AI\n\nThis is a list of most of the substantial analyses of AI timelines that we know of. It also covers most of the arguments and opinions of which we are aware.\nDetails\nThe list below contains substantial publically available analyses of when human-level AI will appear. To qualify for the list, an item must provide both a claim about when human-level artificial intelligence (or a similar technology) will exist, and substantial reasoning to support it. ‘Substantial’ is subjective, but a fairly low bar with some emphasis on detail, novelty, and expertise. 
We exclude arguments that AI is impossible, though they are technically about AI timelines.\nList\n\nGood, Some future social repercussions of computers (1970) predicts 1993 give or take a decade, based roughly on the availability of sufficiently cheap, fast, and well-organized electronic components, or on a good understanding of the nature of language, and on the number of neurons in the brain.\nMoravec, Today’s Computers, Intelligent Machines and Our Future (1978) projects that ten years later hardware equivalent to a human brain would be cheaply available, and that if software development ‘kept pace’ then machines able to think as well as a human would begin to appear then.\nSolomonoff, The Time Scale of Artificial Intelligence: Reflections on Social Effects (1985) estimates one to fifty years to a general theory of intelligence, then ten or fifteen years to a machine with general problem solving capacity near that of a human, in some technical professions.\nWaltz, The Prospect for Building Truly Intelligent Machines (1988) predicts human-level hardware in 2017 and says the development of human-level AI might take another twenty years.\nVinge, The Coming Technological Singularity: How to Survive in the post-Human Era (1993) argues for less than thirty years from 1993, largely based on hardware extrapolation.\nEder, Re: The Singularity (1993) argues for 2035 based on two lines of reasoning: hardware extrapolation to computation equivalent to the human brain, and hyperbolic human population growth pointing to a singularity at that time.\nYudkowsky, Staring Into the Singularity 1.2.5 (1996) presents calculation suggesting a singularity will occur in 2021, based on hardware extrapolation and a simple model of recursive hardware improvement.\nBostrom, How Long Before Superintelligence?(1997) argues that it is plausible to expect superintelligence in the first third of the 21st Century. In 2008 he added that he did not think the probability of this was more than half.\nBostrom, When Machines Outsmart Humans(2000) argues that we should take seriously the prospect of human-level AI before 2050, based on hardware trends and feasibility of uploading or software based on understanding the brain.\nKurzweil, The Singularity is Near (pdf) (2005) predicts 2029, based mostly on hardware extrapolation and the belief that understanding necessary for software is growing exponentially. He also made a bet with Mitchell Kapor, which he explains along with the bet and here. Mitchell also explains his reasoning alongside the bet, though it nonspecific about timing to the extent that it isn’t clear whether he thinks AI will ever occur, which is why he isn’t included in this list.\nPeter Voss, Increased Intelligence, Improved Life (video) (2007) predicts less than ten years and probably less than five, based on the perception that other researchers pursue unnecessarily difficult routes, and that shortcuts probably exist.\nMoravec, The Rise of the Robots (2009) predicts AI rivalling human intelligence well before 2050, based on progress in hardware, estimating how much hardware is equivalent to a human brain, and comparison with animals whose brains appear to be equivalent to present-day computers. Moravec made similar predictions in the 1988 book Mind Children.\nLegg, Tick, Tock, Tick Tock Bing (2009) predicts 2028 in expectation, based on details of progress and what remains to be done in neuroscience and AI. 
He agreed with this prediction in 2012.\nAllen, The Singularity Isn’t Near (2011) criticizes Kurzweil’s prediction of a singularity around 2045, based mostly on disagreeing with Kurzweil on rates of brain science and AI progress.\nHutter, Can Intelligence Explode (2012) uses a prediction of not much later than the 2030s, based on hardware extrapolation, and the belief that software will not lag far behind.\nChalmers (2010) guesses that human-level AI is more likely than not this century. He points to several early estimates, but expresses skepticism about hardware extrapolation, based on the apparent algorithmic difficulty of AI. He argues that AI should be feasible within centuries (conservatively) based on the possibility of brain emulation, and the past success of evolution.\nFallenstein and Mennen (2013) suggest using a Pareto distribution to model time until we get a clear sign that human-level AI is imminent. They get a median estimate of about 60 years, depending on the exact distribution (based on an estimate of 60 years since the beginning of the field).\nDrum, Welcome, Robot Overlords. Please Don’t Fire Us? (2013) argues for around 2040, based on hardware extrapolation.\nMuehlhauser, When will AI be Created? (2013) argues for uncertainty, based on surveys being unreliable, hardware trends being insufficient without software, and software being potentially jumpy.\nBostrom, Superintelligence (2014) concludes that ‘…it may be reasonable to believe that human-level machine intelligence has a fairly sizeable chance of being developed by mid-century and that it has a non-trivial chance of being developed considerably sooner or much later…’, based on expert surveys and interviews, such as these.\nSutton, Creating Human Level AI: How and When? (2015) places a 50% chance on human-level AI by 2040, based largely on hardware extrapolation and the view that software has a 1/2 chance of following within a decade of sufficient hardware.\n", "url": "https://aiimpacts.org/list-of-analyses-of-time-to-human-level-ai/", "title": "List of Analyses of Time to Human-Level AI", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-01-22T14:39:11+00:00", "paged_url": "https://aiimpacts.org/feed?paged=23", "authors": ["Katja Grace"], "id": "9db638f17e32572be7810c033fa068c3", "summary": []} {"text": "The slow traversal of ‘human-level’\n\nBy Katja Grace, 21 January 2015\nOnce you have normal-human-level AI, how long does it take to get Einstein-level AI? We have seen that a common argument for ‘not long at all’ based on brain size does not work in a straightforward way, though a more nuanced assessment of the evidence might. Before we get to that though, let’s look at some more straightforward evidence (from our new page on the range of human intelligence).\nIn particular, let’s look at chess. AI can play superhuman-level chess, so we can see how it got there. And how it got there is via about four decades of beating increasingly good players, starting at beginners and eventually passing Kasparov (note that a beginner is something like level F or below, which doesn’t make it onto this graph):\nFigure 1: Chess AI progress compared to human performance, from Coles 2002. 
The original article was apparently written before 1993, so note that the right of the graph (after ‘now’) is imagined, though it appears to be approximately correct.\nSomething similar is true in Go (where -20 on this graph is a good beginner score, and go bots are not yet superhuman, but getting close):\nFrom Grace 2013.\nBackgammon and poker AI’s seems to have progressed similarly, though backgammon took about 2 rather than 4 decades (we will soon post more detailed descriptions of progress in board games).\nGo, chess, poker, and backgammon are all played using different algorithms. But the underlying problems are sufficiently similar that they could easily all be exceptions.\nOther domains are harder to measure, but seem basically consistent with gradual progress. Machine translation seems to be gradually moving through the range of human expertise, as does automatic driving. There are fewer clear cases where AI abilities took years rather than decades to move from subhuman to superhuman, and the most salient cases are particularly easy or narrow problems (such as arithmetic, narrow perceptual tasks, or easy board games).\nIf narrow AI generally traverses the relevant human range slowly, this suggests that general AI will take some time to go from minimum minimum wage competency to—well, at least to AI researcher competency. If you combine many narrow skills, each progressing gradually through the human spectrum at different times, you probably wouldn’t end up with a much more rapid change in general performance. And it isn’t clear that a more general method should tend to progress faster than narrow AI.\nHowever, we can point to ways that general AI might be different from board game AI.\nPerhaps progress in chess and go has mostly been driven by hardware progress, while progress in general AI will be driven by algorithmic improvements or acquiring more training data.\nPerhaps the kinds of algorithms people really use to think scale much better than chess algorithms. Chess algorithms only become 30-60 Elo points stronger with each doubling of hardware, whereas a very rough calculation suggests human brains become more like 300 Elo points better per doubling in size.\nIn humans, brain size has roughly a 1/3 correlation with intelligence. Given that the standard deviation of brain size is about 10% of the size of the brain (p. 39), this suggests that a doubling of brain size leads to a relatively large change in chess-playing ability. On a log scale, a doubling is a 7 standard deviation change in brain size, which would suggest a ~2 standard deviation change in intelligence. It’s hard to know how this relates to chess performance, but in Genius in Chess Levitt gives an unjustified estimate of 300 Elo points. This is what we would expect if intelligence were responsible for half of variation in performance (neglecting the lower variance of chess player intelligence), since a standard deviation of chess performance is about 2000/7 ~ 300 Elo. Each of these correlations is problematic but nevertheless suggestive.\nIf human intelligence in general scales much better with hardware than existing algorithms, and hardware is important relative to software, then AI based on an understanding of human intelligence may scale from sub-human to superhuman more quickly than the narrow systems we have seen. 
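As a minimal sketch of what those two scaling rates would imply for hardware-driven progress alone, suppose the human range spans something like 2,000 Elo points and hardware doubles roughly every 1.5 years; both of those are illustrative assumptions, while the 30-60 and ~300 Elo-per-doubling figures are the ones quoted just above.
```python
# How long hardware growth alone would take to traverse a human-sized Elo range,
# under the two Elo-per-doubling figures quoted above. The 2000-Elo range and the
# 1.5-year doubling time are illustrative assumptions.
human_range_elo = 2000
doubling_time_years = 1.5

scalings = {
    "chess-engine-like scaling (midpoint of 30-60 Elo/doubling)": 45,
    "brain-like scaling (~300 Elo/doubling)": 300,
}

for label, elo_per_doubling in scalings.items():
    doublings = human_range_elo / elo_per_doubling
    years = doublings * doubling_time_years
    print(f"{label}: ~{doublings:.0f} doublings, ~{years:.0f} years")
```
On these assumptions the difference between the two regimes is roughly the difference between several decades and about a decade of hardware growth, which is why the scaling question matters for how quickly the human range would be crossed.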
However these are both open questions.\n(Image: The Chess Players, by Honoré Daumier)", "url": "https://aiimpacts.org/the-slow-traversal-of-human-level/", "title": "The slow traversal of ‘human-level’", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-01-21T19:34:14+00:00", "paged_url": "https://aiimpacts.org/feed?paged=23", "authors": ["Katja Grace"], "id": "38fff988966e7c341ba94a1c2eafaf47", "summary": []} {"text": "Making or breaking a thinking machine\n\nBy Katja Grace, 18 January 2015\nHere is a superficially plausible argument: the brains of the slowest humans are almost identical to those of the smartest humans. And thus—in the great space of possible intelligence—the ‘human-level’ band must be very narrow. Since all humans are basically identical in design—since you can move from the least intelligent human to the sharpest human with imperceptible changes—then artificial intelligence development will probably cross this band of human capability in a blink. It won’t stop on the way to spend years being employable but cognitively limited, or proficient but not promotion material. It will be superhuman before you notice it’s nearly human. And from our anthropomorphic viewpoint, from which the hop separating village idiot and Einstein looks like most of the spectrum, this might seem like shockingly sudden progress.\nThis whole line of reasoning is wrong.\nIt is true that human brains are very similar. However, this implies very little about the design difficulty of moving from the intelligence of one to the intelligence of the other artificially. The basic problem is that the smartest humans need not be better-designed — they could be better instantiations of the same design.\nWhat’s the difference? Consider an analogy. Suppose you have a yard full of rocket cars. They all look basically the same, but you notice that their peak speeds are very different. Some of the cars can drive at a few hundred miles per hour, while others can barely accelerate above a crawl. You are excited to see this wide range of speeds, because you are a motor enthusiast and have been building your own vehicle. Your car is not quite up to the pace of the slowest cars in your yard yet, but you figure that since all those cars are so similar, once you get it to two miles per hour, it will soon be rocketing along.\nIf a car is slow because it is a rocket car with a broken fuel tank, that car will be radically simpler to improve than the first car you build that can go over 2 miles per hour. The difference is something like an afternoon of tinkering vs. two centuries. This is intuitively because the broken rocket car already contains almost all of the design effort in making a fast rocket car. It’s not being used, but you know it’s there and how to use it.\nSimilarly, if you have a population of humans, and some of them are severely cognitively impaired, you shouldn’t get too excited about the prospects for your severely cognitively impaired robot.\nAnother way to see there must be something wrong with the argument is to note that humans can actually be arbitrarily cognitively impaired. Some of them are even dead. And the brain of a dead person can closely resemble the brain of a live person. Yet while these brains are again very similar in design, AI passed dead-human-level years ago, and this did not suggest that it was about to zip on past live-human-level.\nHere is a different way to think about the issue. 
Recall that we were trying to infer from the range of human intelligence that AI progress would be rapid across that range. However, we can predict that human intelligence has a good probability of varying significantly, using only evolutionary considerations that are orthogonal to the ease of AI development.\nIn particular, if much of the variation in intelligence is from deleterious mutations, then the distribution of intelligence is more or less set by the equilibrium between selection pressure for intelligence and the appearance of new mutations. Regardless of how hard it was to design improvements to humans, we would always see this spectrum of cognitive capacities, so this spectrum cannot tell us about how hard it is to improve intelligence by design. (Though this would be different if the harm inflicted by a single mutation was likely to be closely related to the difficulty of designing an incrementally more intelligent human).\nIf we knew more about the sources of the variation in human intelligence, we might be able to draw a stronger conclusion. And if we entertain several possible explanations for the variation in human intelligence, we can still infer something; but the strength of our inference is limited by the prior probability that deleterious mutations on their own can lead to significant variation in intelligence. Without learning more, this probability shouldn’t be very low.\nIn sum, while the brain of an idiot is designed much like that of a genius, this does not imply that designing a genius is about as easy as designing an idiot.\nWe are still thinking about this, so now is a good time to tell us if you disagree. I even turned on commenting, to make it easier for you. It should work on all of the blog posts now.\nRocket car, photographed by Jon ‘ShakataGaNai’ Davis.\n(Top image: One of the first cars, 1769)", "url": "https://aiimpacts.org/making-or-breaking-a-thinking-machine/", "title": "Making or breaking a thinking machine", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-01-18T20:59:27+00:00", "paged_url": "https://aiimpacts.org/feed?paged=23", "authors": ["Katja Grace"], "id": "5ea9d5ae6e2054fdd870875effb8a138", "summary": []} {"text": "The range of human intelligence\n\nThis page may be out-of-date. Visit the updated version of this page on our wiki.\nThe range of human intelligence seems large relative to the space below it, as measured by performance on tasks we care about—despite the fact that human brains are extremely similar to each other. \nWithout knowing more about the sources of variation in human performance, however, we cannot conclude much at all about the likely pace of progress in AI: we are likely to observe significant variation regardless of any underlying facts about the nature of intelligence.\nDetails\nMeasures of interest\nPerformance\nIQ is one measure of cognitive performance. Chess ELO is a narrower one. We do not have a general measure that is meaningful across the space of possible minds. However when people speak of ‘superhuman intelligence’ and the intelligence of animals they imagine that these can be meaningfully placed on some rough spectrum. When we say ‘performance’ we mean this kind of intuitive spectrum.\nDevelopment effort\nWe are especially interested in measuring intelligence by the difficulty of building a machine which exhibits that level of intelligence. 
We will not use a formal unit to measure this distance, but are interested in comparing the range between humans to distances between other milestones, such as that between a mouse and a human, or a rock and a mouse.\nVariation in cognitive performance\nIt is sometimes argued that humans occupy a very narrow band in the spectrum of cognitive performance. For instance, Eliezer Yudkowsky defends this rough schemata1—\n\n—over these, which he attributes to others:\n\n\nSuch arguments sometimes go further, to suggest that AI development effort needed to traverse the distance from the ‘village idiot’ to Einstein is also small, and so given that it seems so large to us, AI progress at around human level will seem very fast.\nThe landscape of performance is not easy to parameterize well, as there are many cognitive tasks and dimensions of cognitive ability, and no good global metric for comparison across different organisms. Nonetheless, we offer several pieces of evidence to suggest that the human range is substantial, relative to the space below it. We do not approach the topic here of how far above human level the space of possible intelligence reaches.\nLow human performance on specific tasks\nFor most tasks, human performance reaches all of the way to the bottom of the possible spectrum. At the extreme, some comatose humans will fail at almost any cognitive task. Our impression is that people who are completely unable to perform a task are not usually isolated outliers, but that there is a distribution of people spread across the range from completely incapacitated to world-champion level. That is, for a task like ‘recognize a cat’, there are people who can only do slightly better than if they were comatose.\nFor our purposes we are more interested in where normal human cognitive performance fall relative to the worst and best possible performance, and the best human performance.\nMediocre human performance relative to high human performance\nOn many tasks, it seems likely that the best humans are many times better than mediocre humans, using relatively objective measures.2\nShockley (1957) found that in science, the productivity of the top researchers in a laboratory was often at least ten times as great as the least productive (and most numerous) researchers. Programmers purportedly vary by an order of magnitude in productivity, though this is debated. A third of people scored nothing in this Putnam competition, while someone scored 100. Some people have to work ten times harder to pass their high school classes than others.\nNote that these differences are among people skilled enough to actually be in the relevant field, which in most cases suggests they are above average. Our impression is that something similar is true in other areas such as sales, entrepreneurship, crafts, and writing, but we have not seen data on them.\nThese large multipliers on performance at cognitive tasks suggest that the range between mediocre cognitive ability and genius is many times larger than the range below mediocre cognitive ability. However it is not clear that such differences are common, or to what extent they are due to differences in underlying general cognitive ability, rather than learning or non-cognitive skills, or a range of different cognitive skills that aren’t well correlated.\nHuman performance spans a wide range in other areas\nIn qualities other than intelligence, humans appear to span a fairly wide range below their peak levels. 
For instance, the fastest human runners are multiple times faster than mediocre runners (twice as fast at a 100m sprint, four times as fast for a mile). Humans can vary in height by a factor of about four, and commonly do by a factor of about 1.5. The most accurate painters are hard to distinguish from photographs, while some painters are arguably hard to distinguish from monkeys, which are very easy to distinguish from photographs. These observations weakly suggest that the default expectation should be for humans to span a wide absolute range in cognitive performance also.\nAI performance on human tasks\nIn domains where we have observed human-level performance in machines, we have seen rather gradual improvement across the range of human abilities. Here are five relevant cases that we know of:\n1. Chess: human chess Elo ratings conservatively range from around 800 (beginner) to 2800 (world champion). The following figure illustrates how it took chess AI roughly forty years to move incrementally from 1300 to 2800.\nFigure 1: Chess AI progress compared to human performance, from Coles 2002. The original article was apparently written before 1993, so note that the right of the graph (after ‘now’) is imagined, though it appears to be approximately correct.\n2. Go: Human go ratings range from 30-20 kyu (beginner) to at least 9p (10p is a special title). Note that the numbers go downwards through kyu levels, then upward through dan levels, then upward through p(rofessional dan) levels. The following figure suggests that it took around 25 years for AI to cover most of this space (the top ratings seem to be closer together than the lower ones, though there are apparently multiple systems which vary).\nFigure 2. From Grace 2013.\n3. Checkers: According to Wikipedia’s timeline of AI, a program was written in 1952 that could challenge a respectable amateur. In 1994 Chinook beat the second highest rated player ever. (In 2007 checkers was solved.) Thus it took around forty years to pass from amateur to world-class checkers-playing. We know nothing about whether intermediate progress was incremental however.\n4. Physical manipulation: we have not investigated this much, but our impression is that robots are somewhere in the the fumbling and slow part of the human spectrum on some tasks, and that nobody expects them to reach the ‘normal human abilities’ part any time soon (Aaron Dollar estimates robotic grasping manipulation in general is less than one percent of the way to human level from where it was 20 years ago).\n5. Jeopardy: AI appears to have taken two or three years to move from lower ‘champion’ level to surpassing world champion level (see figure 9; Watson beat Ken Jennings in 2011). We don’t know how far ‘champion’ level is from the level of a beginner, but would be surprised if it were less than four times the distance traversed here, given the situation in other games, suggesting a minimum of a decade for crossing the human spectrum.\nIn all of these narrow skills, moving AI from low-level human performance to top-level human performance appears to take on the order of decades. This further undermines the claim that the range of human abilities constitutes a narrow band within the range of possible AI capabilities, though we may expect general intelligence to behave differently, for example due to smaller training effects.\nOn the other hand, most of the examples here—and in particular the ones that we know the most about—are board games, so this phenomenon may be less usual elsewhere. 
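As a rough illustration of the rates implied by the cases above, the following sketch turns the approximate figures quoted (about forty years for chess AI to climb from ~1300 to ~2800 Elo, and the Jeopardy! extrapolation in case 5) into implied times to traverse the full human spectrum. The linear extrapolation and the rounded inputs are assumptions, so the results are only indicative.

```python
# Implied rates of progress through the human range, using the rough figures
# quoted above. Linear extrapolation is an assumption, not a finding.

# Chess: AI moved from ~1300 to ~2800 Elo over roughly forty years.
elo_per_year = (2800 - 1300) / 40                          # ~37 Elo per year

# At that pace, crossing the full human range (~800 to ~2800 Elo) takes decades.
years_to_cross_chess_range = (2800 - 800) / elo_per_year   # ~53 years

# Jeopardy: ~2-3 years from low 'champion' level to beyond the world champion.
# If the whole human range is at least four times that distance, crossing it
# would take a decade or more.
years_to_cross_jeopardy_range = 4 * 2.5                    # >= ~10 years

print(f"Chess: ~{elo_per_year:.0f} Elo/year, ~{years_to_cross_chess_range:.0f} years to cross ~800-2800")
print(f"Jeopardy: at least ~{years_to_cross_jeopardy_range:.0f} years to cross the human range")
```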
We have not investigated areas such as Texas hold ’em, arithmetic or constraint satisfaction sufficiently to add them to this list.\nWhat can we infer from human variation?\nThe brains of humans are nearly identical, by comparison to the brains of other animals or to other possible brains that could exist. This might suggest that the engineering effort required to move across the human range of intelligences is quite small, compared to the engineering effort required to move from very sub-human to human-level intelligence (e.g. see p21 and 29, p70). The similarity of human brains also suggest that the range of human intelligence is smaller than it seems, and its apparent breadth is due to anthropocentrism (see the same sources). According to these views, board games are an exceptional case–for most problems, it will not take AI very long to close the gap between “mediocre human” and “excellent human.”\nHowever, we should not be surprised to find meaningful variation in the cognitive performance regardless of the difficulty of improving the human brain. This makes it difficult to infer much from the observed variations.\nWhy should we not be surprised? De novo deleterious mutations are introduced into the genome with each generation, and the prevalence of such mutations is determined by the balance of mutation rates and negative selection. If de novo mutations significantly impact cognitive performance, then there must necessarily be significant selection for higher intelligence–and hence behaviorally relevant differences in intelligence. This balance is determined entirely by the mutation rate, the strength of selection for intelligence, and the negative impact of the average mutation.\nYou can often make a machine worse by breaking a random piece, but this does not mean that the machine was easy to design or that you can make the machine better by adding a random piece. Similarly, levels of variation of cognitive performance in humans may tell us very little about the difficulty of making a human-level intelligence smarter.\nIn the extreme case, we can observe that brain-dead humans often have very similar cognitive architectures. But this does not mean that it is easy to start from an AI at the level of a dead human and reach one at the level of a living human.\nBecause we should not be surprised to see significant variation–independent of the underlying facts about intelligence–we cannot infer very much from this variation. The strength of our conclusions are limited by the extent of our possible surprise.\nBy better understanding the sources of variation in human performance we may be able to make stronger conclusions. For example, if human intelligence is improving rapidly due to the introduction of  new architectural improvements to the brain, this suggests that discovering architectural improvements is not too difficult.  If we discover that spending more energy on thinking makes humans substantially smarter, this suggests that scaling up intelligences leads to large performance changes. And so on. Existing research in biology addresses the role of deleterious mutations, and depending on the results this literature could be used to draw meaningful inferences.\nThese considerations also suggest that brain similarity can’t tell us much about the “true” range of human performance. This isn’t too surprising, in light of the analogy with other domains. 
For example, although the bodies of different runners have nearly identical designs, the worst runners are not nearly as good as the best.\nxxx[This background rate of human-range crossing is less informative about the future in scenarios where the increasing machine performance of interest is coming about in a substantially different way from how it came about in the past. For instance, it is sometimes hypothesized that major performance improvements will come from fast ‘recursive self improvement’, in which case the characteristic time scale might be much faster. However the scale of the human performance range (and time to cross it) relative to the area below the human range should still be informative.]", "url": "https://aiimpacts.org/is-the-range-of-human-intelligence-small/", "title": "The range of human intelligence", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-01-18T20:58:12+00:00", "paged_url": "https://aiimpacts.org/feed?paged=23", "authors": ["Katja Grace"], "id": "05577f348126d22a6ced0d7e61cc8f98", "summary": []} {"text": "Are AI surveys seeing the inside view?\n\nBy Katja Grace, 15 January 2015\nAn interesting thing about the survey data on timelines to human-level AI is the apparent incongruity between answers to ‘when will human-level AI arrive?’ and answers to ‘how much of the way to human-level AI have we come recently?‘\nIn particular, human-level AI will apparently arrive in thirty or forty years, while in the past twenty years most specific AI subfields have apparently moved only five or ten percent of the remaining distance to human-level AI, with little sign of acceleration.\nSome possible explanations:\n\nThe question about how far we have come has hardly been asked, and the small sample size has hit slow subfields, or hard-to-impress researchers, perhaps due to a different sampling of events.\nHanson (the only person who asked how far we have come) somehow inspires modesty or agreement in his audience. His survey methodology is conversational, and the answers do agree with his own views.\nThe ‘inside view‘ is overoptimistic: if you ask a person directly when their project will be done, they tend to badly underestimate. Taking the ‘outside view‘ – extrapolating from similar past situations – helps to resolve these problems, and is more accurate. The first question invites the inside view, while the second invites the outside view.\nDifferent people are willing to answer the different questions.\nEstimating ‘how much of the way between where we were twenty years ago and human-level capabilities’ is hopelessly difficult, and the answers are meaningless.\nEstimating ‘when will we have human-level AI?’ is hopelessly difficult, and the answers are meaningless.\nWhen people answer the ‘how far have we come in the last twenty years?…’ question, they use a different scale to when they answer the ‘…and are we accelerating?’ question, for instance thinking of where we are as a fraction of what is left to do in the first case, and expecting steady exponential growth in that fraction, but not thinking of steady exponential growth as ‘acceleration’.\nAI researchers expect a small number of fast-growing subfields to produce AI with the full range of human-level skills, rather than for it to combine contributions from many subfields.\nResearchers have further information not captured in the past progress and acceleration estimates. 
In particular, they have reason to expect acceleration.\n\nSince the two questions have so far yielded very different answers, it would be nice to check whether the different answers come from the different kinds of questions (rather than e.g. the small and casual nature of the Hanson survey), and to get a better idea of which kind of answer is more reliable. This might substantially change the message we get from looking at the opinions of AI researchers.\nLuke Muehlhauser and I have written before about how to conduct a larger survey like Hanson’s. One might also find or conduct experiments comparing these different styles of elicitation on similar predictions that can be sooner verified. There appears to be some contention over which method should be more reliable, so we could also start by having that discussion.", "url": "https://aiimpacts.org/are-ai-surveys-seeing-the-inside-view/", "title": "Are AI surveys seeing the inside view?", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-01-16T01:00:02+00:00", "paged_url": "https://aiimpacts.org/feed?paged=24", "authors": ["Katja Grace"], "id": "50799861ec5eb3771b9e33219284c2dc", "summary": []} {"text": "Event: Multipolar AI workshop with Robin Hanson\n\nBy Katja Grace, 14 January 2015\nOn Monday 26 January we will be holding a discussion on promising research projects relating to ‘multipolar‘ AI scenarios. That is, future scenarios where society persists in containing a large number of similarly influential agents, rather than a single winner who takes all. The event will be run in collaboration with Robin Hanson, a leading researcher on the social consequences of whole brain emulation.\nThe goal of the meeting will be to identify promising concrete research projects.\nWe will consider projects under various headings, for example:\nCausal origins and probability of multipolar scenarios\n\nCollect past records of lumpiness of AI success\nSurvey military, business or academic projects which were particularly analogous to successful emulation or AI projects, to learn about the situations in which emulations or AI might appear.\nSurvey AI experts on the likelihood of AI emerging in the military, business or academia, and on the likely size of a successful AI project.\n…\n\nConsequences of multipolar scenarios\n\nHanson’s project of detailing a default whole brain emulation scenario\nHow does the lumpiness of economic outcomes vary as a function of the lumpiness of origins?\nAre there initial social institutions which might substantially influence longer term outcomes?\n…\n\nApplicability of broader AI safety insights to multipolar outcomes\n\nHow useful are capability control methods, such as boxing, stunting, incentives, or tripwires in a multi-polar scenario?\nHow useful are motivation selection methods, such as direct specification, domesticity, indirect normatively, augmentation in a multipolar scenario?\nWould selective pressures strongly favor the existence of goal-directed agents, in a multipolar scenario where a variety of AI designs are feasible?\n…\n\n\n\nDetails in brief\nTime:\n2pm until 2-3 hours later\nThere will be an evening social event at 7pm in the same location, which workshop attendees are welcome to stay or return for. Some participants will go to dinner at a nearby restaurant in between.\nDate: Monday 26 January 2015\nLocation: Private Berkeley residence. 
Detail available upon RSVP.\nRSVP: to katja.s.grace@gmail.com.\nThis is helpful, but not required (if you can deduce the location).\nOther things: Tea, coffee and snacks provided.\n ", "url": "https://aiimpacts.org/event-multipolar-ai-workshop-with-robin-hanson/", "title": "Event: Multipolar AI workshop with Robin Hanson", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-01-14T18:52:42+00:00", "paged_url": "https://aiimpacts.org/feed?paged=24", "authors": ["Katja Grace"], "id": "5fd23677859db7fdc5ea5aca4e7feb4e", "summary": []} {"text": "Michie and overoptimism\n\nBy Katja Grace, 12 January 2015\nWe recently wrote about Donald Michie’s survey on timelines to human-level AI. Michie’s survey is especially interesting because it was taken in 1972, which is three decades earlier than any other surveys we know of that ask about human-level AI.\nEarly AI predictions are renowned for being absurdly optimistic. And while the scientists in Michie’s survey had had a good decade and a half since the Dartmouth Conference to observe the lack of AI, they were still pretty early. Yet they don’t seem especially optimistic. Their predictions were so far in the future that almost three quarters of them still haven’t been demonstrated wrong.\nAnd you might think computer scientists in the early 70s would be a bit more optimistic than contemporary forecasters, given that they hadn’t seen so many decades of research not produce human-level AI. But the median estimate they gave for how many years until AI was further out than those given in almost all surveys since (fifty years vs. thirty to forty). The survey doesn’t look like it was very granular – there appear to be only five options – so maybe a bunch of people would have said thirty-five years, and rounded up to fifty. Still, their median expectations don’t look like they were substantially more optimistic (in terms of time to AI) than present-day ones.\nIn terms of absolute dates, the Michie participants’ median fifty-year choice of 2022 is still considered at least 10% likely to give rise to human-level AI by recent survey participants, some four decades later.\nIt’s not like anyone said that all early predictions were embarrassingly optimistic though. Maybe Michie’s computer scientists were outliers? Perhaps, but if everyone else whose predictions we know of disagreed with them, it would be those others who were the outliers: Michie’s survey has sixty-three respondents, whereas the MIRI dataset contains only eleven other predictions made before 1980 (it looks like twelve, but the other interesting looking survey attributed to Firschein and Coles appears to be an accidental duplicate of the Michie survey, which Firschein and Coles mention in their paper). Sixty-three is more than all the predictions in the MIRI dataset until the 2005 Bainbridge survey.\nAI researchers may have been extremely optimistic in the very early days (the MIRI dataset suggests this, though it consists of public statements, which tend to be more optimistic anyway). However, it doesn’t seem to have taken AI researchers long to move to something like contemporary views. 
It looks like they didn’t just predict ‘twenty years’ every year since Dartmouth.\nSurvey results, as shown in Michie’s paper (pdf download).\nThanks to Luke Muehlhauser for suggesting this line of inquiry.\n(Image: “Donald Michie teaching” by Petermowforth)", "url": "https://aiimpacts.org/michie-and-overoptimism/", "title": "Michie and overoptimism", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-01-13T01:24:59+00:00", "paged_url": "https://aiimpacts.org/feed?paged=24", "authors": ["Katja Grace"], "id": "2e451e542d1d7b30e8c34d9e8d159710", "summary": []} {"text": "Were nuclear weapons cost-effective explosives?\n\nBy Katja Grace, 11 January 2015\nNuclear weapons were radically more powerful per pound than any previous bomb. Their appearance was a massive discontinuity in the long-run path of explosive progress, that we have lately discussed.\nBut why do we measure energy per mass in particular? Energy per dollar may be a better measure of ‘progress’, if the goal was to explode things cheaply, rather than lightly. So we looked into this too. The first thing we found was that information about the costs of pre-WWII explosives is surprisingly sparse. However from what we can gather, nuclear weapons did not immediately improve the cost-effectiveness of explosives much at all.\nThe marginal cost of an early 200kt nuclear weapon was about $25M, which is quite similar to the $21M price of the equivalent amount of TNT. Early nuclear explosives did not deliver radically more bang-for-the-buck than their conventional counterparts (even setting aside the considerable upfront development costs); their immediate significance was their greater energy density.\nThis is not a precise comparison. For one thing, conventional bombs have costs beyond the explosive material, which were not included in the figures here. For another, conventional bombs contain explosives other than TNT (alternatives gave somewhat more explosive power per dollar). Nonetheless, the ballpark costs of the two explosives seem comparable.\nThis is interesting for a few reasons.\nContinuity from continuously falling costs\nIt tells us something about why and when technological progress is continuous. One possible explanation for continuous progress is that people do things when they first become reasonably cost-effective, at which point they are unlikely to be radically cost-effective. Imagine there are a range of possible projects, with different costs and payoffs. Every year, they all get a little cheaper. If a project is a great deal this year, then it would have been done last year, when it was already a good deal. So all of the available deals would be about as good as each other. On this model, new technologies might be suddenly very good, but only if the advance was also very expensive. Nothing would be suddenly very cost-effective.\nOn their face, nuclear weapons appear to support this theory. They made abrupt progress, but were incredibly expensive to develop and build. But this story doesn’t stand up to closer inspection. A few years before they were deployed, not even physicists generally thought them possible; no one considered and then rejected an investment in nuclear weapons. 
As soon as the possibility was seriously considered, it was considered to merit a significant fraction of GDP.\nContinuity from incremental improvements\nIt’s also plausible that progress is continuous on metrics that people care most about, because if they care they search hard for improvements, and there are in fact plenty of small improvements to find. Cost-effectiveness of explosives is a thing we care about, whereas we care less (though still quite a bit) about explosive power per weight. The nuclear case provides some support for this explanation.\nContinuity in something\nThere are many ways to measure progress. Finding measures that don’t change abruptly can help avoid surprises, by allowing us to forecast the trends that are most likely to advance predictably. The nuclear example may help shed light on what kinds of measures tend to be continuous.\n~\nThere are even more relevant metrics than this literal meaning of “bang for the buck.”  Destruction per dollar might be closer, but harder to measure; the effect on P(winning the war) would be closer still, and the effect on national interests more broadly would be even better. It was hard enough to find cost-effectiveness of in terms of energy, so we won’t be investigating these any time soon.\nIn general, I would guess that progress becomes more continuous as we move closer to measures that people really care about. The reverse may be true, however, for nuclear weapons: the costs of deploying weapons might see more abrupt progress than the narrower measure we have considered. A nuclear weapon requires one plane, whereas a rain of small bombs requires many, and these planes and crews appear to be much more expensive than the bombs.\n\nThis is a small part of an ongoing investigation into discontinuous technological change. More blog posts are here and here. The most relevant pages about this so far are Cases of Discontinuous Technological Progress and Discontinuity from Nuclear Weapons.\n(Image: EOD teams detonate expired ordnance in the Kuwaiti desert on July 12, 2002.)", "url": "https://aiimpacts.org/were-nuclear-weapons-cost-effective-explosives/", "title": "Were nuclear weapons cost-effective explosives?", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-01-12T01:30:06+00:00", "paged_url": "https://aiimpacts.org/feed?paged=24", "authors": ["Katja Grace"], "id": "5b7c0e5b169afa7e1f5ac795fbd8cc27", "summary": []} {"text": "A summary of AI surveys\n\nBy Katja Grace, 10 January 2015\nIf you want to know when human-level AI will be developed, a natural approach is to ask someone who works on developing AI. You might however be put off by such predictions being regularly criticized as inaccurate and biased. While they do seem overwhelmingly likely to be inaccurate and biased, I claim they would have to be very inaccurate and biased before they were worth ignoring, especially in the absence of many other sources of quality information. The bar for ridicule is well before the bar for being uninformative.\nSo on that note, we made a big summary of all of the surveys we know of on timelines to human-level AI. And also a bunch of summary pages on specific human-level AI surveys. We hope they are a useful reference, and also help avert selection bias selection bias from people only knowing about surveys that support their particular views on selection bias.\nIt’s interesting to note the consistency between the surveys that asked participants to place confidence intervals. 
They all predict there is a ten percent chance of human-level AI sometime in the 2020s, and almost all place a fifty percent chance of human-level AI between 2040 and 2050. They are even pretty consistent on the 90% date, with more than half in 2070-2080. This is probably mostly evidence that people talk to each other and hear about similar famous predictions. However it is some evidence of accuracy, since if each survey produced radically different estimates we must conclude that surveys are fairly inaccurate.\nIf you know of more surveys on human-level AI timelines, do send them our way.\nHere’s a summary of our summary:\n\n\n\n\n\nYear\nSurvey\n#\n 10%\n 50%\n 90%\n Other key ‘Predictions’\nParticipants\nResponse rate\nLink to original document\n\n\n1972\n Michie\n67\n\n\n\nMedian 50y (2022) (vs 20 or >50)\nAI, CS\n–\nlink\n\n\n2005\n Bainbridge\n26\n\n\n\n Median 2085\nTech\n–\n link\n\n\n2006\n AI@50\n\n\n\n\nmedian >50y (2056)\nAI conf\n–\nlink\n\n\n2007\n Klein\n888\n\n\n\nmedian 2030-2050\nFuturism?\n–\nlink and link\n\n\n2009\n AGI-09\n\n 2020\n 2040\n 2075\n\nAGI conf; AI\n–\nlink\n\n\n2011\n FHI Winter Intelligence\n35\n 2028\n2050\n 2150\n\nAGI impacts conf; 44% related technical\n41%\nlink\n\n\n2011-2012\n Kruel interviews\n37\n 2025\n 2035\n 2070\n\nAGI, AI\n–\nlink\n\n\n2012\n FHI: AGI\n72\n 2022\n 2040\n 2065\n\nAGI & AGI impacts conf; AGI, technical work\n65%\nlink\n\n\n2012\n FHI:PT-AI\n43\n 2023\n 2048\n 2080\n\nPhilosophy & theory of AI conf; not technical AI\n49%\nlink\n\n\n2012-present\n Hanson\n~10\n\n\n\n ≤ 10% progress to human level in past 20y\nAI\n–\nlink\n\n\n2013\n FHI: TOP100\n29\n2022\n 2040\n 2075\n\nTop AI\n29%\nlink\n\n\n2013\n FHI:EETN\n26\n 2020\n 2050\n 2093\n\nGreek assoc. for AI; AI\n10%\nlink\n\n\n\n \n(Image: AGI-09 participants, by jeriaska)", "url": "https://aiimpacts.org/a-summary-of-ai-surveys/", "title": "A summary of AI surveys", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-01-10T23:19:17+00:00", "paged_url": "https://aiimpacts.org/feed?paged=24", "authors": ["Katja Grace"], "id": "e1ca9e2b6ee300db3cab895797a1de0e", "summary": []} {"text": "AI Timeline Surveys\n\nThis page is out-of-date. Visit the updated version of this page on our wiki.\nPublished 10 January 2015\nWe know of twelve surveys on the predicted timing of human-level AI. If we collapse a few slightly different meanings of ‘human-level AI’, then:\n\nMedian estimates for when there will be a 10% chance of human-level AI are all in the 2020s (from seven surveys), except for the 2016 ESPAI, which found median estimates ranging from 2013 to long after 2066, depending on question framing.\nMedian estimates for when there will be a 50% chance of human-level AI range between 2035 and 2050 (from seven surveys), except for the 2016 ESPAI, which found median estimates ranging from 2056 to at least 2106, depending on question framing.\nOf three surveys in recent decades asking for predictions but not probabilities, two produced median estimates of when human-level AI will arrive in the 2050s, and one in 2085.\n\nParticipants appear to mostly be experts in AI or related areas, but with a large contingent of others. 
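As a cross-check on the summary figures above, here is a small sketch that recomputes the ranges from the seven surveys in the table that reported 10%/50%/90% confidence years. The years are transcribed from the table and rounded, so treat them as approximate; the 2016 ESPAI is not in the table and is discussed separately in the text.

```python
# Recompute the summary ranges from the surveys above that asked for
# 10% / 50% / 90% confidence years (values transcribed from the table).
medians = {
    "AGI-09 (2009)":              (2020, 2040, 2075),
    "FHI Winter Intelligence":    (2028, 2050, 2150),
    "Kruel interviews (2011-12)": (2025, 2035, 2070),
    "FHI: AGI-12 (2012)":         (2022, 2040, 2065),
    "FHI: PT-AI (2012)":          (2023, 2048, 2080),
    "FHI: TOP100 (2013)":         (2022, 2040, 2075),
    "FHI: EETN (2013)":           (2020, 2050, 2093),
}

for label, idx in [("10%", 0), ("50%", 1), ("90%", 2)]:
    years = [v[idx] for v in medians.values()]
    print(f"{label}: {min(years)}-{max(years)}")

# Expected output, matching the summary above:
#   10%: 2020-2028  (all in the 2020s)
#   50%: 2035-2050
#   90%: 2065-2150  (with 2070-2080 the most common band)
```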
Several groups of survey participants seem likely over-represent people who are especially optimistic about human-level AI being achieved soon.\nDetails\nList of surveys\nThese are the surveys that we know of on timelines to human-level AI:\n\nMichie (1972)\nBainbridge (2005)\nAI@50 (2006)\nKlein (2007)\nAGI-09 (2009)\nFHI Winter Intelligence (2011)\nKruel (2011-12)\nHanson (2012 onwards)\nMüller and Bostrom: AGI-12, TOP100, EETN, PTAI (2012-2013)\n\nResults\nResults summary\n \n \n\n\nYear\nSurvey\n#\n 10%\n 50%\n 90%\n Other key ‘Predictions’\nParticipants\nResponse rate\nLink to original document\n\n\n1972\n Michie\n67\n \n \n \nMedian 50y (2022) (vs 20 or >50)\nAI, CS\n–\nlink\n\n\n2005\n Bainbridge\n26\n \n \n \n Median 2085\nTech\n–\n link\n\n\n2006\n AI@50\n \n \n \n \nmedian >50y (2056)\nAI conf\n–\nlink\n\n\n2007\n Klein\n888\n \n \n \nmedian 2030-2050\nFuturism?\n–\nlink and link\n\n\n2009\n AGI-09\n 21\n 2020\n 2040\n 2075\n \nAGI conf; AI\n–\nlink\n\n\n2011\n FHI Winter Intelligence\n35\n 2028\n2050\n 2150\n \nAGI impacts conf; 44% related technical\n41%\nlink\n\n\n2011-2012\n Kruel interviews\n37\n 2025\n 2035\n 2070\n \nAGI, AI\n–\nlink\n\n\n2012\n FHI: AGI-12\n72\n 2022\n 2040\n 2065\n \nAGI & AGI impacts conf; AGI, technical work\n65%\nlink\n\n\n2012\n FHI:PT-AI\n43\n 2023\n 2048\n 2080\n \nPhilosophy & theory of AI conf; not technical AI\n49%\nlink\n\n\n2012-?\n Hanson\n~10\n \n \n \n ≤ 10% progress to human level in past 20y\nAI\n–\nlink\n\n\n2013\n FHI: TOP100\n29\n2022\n 2040\n 2075\n \nTop AI\n29%\nlink\n\n\n2013\n FHI:EETN\n26\n 2020\n 2050\n 2093\n \nGreek assoc. for AI; AI\n10%\n\nlink\n\n\n\n\nTime to a 10% chance and a 50% chance of human-level AI\nThe FHI Winter Intelligence, Müller and Bostrom, AGI-09, Kruel, and 2016 ESPAI surveys asked for years when participants expected 10%, 50% and 90% probabilities of human-level AI (or a similar concept). All of these surveys were taken between 2009 and 2012, except the 2016 ESPAI.\nSurvey participants’ median estimates for when there will be a 10% chance of human-level AI are all in the 2020s or 2030s. Until the 2016 ESPAI survey, median estimates for when there will be a 50% chance of human-level AI ranged between 2035 and 2050. The 2016 ESPAI asked about human-level AI using both very similar questions to previous surveys, and a different style of question based on automation of specific human occupations. The former questions found median dates of at least 2056, and the latter question prompted median dates of at least 2106.\nNon-probabilistic predictions\nThree surveys (Bainbridge, Klein, and AI@50) asked about predictions, rather than confidence levels. These produced median predictions of  >2056 (AI@50), 2030-50 (Klein), and 2085 (Bainbridge). It is unclear how participants interpret the request to estimate when a thing will happen; these responses may mean the same as the 50% confidence estimate discussed above. These surveys together appear to contain a high density of people who don’t work in AI, compared to the other surveys.\nMichie survey\nMichie’s survey is unusual in being much earlier than the others (1972). In it, less than a third of participants expected human-level AI by 1992, another almost third estimated 2022, and the rest expected it later. Note that the participants’ median expectation (50 years away) was further from their present time than those of contemporary survey participants. 
This point conflicts with a common perception that early AI predictions were shockingly optimistic, and quickly undermined.\nHanson survey\nHanson’s survey is unusual in its methodology. Hanson informally asked some AI experts what fraction of the way to human-level capabilities we had come in 20 years, in their subfield. He also asked about apparent acceleration. Around half of answers were in the 5-10% range, and all except one which hadn’t passed human-level already were less than 10%. Of six who reported on acceleration, only one saw positive acceleration.\nThese estimates suggest human-level capabilities in most fields will take more than 200 years, if progress proceeds as it has (i.e. if we progress at 10% per twenty years, it will take 200 years to get to 100%). This estimate is quite different from those obtained from most of the other surveys.\nThe 2016 ESPAI attempted to replicate this methodology, and did not appear to find similarly long implied timelines, however little attention has been paid to analyzing that data.\nThis methodology is discussed more in the methods section below.\nMethods\nSurvey participants\nIn assessing the quality of predictions, we are interested in the expertise of the participants, the potential for biases in selecting them, and the degree to which a group of well-selected experts generally tend to make good predictions. We will leave the third issue to be addressed elsewhere, and here describe the participants’ expertise and the surveys’ biases. We will see that the participants have much expertise relevant to AI, but – relatedly – their views are probably biased toward optimism because of selection effects as well as normal human optimism about projects.\nSummary of participant backgrounds\nThe FHI (2011), AGI-09, and one of the four FHI collection surveys are from AGI (artificial general intelligence) conferences, so will tend to include a lot of people who work directly on trying to create human-level intelligence, and others who are enthusiastic or concerned about that project. At least two of the aforementioned surveys draw some participants from the ‘impacts’ section of the AGI conference, which is likely to select for people who think the effects of human-level intelligence are worth thinking about now.\nKruel’s participants are not from the AGI conferences, but around half work in AGI. Klein’s participants are not known, except they are acquaintances of a person who is enthusiastic about AGI (his site is called ‘AGI-world’). Thus many participants either do AGI research, or think about the topic a lot.\nMany more participants are AI researchers from outside AGI. Hanson’s participants are experts in narrow AI fields. Michie’s participants are computer scientists working close to AI. Müller and Bostrom’s surveys of the top 100 artificial intelligence researchers, and Members of the Greek Association for Artificial Intelligence, would be almost entirely AI researchers, and there is little reason to expect them to be in AGI. AI@50 seems to include a variety of academics interested in AI rather than those in the narrow field of AGI, though also includes others, such as several dozen graduate and post-doctoral students. 2016 ESPAI is everyone publishing in two top machine learning conferences, so largely machine learning researchers.\nThe remaining participants appear to be mostly highly educated people from academia and other intellectual areas. 
The attendees at the 2011 Conference on Philosophy and Theory of AI appear to be a mixture of philosophers, AI researchers, and academics from related fields such as brain sciences. Bainbridge’s participants are contributors to ‘converging technology’ reports, on topics of nanotechnology, biotechnology, information technology, and cognitive science. From looking at what appears to be one of these reports, these seem to be mostly experts from government and national laboratories, academia, and the private sector. Few work in AI in particular. An arbitrary sample includes the Director of the Division of Behavioral and Cognitive Sciences at NSF, a person from the Defense Threat Reduction Agency, and a person from HP laboratories.\nAGI researchers\nAs noted above, many survey participants work in AGI – the project to create general intelligent agents, as opposed to narrow AI applications. In general, we might expect people working on a given project to be unusually optimistic about its success, for two reasons. First, those who are most optimistic initially will more likely find the project worth investing in. Secondly, people are generally observed to be especially optimistic about the time needed for their own projects to succeed. So we might expect AGI researchers to be biased toward optimism, for these reasons.\nOn the other hand, AGI researchers are working on projects most closely related to human-level AI, so probably have the most relevant expertise.\nOther AI researchers\nJust as AGI researchers work on topics closer to human-level AI than other AI researchers – and so may be more biased but also more knowledgeable – AI researchers work on more relevant topics than everyone else. Similarly, we might expect them to both be more accurate due to their additional expertise, but more biased due to selection effects and optimism about personal projects.\nHanson’s participants are experts in narrow AI fields, but are also reporting on progress in their own fields of narrow AI (rather than on general intelligence), so we might expect them to be more like the AGI researchers – especially expert and especially biased. On the other hand, Hanson asks about past progress rather than future expectations, which should diminish both the selection effect and the effect from the planning fallacy, so we might expect the bias to be weaker.\nDefinitions of human-level AI\nA few different definitions of human-level AI are combined in this analysis.\nThe AGI-09 survey asked about four benchmarks; the one reported here is the Turing-test capable AI. Note that ‘Turing test capable’ seems to sometimes be interpreted as merely capable of holding a normal human discussion. It isn’t clear that the participants had the same definition in mind.\nKruel only asked that the AI be as good as humans at science, mathematics, engineering and programming, and asks conditional on favorable conditions continuing (e.g. no global catastrophes). This might be expected prior to fully human-level AI.\nEven where people talk about ‘human-level’ AI, they can mean a variety of different things. 
For instance, it is not clear whether a machine must operate at human cost to be ‘human-level’, or to what extent it must resemble a human.\nAt least three surveys use the acronym ‘HLMI’, but it can stand for either ‘human-level machine intelligence’ or ‘high level machine intelligence’ and is defined differently in different surveys.\nHere is a full list of exact descriptions of something like ‘human-level’ used in the surveys:\n\nMichie: ‘computing system exhibiting intelligence at adult human level’\nBainbridge: ‘The computing power and scientific knowledge will exist to buildmachines that are functionally equivalent to the human brain’\nKlein: ‘When will AI surpass human-level intelligence?’\nAI@50: ‘When will computers be able to simulate every aspect of human intelligence?’\nFHI 2011: ‘Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of human-level machine intelligence? Feel free to answer ‘never’ if you believe such a milestone will never be reached.’\nMüller and Bostrom: ‘[machine intelligence] that can carry out most human professions at least as well as a typical human’\nHanson: ‘human level abilities’ in a subfield (wording is probably not consistent, given the long term and informal nature of the poll)\nAGI-09: ‘Passing the Turing test’\nKruel: Variants on, ‘Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?’\n2016 ESPAI (our emboldening): \n\nSay we have ‘high level machine intelligence’ when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption.\nSay an occupation becomes fully automatable when unaided machines can accomplish it better and more cheaply than human workers. Ignore aspects of occupations for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption.\nSay we have reached ‘full automation of labor’ “when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers.”\n\n\n\nInside vs. outside view methods\nHanson’s survey was unusual in that it asked participants for their impressions of past rates of progress, from which extrapolation could be made (an ‘outside view’ estimate), rather than asking directly about expected future rates of progress (an ‘inside view’ estimate). It also produced much later median dates for human-level AI, suggesting that this outside view methodology in general produces much later estimates (rather than for instance, Hanson’s low sample size and casual format just producing a noisy or biased estimate that happened to be late).\nIf so, this would be important because outside view estimates in general are often informative.\nHowever the 2016 ESPAI included a set of questions similar to Hanson’s, and did not at a glance find similarly long implied timelines, though the data has not been carefully analyzed. 
This is some evidence against the outside view style methodology systematically producing longer timelines, though arguably not enough to overturn the hypothesis.\nWe might expect Hanson’s outside view method to be especially useful in AI forecasting because a key merit is that asking people about the past means asking questions more closely related to their expertise, and the future of AI is arguably especially far from anyone’s expertise (relative to say asking a dam designer how long it will take for their dam to be constructed) . On the other hand, AI researchers’ expertise may include a lot of information about AI other than how far we have come, and translating what they have seen into what fraction of the way we have come may be difficult and thus introduce additional error.\n", "url": "https://aiimpacts.org/ai-timeline-surveys/", "title": "AI Timeline Surveys", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-01-10T09:37:48+00:00", "paged_url": "https://aiimpacts.org/feed?paged=24", "authors": ["Katja Grace"], "id": "4e32546a634bd4d4546331de1933fd99", "summary": []} {"text": "Michie Survey\n\nIn a 1972 poll of sixty-seven AI and computer science experts, respondents were roughly divided between expecting human-level intelligence in 20 years, in 50 years and in more than 50 years. They were also roughly divided between considering a ‘takeover’ by AI a negligible and a substantial risk – with ‘overwhelming risk’ far less popular.\nDetails\nMethods\nDonald Michie reported on the poll in Machines and the Theory of Intelligence (pdf download). The participants were sixty-seven British and American computer scientists working in or close to machine intelligence. The paper does not say much more about the methodology used. It is unclear whether Michie ran the poll.\nFindings\nMichie presents Figure 4 below, and Firschein and Coles present the table in Figure 2, which appears to be the same data. Michie’s interesting findings include:\n\n‘Most considered that attainment of the goals of machine intelligence would cause human intellectual and cultural processes to be enhanced rather than to atrophy.’\n‘Of those replying to a question on the risk of ultimate ‘takeover’ of human affairs by intelligent machines, about half regarded it as ‘negligible’, and most of the remainder as ‘substantial’ with a few voting for ‘overwhelming’.’\nAlmost all participants predicted human level computing systems would not emerge for over twenty years. They were roughly divided between 20, 50, and more. See figure 4 below (from p512).\n\n\nFigure 2: Firschein and Coles 1973 present this table, which appears to report on the same survey.\n \n ", "url": "https://aiimpacts.org/michie-survey/", "title": "Michie Survey", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-01-10T01:58:11+00:00", "paged_url": "https://aiimpacts.org/feed?paged=24", "authors": ["Katja Grace"], "id": "4d78506733d65733f2a4c0b9c3750274", "summary": []} {"text": "AI and the Big Nuclear Discontinuity\n\nBy Katja Grace, 9 January 2015\nAs we’ve discussed before, the advent of nuclear weapons was a striking technological discontinuity in the effectiveness of explosives. In 1940, no one had ever made an explosive twice as effective as TNT. By 1945 the best explosive was 4500 times more potent, and by 1960 the ratio was 5 million.\nProgress in nuclear weapons is sometimes offered as an analogy for possible rapid progress in AI (e.g. by Eliezer Yudkowsky here, and here). 
It’s worth clarifying the details of this analogy, which has nothing to do with the discontinuous progress in weapon effectiveness. It’s about a completely different discontinuity: a single nuclear pile’s quick transition from essentially inert to extremely reactive.\nAs you add more fissile material to a nuclear pile, little happens until it reaches a critical mass. After reaching critical mass, the chain reaction proceeds much faster than the human actions that assembled it. By analogy, perhaps as you add intelligence to a pile of intelligence, little will happen until it reaches a critical level which initiates a chain reaction of improvements (‘recursive self-improvement’) which proceeds much faster than the human actions that assembled it.\nThis discontinuity in individual nuclear explosions is not straightforwardly related to the technological discontinuity caused by their introduction. Older explosives were also based on chain reactions. The big jump seems to be a move from chemical chain reactions to nuclear chain reactions, two naturally occurring sources of energy with very different characteristic scales–and with no alternatives in between them. This jump has no obvious analog in AI.\nOne might wonder if the technological discontinuity was nevertheless connected to the discontinuous dynamics of individual nuclear piles. Perhaps the density and volume of fissile uranium required for any explosive was the reason that we did not see small, feeble nuclear weapons in between chemical weapons and powerful nuclear weapons. This doesn’t match the history however. Nobody knew that concentrating fissile uranium was important until after fission was discovered in 1938, less than seven years before the first nuclear detonation. Even if nuclear weapons had grown in strength gradually over this period, this would still be around one thousand years of progress at the historical rate per year. The dynamics of individual piles can only explain a miniscule part of the discontinuity.\nThere may be an important analogy between AI progress and nuclear weapons. And the development of nuclear weapons was in some sense a staggeringly abrupt technological development. But we probably shouldn’t conclude that the development of AI is much more likely to be comparably abrupt.\n If you vaguely remember that AI progress and nuclear weapons are analogous, and that nuclear weapons were a staggeringly abrupt development in explosive technology, try not to infer from this that AI is especially likely to be a staggeringly abrupt development.\n(Image: Trinity test after ten seconds, taken from Atomic Bomb Test Site Photographs, courtesy of U.S. Army White Sands Missile Range Public Affairs Office)", "url": "https://aiimpacts.org/ai-and-the-big-nuclear-discontinuity/", "title": "AI and the Big Nuclear Discontinuity", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-01-09T22:06:46+00:00", "paged_url": "https://aiimpacts.org/feed?paged=24", "authors": ["Katja Grace"], "id": "cc258eb60edd74fbe7f3cdf8ed6fd25f", "summary": []} {"text": "The Biggest Technological Leaps\n\nBy Katja Grace, 9 January 2015\nOver thousands of years, humans became better at producing explosions. A weight of explosive that would have blown up a tree stump in the year 800 could have blown up more than three tree stumps in the 1930s. Then suddenly, a decade later, the figure became more like nine thousand tree stumps. 
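For a sense of the scale of that jump, here is the rough doubling arithmetic, using the approximate "tree stump" figures above together with the doubling counts given in the footnote at the end of this post. All inputs are rounded, so the result is only indicative.

```python
# Rough arithmetic behind the size of the jump just described, measured in
# doublings of relative effectiveness. Figures are the approximate ones used
# in this post and its footnote.
import math

prior_years = 1100        # roughly the year 800 to the 1930s
prior_doublings = 2       # effectiveness doubled (a bit less than) twice over that span
years_per_doubling = prior_years / prior_doublings      # ~550 years per doubling

jump_factor = 9000 / 3    # ~3 "tree stumps" worth in the 1930s -> ~9,000 with fission
jump_doublings = math.log2(jump_factor)                 # ~11.6 doublings in one step

years_equivalent = jump_doublings * years_per_doubling  # roughly 6,000-6,500 years
print(f"{jump_doublings:.1f} doublings ~= {years_equivalent:,.0f} years at the old rate")
```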
The first nuclear weapons represented a massive leap – something like 6000 years of progress in one step*.\nThough such jumps have been historical exceptions, some observers think a massive jump in AI capability is likely. Progress may be fast due to the apparent amenability of software to groundbreaking insights, the possibility of rapid applications (to deploy a new algorithm, you don’t have to build any factories), the plausibility of simple conceptual ingredients underlying intelligent behavior, and the potential for ‘recursive self-improvement’ to speed software development to rates characteristic of superhumanly-programmed computers rather than those of humans.\nWe think the question, ‘will AI progress be discontinuous?’ is a good one to investigate. Not just because advance notice of abrupt world-changing developments is sure to come in handy somehow–nor because of the exciting degree of disagreement it elicits. What makes this a particularly good topic to study now is that it helps us know what other information is most relevant to understanding AI progress.\nOne might hope to make predictions about how soon AI will reach human-level by extrapolating from how fast we are moving and how far we have to go, for instance.** Or we could monitor the rate at which automation replaces workers, or at which the performance of AI systems improves. These all provide valuable information if you think AI will be reached gradually, by the continuation of existing processes. However, if you expect progress to be abrupt and uneven, these indicators are much less informative.\nSo whether AI will be reached abruptly or incrementally is an important question. But is it a tractable one to make progress on? My guess is yes. Plenty of evidence bears on this question: the historical patterns of progress in other technologies, instances of abnormally uneven progress, arguments suggesting abnormal degrees of abnormality in the AI case, theories explaining past continuity and discontinuities, cases that look relevantly analogous to AI…\nWe know some examples of very fast technological progress; simply understanding those cases better is likely to be an informative start.\nSo we’ve started a list of cases here. Each case appears to involve abrupt technological progress. We looked into each one a little, usually just enough to check it really involved abrupt progress and to get approximate rates of progress before and during the discontinuity. We intend to do a more thorough job later for the cases that seem particularly important or interesting.\nThis list will hopefully help us understand what fast progress looks like historically (How fast is it? How far is it? How unexpected is it?), and when it happens (Does it usually flow from a huge intellectual insight? The discovery of a new natural phenomenon? Overcoming a large upfront investment?).\nSo far, we have a couple of really big jumps, a couple of smaller jumps, a bunch of potentially interesting but uncertain cases, and a rich assortment of purported discontinuities that we are yet to investigate.\nAfter nuclear weapons, the second most interesting case we’ve found is high temperature superconductivity. The maximum temperature of superconduction appears to have made something like 150 years of progress in one jump in 1986, after the discovery of a new class of materials with maximum temperatures for superconducting behavior above what was thought possible.\nDo you have thoughts on this line of research? 
Do you have ideas for how to investigate cases? Do you know of historical cases of abrupt technological progress? Do you want to see our list?\n\n* Measured in doublings; you would get a much more extreme estimate if you expected linear progress. Relative effectiveness (RE) had doubled less than twice in 1100 years, then it doubled more than eleven times when the first nuclear weapons emerged. (For more on nuclear weapons, see our page on them).\n** Interestingly, asking AI researchers about rates of progress gives much more pessimistic estimates than asking them about when human-level AI will arrive, based on some very preliminary research. This may mean that AI researchers expect human-level AI to arrive following abnormally fast progress, though the discrepancy could be explained in many other ways. It seems worth looking into.\n(Image: The first nuclear chain reaction. Painting by Gary Sheehan, Atomic Energy Commission.)", "url": "https://aiimpacts.org/the-biggest-technological-leaps/", "title": "The Biggest Technological Leaps", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-01-09T21:59:04+00:00", "paged_url": "https://aiimpacts.org/feed?paged=24", "authors": ["Katja Grace"], "id": "06eb4d2f347147007ad0b2de4d691c3d", "summary": []} {"text": "The AI Impacts Blog\n\nBy Katja Grace, 9 January 2015\nWelcome to the AI Impacts blog. \nAI Impacts is premised on two ideas (at least!):\n\nThe details of the arrival of human-level artificial intelligence matter. Seven years to prepare is very different from seventy years to prepare. A weeklong transition is very different from a decade-long transition. Brain emulations require different preparations than do synthetic AI minds. Etc.\nAvailable data and reasoning can substantially educate our guesses about these details. We can track progress in AI subfields. We can estimate the hardware represented by the human brain. We can detect the effect of additional labor on software progress. Etc.\n\nOur goal is to assemble relevant evidence and considerations, and to synthesize reasonable views on questions such as when AI will surpass human-level capabilities, how rapid development will be at that point, what advance notice we might expect, and what kinds of AI are likely to reach human-level capabilities first.\nWe are doing this recursively, first addressing much smaller questions, like:\n\nIs AI likely to surpass human level in a discontinuous spurt, or through incremental progress?\nDoes AI software undergo discontinuous progress often?\nIs technological progress of any sort discontinuous often?\nWhen is technological progress discontinuous?\nWhy did explosives undergo discontinuous progress in the form of nuclear weapons?\n\nIn this way, we hope to inform decisions about how to prepare for advanced AI, and about whether it is worth prioritizing over other pressing issues in the world. Researchers, funders, and other thinkers and doers are choosing how to spend their efforts on the future impacts of AI, and we want to help them choose well.\nAI Impacts is currently something like a (brief) encyclopedia of semi-original AI forecasting research. That is, it is a growing collection of pages addressing particular questions or bodies of evidence relating to the future of AI. We intend to revise these in an ongoing fashion, according to new investigations and debates. \nAt the same time as producing reasonable views, we are interested in exploring and bettering humanity’s machinery for producing reasonable views. 
To this end, we have chosen this unusual – but we think promising – format, and may experiment with novel methods of organizing information and resolving questions and disagreements. \nIf you want to know more about the project overall, see About, or peruse our research pages and see it firsthand. \nThis blog exists to show you the most interesting findings of the AI Impacts project as we find them, and before they get lost in what we hope becomes a dense network of research pages. We might also write about other things, such as our thoughts on methodology, speculative opinions, news about the project itself, and anything else that seems like a good idea at the time.\nIf you like the sound of any of these things, consider signing up for one of our RSS feeds (blog, articles). If you don’t, or if you think you could (cheaply) like it more, we welcome your thoughts or suggestions.\nAI Impacts is currently authored by Paul Christiano and Katja Grace.", "url": "https://aiimpacts.org/the-ai-impacts-blog/", "title": "The AI Impacts Blog", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2015-01-09T21:56:23+00:00", "paged_url": "https://aiimpacts.org/feed?paged=24", "authors": ["Katja Grace"], "id": "0f14a96b6b7ab2734cf5711ce6330be9", "summary": []} {"text": "Cases of Discontinuous Technological Progress\n\nWe know of ten events which produced a robust discontinuity in progress equivalent to more than one hundred years at previous rates in some interesting metric. We know of 53 other events which produced smaller or less robust discontinuities.\nBackground\nThese cases were researched as part of our discontinuous progress investigation.\nList of cases\nEvents causing large, robust discontinuities\n\nThe Pyramid of Djoser, 2650BC (discontinuity in structure height trends)\nThe SS Great Eastern, 1858 (discontinuity in ship size trends)\nThe first telegraph, 1858 (discontinuity in speed of sending a 140 character message across the Atlantic Ocean)\nThe second telegraph, 1866 (discontinuity in speed of sending a 140 character message across the Atlantic Ocean)\nThe Paris Gun, 1918 (discontinuity in altitude reached by man-made means)\nThe first non-stop transatlantic flight, in a modified WWI bomber, 1919 (discontinuity in both speed of passenger travel across the Atlantic Ocean and speed of military payload travel across the Atlantic Ocean)\nThe George Washington Bridge, 1931 (discontinuity in longest bridge span)\nThe first nuclear weapons, 1945 (discontinuity in relative effectiveness of explosives)\nThe first ICBM, 1958 (discontinuity in average speed of military payload crossing the Atlantic Ocean)\nYBa2Cu3O7 as a superconductor, 1987 (discontinuity in warmest temperature of superconduction)\n\nEvents causing moderate, robust discontinuities\n\nHMS Warrior, 1860 (discontinuity in both Royal Navy ship tonnage and Royal Navy ship displacement)\nEiffel Tower, 1889 (discontinuity in tallest existing freestanding structure height, and in other height trends non-robustly)\nFairey Delta 2, 1956 (discontinuity in airspeed)\nPellets shot into space, 1957, measured after one day of travel (discontinuity in altitude achieved by man-made means)1\nBurj Khalifa, 2009 (discontinuity in height of tallest building ever)\n\nNon-robust discontinuities\nThis spreadsheet details all discontinuities found, as of April 2020.", "url": "https://aiimpacts.org/cases-of-discontinuous-technological-progress/", "title": "Cases of Discontinuous Technological Progress", "source": "aiimpacts.org", 
"source_type": "blog", "date_published": "2014-12-31T23:44:11+00:00", "paged_url": "https://aiimpacts.org/feed?paged=25", "authors": ["Katja Grace"], "id": "44d7e1188e914ea3419fea2988c7658a", "summary": []} {"text": "Effect of nuclear weapons on historic trends in explosives\n\nNuclear weapons constituted a ~7 thousand year discontinuity in relative effectiveness factor (TNT equivalent per kg of explosive).\nNuclear weapons do not appear to have clearly represented progress in the cost-effectiveness of explosives, though the evidence there is weak.\nDetails\nThis case study is part of AI Impacts’ discontinuous progress investigation.\nBackground\nThe development of nuclear weapons is often referenced informally as an example of discontinuous technological progress. Discontinuities are sometimes considered especially plausible in this case because of the involvement of a threshold phenomenon in nuclear chain reactions.\n21-kiloton underwater nuclear explosion (Bikini Atoll, 1946)1\nTrends\nRelative effectiveness factor\nThe “relative effectiveness factor” (RE Factor) of an explosive measures the mass of TNT required for an equivalent explosion.2\nData\nWe collected data on explosive effectiveness from an online timeline of explosives and a comparison of RE factors on Wikipedia.3 These estimates modestly understate the impact of nuclear weapons, since the measured mass of the nuclear weapons includes the rest of the bomb while the conventional explosives are just for the explosive itself. \nFigures 1-3 below show the data we collected, which can also be found in this spreadsheet. Our data below is incomplete– we elide many improvements between 800 and 1942 that would not affect the size of the discontinuity from “Fat man”. We have verified that there are no explosives with higher RE factor than Hexanitrobenzene before “Fat man” (see the ‘Relative effectiveness data’ in this spreadsheet for this verification). \nFigure 1: Approximate relative effectiveness factor for selected explosives over time, prior to nuclear weapons.\nFigure 2: Approximate relative effectiveness factor for selected explosives, up to early nuclear bomb (note change to log scale) \nDiscontinuity Measurement\nTo compare nuclear weapons to past rates of progress, we treat progress as exponential.4 With this assumption, the first nuclear weapon, “Fat man”, represented a around seven thousand years of discontinuity in the RE factor of explosives at previous rates.5 In addition to the size of this discontinuity in years, we have tabulated a number of other potentially relevant metrics here.6 \nWe checked if “Fat Man” constituted a discontinuity, but did not look for other discontinuities, because we have not thoroughly searched for data on earlier developments. Even though we’re missing data, since gunpowder is the earliest known explosive and Hexanitrobenzene is the explosive before “Fat man” with the highest RE factor, the missing data should not affect discontinuity calculations for “Fat man” unless it suggests we should be predicting using a different trend. This seems unlikely given that early explosives all have an RE factor close to that of our existing data points, around 1 – 3 (see table here)7, so are not vastly inconsistent with our exponential. 
If we instead assumed a linear trend, or an exponential ignoring the early gunpowder datapoint, we still get answers of over three thousand years (see spreadsheet for calculations).\nDiscussion of causes\nInterestingly, at face value this discontinuous jump does not seem to be directly linked to the chain reaction that characterizes nuclear explosions, but rather to the massive gap between the energies involved in chemical interactions and nuclear interactions. It seems likely that similar results would obtain in other settings; for example, the accessible energy in nuclear fuel enormously exceeds the energy stored in chemical fuels, and so at some far future time we might expect a dramatic jump in the density with which we can store energy (though arguably not in the cost-effectiveness).8\nCost-effectiveness of explosives\nAnother important measure of progress in explosives is cost-effectiveness. Cost-effectiveness is particularly important to understand, because some plausible theories of continuous progress would predict continuous improvements in cost-effectiveness much more strongly than they would predict continuous improvements in explosive density.\nData\nCost-effectiveness of nuclear weapons\nAssessing the cost of nuclear weapons is not straightforward empirically, and depends on the measurement of cost. The development of nuclear weapons incurred a substantial upfront cost, and so for some time the average cost of nuclear weapons significantly exceeded their marginal cost. We provide estimates for the marginal costs of nuclear weapons, as well as for the “average” cost of all nuclear explosives produced by a certain date.\nWe focus our attention on WWII and the immediately following period, to understand the extent to which the development of nuclear weapons represented a discontinuous change in cost-effectiveness.\nSee our spreadsheet for a summary of the data explained below. According to the Brookings Institution, nuclear weapons were by 1950 considered to be especially cost-effective (though not obviously in terms of explosive power per dollar), and adopted for this reason. However, Brookings notes that this has never been validated, and appears to distrust it.9 This disagreement weakly suggests that nuclear weapons are at least not radically more or less cost-effective than other weapons.\nAccording to Wikipedia, the cost of the Manhattan Project was about $26 billion (in 2014 dollars), 90% of which “was for building factories and producing the fissile materials.”10 The Brookings U.S. Nuclear Weapons Cost Study Project estimates the price as $20 billion in 2014 dollars, resulting in similar conclusions.11 This post claims that 9 bombs were produced through the end of “Operation Crossroads” in 1946, citing Chuck Hansen’s Swords of Armageddon.12 The explosive power of these bombs was likely to be about 20kT, suggesting a total explosive capacity of 180kT. Anecdotes suggest that the cost to actually produce a bomb was about $25M,13 or about $335M in 2014 dollars. 
This would make the marginal cost around $16.8k per ton of TNT equivalent ($335M/20kT = $16.75k/T), and the average cost around $111k/T.\nIn 2013 the US apparently planned to build 3,000 nuclear weapons for $60B.14 However, it appears that at least some of these may be refurbishments rather than building from scratch, and the B61-12 design at least appears to be designed to be less powerful than it could be, since it is less powerful than the bombs it is replacing15 and much less powerful than a nuclear weapon such as the Tsar Bomba, with a yield of 50MT.16 The B61-12 is a 50kT weapon. These estimates give us $400/T ($60B / (3,000 x 50kT)). They are very approximate, for reasons given. However, we have not found better estimates. Note that they are for comparison, and not integral to our conclusions.\nThese estimates could likely be improved by a more careful survey, and extended to later nuclear weapons; the book Atomic Audit seems likely to contain useful resources.17\nYear | Description of explosive | Cost per ton TNT equivalent\n1920 | Ammonium nitrate | $5.6k\n1920 | TNT | $10.5k\n1946 | 9 (Mark 1 and Mark 3’s) x 20kt (marginal) | $16.8k (marginal Mark 3)\n1946 | 9 (Mark 1 and Mark 3’s) x 20kt (average) | $111k (average Mark 3)\n– | 3,000 weapons in the 3+2 plan | $400\nTable 2: Total, average and marginal costs associated with different weapons arsenals\nFigure 4: Cost-effectiveness of nuclear weapons\nCost-effectiveness of non-nuclear weapons\nWe have found little information about the cost of pre-nuclear bombs in the early 20th Century. However, what we have (explained below) suggests they cost a comparable amount to nuclear weapons, for a given amount of explosive energy.\nAmmonium nitrate and TNT appear to be large components of many high explosives used in WWII. For instance, blockbuster bombs were apparently filled with amatol, which is a mixture of TNT and ammonium nitrate.18\nAn appropriations bill from 1920 (p289) suggests that the 1920 price of ammonium nitrate was about $0.10-0.16 per pound, which is about $1.18 per pound in 2014.19 It suggests TNT was $0.44 per pound, or around $5.20 per pound in 2014. These estimates are consistent with that of a Quora commenter.20\nThis puts TNT at $10.4k/ton: very close to the $16.8k/ton marginal cost of an equivalent energy from Mark 3 nuclear weapons, and well below the average cost of Mark 3 nuclear weapons produced by the end of Operation Crossroads.\nAmmonium nitrate is about half as energy dense as TNT, suggesting a price of about $5.6k/T.21 22 This is substantially lower than the marginal cost of the Mark 3.\nNote that these figures are for explosive material only, whereas the costs of nuclear weapons used here are more inclusive. Ammonium nitrate may be far from the most expensive component of amatol-based explosives, and so what we have may be a very substantial underestimate for the price of conventional explosives. There is also some error from synergy between the components of amatol.23\nDiscontinuity Measurement\nWithout a longer-run price trend in explosives, we do not have enough pre-discontinuity data to measure a discontinuity.24 However, from the evidence we have here, it is unclear that nuclear weapons represent any development at all in cost-effectiveness, in terms of explosive power per dollar. 
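For readers who want to check the arithmetic, the per-ton figures above follow directly from the numbers quoted on this page (all in 2014 dollars). A minimal Python sketch, adding nothing beyond that arithmetic and carrying all of the caveats already noted:

LB_PER_TON = 2000  # short ton

# Conventional explosive: TNT at ~$5.20/lb (1920 price, inflation-adjusted)
tnt_per_ton = 5.20 * LB_PER_TON                     # ~ $10.4k per ton of TNT

# Early nuclear weapons (Mark 3 type, ~20 kilotons each)
yield_tons = 20_000
marginal_per_ton = 335e6 / yield_tons               # ~ $16.8k per ton equivalent
average_per_ton = 20e9 / (9 * yield_tons)           # ~ $111k per ton equivalent

# Later comparison: ~3,000 weapons at ~50 kilotons for ~$60B
modern_per_ton = 60e9 / (3000 * 50_000)             # ~ $400 per ton equivalent

print(tnt_per_ton, marginal_per_ton, average_per_ton, modern_per_ton)
# The marginal nuclear figure is within a factor of two of TNT itself,
# which is the comparison drawn in the surrounding text.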
Thus it seems unlikely that nuclear weapons were surprisingly cost-effective, at least on that metric.\nNotes", "url": "https://aiimpacts.org/discontinuity-from-nuclear-weapons/", "title": "Effect of nuclear weapons on historic trends in explosives", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2014-12-31T11:45:48+00:00", "paged_url": "https://aiimpacts.org/feed?paged=25", "authors": ["Katja Grace"], "id": "e41af7783301790d1947c2e802cfd9c1", "summary": []} {"text": "AGI-09 Survey\n\nBaum et al. surveyed 21 attendees of the AGI-09 conference, on AGI timelines with and without extra funding. They also asked about other details of AGI development such as social impacts, and promising approaches.\nTheir findings include the following:\n\nThe median dates when participants believe there is a 10% , 50% and 90% probability that AI will pass a Turing test are 2020, 2040, and 2075 respectively.\nPredictions changed by only a few years when participants were asked to imagine $100 billion (or sometimes $1 billion, due to a typo) in funding.\nThere was apparently little agreement on the ordering of milestones (‘turing test’, ‘third grade’, ‘Nobel science’, ‘super human’), except that ‘super human’ AI would not come before the other milestones.\nA strong majority of participants believed ‘integrative designs’ were more likely to contribute critically to creation of human-level AGI than narrow technical approaches.\n\nDetails\nDetailed results\nMedian confidence levels for different milestones\nTable 1 shows median dates given for different confidence levels of AI reaching four benchmarks: able to pass an online third grade test, able to pass a Turing test, able to produce science that would win a Nobel prize, and ‘super human’.\n\nBest guess times for various milestones\nFigure 2 shows the distribution of participants’ best guesses – probably usually interpreted as 50 percent confidence points – for the timing of these benchmarks, given status quo levels of funding.\n\nIndividual confidence intervals for each milestone\nFigure 4 shows all participants’ confidence intervals for all benchmarks. Participant 17 appears to be interpreting ‘best guess’ as something other than fiftieth percentile of probability, though the other responses appear to be consistent with this interpretation.\n\nExpected social impacts\nFigure 6 illustrates responses to three questions about social impact. The participants were asked about the probability of negative social impact, if the first AGI that can pass the Turing test is created by an open source project, by the United States military, or by a private company focused on commercial profit. The paper summarises that the experts lacked consensus.\n‘Fig. 6. Probability of a negative-to-humanity outcome for different development scenarios. The three development scenarios are if the first AGI that can pass the Turing test is created by an open source project (x’s), the United States military (squares), or a private company focused on commercial profit (triangles). Participants are displayed in the same order as in figure 4, such that Participant 1 in figure 6 is the same person as Participant 1 in figure 4.’\nMethodological details\nThe survey contained a set of standardized questions, plus individualized followup questions. It can be downloaded from here.\nIt included questions on:\n\nwhen AI would meet certain benchmarks (passing third grade, turing test, Nobel quality research, superhuman), with and without billions of dollars of additional funding. 
Participants were asked for confidence intervals (10%, 25%, 75%, 90%) and ‘best estimates’ (interpreted above as 50% confidence levels).\nEmbodiment of the first AGIs (physical, virtual, minimal)\nWhat AI software paradigm the first AGIs would be based on (formal neural networks, probability theory, uncertain logic, evolutionary learning, a large hand-coded knowledge-base, mathematical theory, nonlinear dynamical systems, or an integrative design combining multiple paradigms)\nProbability of strongly negative-to-humanity outcome if the first AGIs were created by different parties (an open-source project, the US military, or a private for-profit software company)\nIf quantum computing or hypercomputing would be required for AGI.\nWhether brain emulations would be conscious\nThe experts’ area of expertise\n\nParticipants\nMost of the participants were actively involved in AI research. The paper describes them:\nStudy participants have a broad range of backgrounds and experience, all with significant prior thinking about AGI. Eleven are in academia, including six Ph.D. students, four faculty members, and one visiting scholar, all in AI or allied fields. Three lead research at independent AI research organizations and three do the same at information technology organizations. Two are researchers at major corporations. One holds a high-level administrative position at a relevant non-profit organization. One is a patent attorney. All but four participants reported being actively engaged in conducting AI research.\nAccording to the website, the AGI-09 conference gathers “leading academic and industry researchers involved in serious scientific and engineering work aimed directly toward the goal of artificial general intelligence”. While these people are expert in the field, they are also probably highly selected for being optimistic about the timing of human-level AI. This seems likely to produce some bias.\nMeaning of ‘Turing test’\nSeveral meanings of ‘Turing test’ are prevalent, and it is unclear what distribution of them is being used by participants. The authors note that some participants asked about this ambiguity, and were encouraged verbally to consider the ‘one hour version’ instead of the ‘five minute version’, because the shorter one might be gamed by chat-bots (p6). The authors also write, ‘Using human cognitive development as a model, one might think that being able to do Nobel level science would take much longer than being able to conduct a social conversation, as in the Turing Test’ (p8). Both of these points suggest that the authors at least were thinking of a Turing test as a test of normal social conversation rather than a general test of human capabilities as they can be observed via a written communication channel.", "url": "https://aiimpacts.org/agi-09-survey/", "title": "AGI-09 Survey", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2014-12-29T18:50:48+00:00", "paged_url": "https://aiimpacts.org/feed?paged=25", "authors": ["Katja Grace"], "id": "2204e17894f7b4433d8f41ae1b0b6930", "summary": []} {"text": "Bainbridge Survey\n\nA survey of twenty-six technology experts in 2005 produced a median of 2085 as the year in which artificial intelligence would be able to functionally replicate a human brain (p344). They rated this application 5.6/10 in beneficialness to humanity.\nDetails\nIn 2005 William Bainbridge reported on a survey of 26 contributors to Converging Technologies reports. 
The contributors were asked when a large number of applications would be developed, and how beneficial they would be (see Appendix 1). The survey produced 2085 as the median year in which “the computing power and scientific knowledge will exist to build machines that are functionally equivalent to the human brain” (p344). The participants rated this development 5.6 out of 10 in beneficialness.\nParticipants\nBainbridge’s participants are contributors to ‘converging technology’ reports, which are on topics of nanotechnology, biotechnology, information technology, and cognitive science. From looking at what appears to be one of these reports, these seem to be mostly experts from government and national laboratories, academia, and the private sector. Few work in AI in particular. For instance, an arbitrary sample includes the Director of the Division of Behavioral and Cognitive Sciences at the National Science Foundation, a person from the Defense Threat Reduction Agency, and a person from HP laboratories.", "url": "https://aiimpacts.org/bainbridge-survey/", "title": "Bainbridge Survey", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2014-12-29T18:50:48+00:00", "paged_url": "https://aiimpacts.org/feed?paged=25", "authors": ["Katja Grace"], "id": "c25b88b21ca403fe65b21ae8be8f2103", "summary": []} {"text": "AI@50 Survey\n\nA seemingly informal seven-question poll was taken of participants at the AI@50 conference in 2006. 41% of respondents said it would take more than 50 years for AI to simulate every aspect of human intelligence, and 41% said it would never happen.\nDetails\nAI timelines question\nOne question was “when will computers be able to simulate every aspect of human intelligence?” 41% of respondents said “More than 50 years” and 41% said “Never”.\nParticipants and interpretation\nWe do not know how many people participated in the conference or in the poll.\nLuke Muehlhauser points out that many of the respondents were probably college students, rather than experts, and that the question may have been interpreted as asking in part about the possibility of machine consciousness.\nRecords of the poll\nInformation about the poll was available at http://web.archive.org/web/20110710193831/http://www.engagingexperience.com/ai50/ when we put up this page, but it has since become inaccessible. Secondary descriptions of parts of it exist at https://intelligence.org/2013/05/15/when-will-ai-be-created/#footnote_2_10199 and http://sethbaum.com/ac/2011_AI-Experts.pdf", "url": "https://aiimpacts.org/ai50-survey/", "title": "AI@50 Survey", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2014-12-29T18:50:48+00:00", "paged_url": "https://aiimpacts.org/feed?paged=25", "authors": ["Katja Grace"], "id": "ee928fdce7abe020bf9e21f91726c1d8", "summary": []} {"text": "Early Views of AI\n\nThis is an incomplete list of early works we have found discussing AI or AI related problems.\nList\n1. 
Claude Shannon (1950), in Programming a Computer for Playing Chess, offers the following list of “possible developments in the immediate future,”\n\nMachines for designing filters, equalizers, etc\nMachines for designing relay and switching circuits\nMachines which will handle routing of telephone calls based on the individual circumstances rather than by fixed patterns\nMachines for performing symbolic (non-numerical) mathematical operations\nMachines capable of translating from one language to another\nMachines for making strategic decisions in simplified military operations\nMachines capable of orchestrating a melody\nMachines capable of logical deduction\n\n2. The proposal for Dartmouth conference on AI offers the following “aspects of the artificial intelligence project”:\n\nAutomatic computers. This appears to be an application rather than an aspect of the problem; if you can describe how to do a task precisely, it can be automated.\nHow Can a Computer be Programmed to Use a Language\nHow can a set of (hypothetical) neurons be arranged so as to form concepts\nTheory of the size of a calculation\nSelf-improvement\nAbstractions. “A direct attempt to classify these and to describe machine methods of forming abstractions from sensory and other data would seem worthwhile.”\nRandomness and creativity\n", "url": "https://aiimpacts.org/early-views-of-ai/", "title": "Early Views of AI", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2014-12-29T18:47:46+00:00", "paged_url": "https://aiimpacts.org/feed?paged=25", "authors": ["Katja Grace"], "id": "dae148ccdc97f7c76f18c13c46febb68", "summary": []} {"text": "FHI Winter Intelligence Survey\n\nThe Future of Humanity Institute administered a survey in 2011 at their Winter Intelligence AGI impacts conference. Participants’ median estimate for a 50% chance of human-level AI was 2050.\nDetails\nAI timelines question\nThe survey included the question: “Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of human-level machine intelligence? Feel free to answer ‘never’ if you believe such a milestone will never be reached.”\nThe first quartile / second quartile / third quartile responses to each of these three questions were as follows:\n10% chance: 2015 / 2028 / 2030\n50% chance: 2040 / 2050 / 2080\n90% chance: 2100 / 2150 / 2250\nParticipants and selection effects\nSurvey participants probably expect AI sooner than comparably expert groups, by virtue of being selected from participants at the Winter Intelligence conference. The conference is described as focussing on “artificial intelligence and the impacts it will have on the world,” a topic of disproportionately great natural interest to researchers who believe that AI will substantially impact the world soon. 
The response rate to the survey was 41% (35 respondents), limiting response bias.\nWhen asked “Prior to this conference, how much have you thought about these issues?” the respondents were roughly evenly divided between “Significant interest,” “Minor research focus / sustained study,” and “major research focus.”\nWhen asked to describe their field, of the 35 respondents, 22% indicated an area that the survey administrators considered to be “AI and Robotics” as their field, 22% indicated a field considered to be “computer science and engineering,” and the remainder indicated a variety of fields with less direct relevance to AI progress (excepting perhaps cognitive science and neuroscience, whose prevalence the authors do not report). The administrators of the survey write:\n“There were no significant (as per ANOVA) inter-group differences in regards to who would develop AI, the outcomes, type of AI, expertise, or likelihood of Watson winning. Merging the AI and computer science group and the philosophy and general academia group did not change anything: participant views did not link strongly to their formal background. ” (p. 10).", "url": "https://aiimpacts.org/fhi-ai-timelines-survey/", "title": "FHI Winter Intelligence Survey", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2014-12-29T18:47:28+00:00", "paged_url": "https://aiimpacts.org/feed?paged=25", "authors": ["Katja Grace"], "id": "50792d815fb27e54f4159721badc4dac", "summary": []} {"text": "Hanson AI Expert Survey\n\nIn a small informal survey running since 2012, AI researchers generally estimated that their subfields have moved less than ten percent of the way to human-level intelligence. Only one (in the slowest moving subfield) observed acceleration.\nThis suggests on a simple extrapolation that reaching human-level capability across subfields will take over a century (in contrast with many other predictions).\nDetails\nRobin Hanson has asked experts in various social contexts to estimate how far we’ve come in their own subfield of AI research in the last twenty years, compared to how far we have to go to reach human level abilities. His results are listed in Table 1. 
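The ‘simple extrapolation’ mentioned in the summary above can be made concrete with a toy calculation. This is only an illustration of the outside-view logic, not Hanson’s own arithmetic, and the fractions used are examples rather than survey responses:

# If a subfield has covered a fraction f of the distance to human-level
# ability in the last 20 years, a naive linear extrapolation puts
# human-level ability about 20 * (1 - f) / f years away.

def naive_years_remaining(fraction_covered_in_20_years):
    f = fraction_covered_in_20_years
    return 20 * (1 - f) / f

for f in (0.01, 0.05, 0.10, 0.20):
    print(f"{f:.0%} of the way in 20 years -> roughly {naive_years_remaining(f):.0f} more years")
# Estimates of 10% or less, as most respondents below give, imply
# well over a century on this reading.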
He points out that on an outside view calculation, this suggests at least a century until human-level AI.\n\nYear added to list | Person | Subfield | Distance in 20y | Acceleration\n2012 | A few UAI attendees | – | 5-10% | ~0\n2012 | Melanie Mitchell | Analogical reasoning | 5% | ~0\n2012 | Murray Shanahan | Knowledge representation | 10% | ~0\n2013 | Wendy Hall | Computer-assisted training | 1% | –\n2013 | Claire Cardie (and Peter Norvig agrees in ’14) | Natural language processing | 20% | –\n2013 | Boi Faltings (and Peter Norvig agrees in ’14) | Constraint satisfaction | Past human-level 20 years ago | –\n2014 | Aaron Dollar | Robotic grasping manipulation | <1% | positive\n2014 | Peter Norvig | * | – | –\n2014 | Timothy Meese | Early human vision processing | 5% | negative\n2015 | Francesca Rossi | Constraint reasoning | 10% | negative\n2015 | Margaret Boden | No particular subfield | 5% | –\n2015 | David Kelley | Big data analysis | 5% | positive\n2016 | Henry Kautz | Constraint satisfaction | >100% | –\n2016 | Henry Kautz | Language | 10% | positive\n2016 | Jeff Legault | Robotics | 5% | positive\n2017 | Thore Husfeldt | Human-understandable explanation | <0.5% | –\nTable 1: Results from Robin Hanson’s informal survey\n*Hanson’s summary of Peter Norvig’s response seems hard to fit into this framework:\nAfter coming to a talk of mine, Peter Norvig told me that he agrees with both Claire Cardie and Boi Faltings, that on speech recognition and machine translation we’ve gone from not usable to usable in 20 years, though we still have far to go on deeper question answering, and for retrieving a fact or page that is relevant to a search query we’ve far surpassed human ability in recall and do pretty well on precision.", "url": "https://aiimpacts.org/hanson-ai-expert-survey/", "title": "Hanson AI Expert Survey", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2014-12-29T18:47:27+00:00", "paged_url": "https://aiimpacts.org/feed?paged=25", "authors": ["Katja Grace"], "id": "39b33af307c31147dfe66b3e0743295a", "summary": []} {"text": "Müller and Bostrom AI Progress Poll\n\nVincent Müller and Nick Bostrom of FHI conducted a poll of four groups of AI experts in 2012-13. Combined, the median date by which they gave a 10% chance of human-level AI was 2022, and the median date by which they gave a 50% chance of human-level AI was 2040.\nDetails\nAccording to Bostrom, the participants were asked when they expect “human-level machine intelligence” to be developed, defined as “one that can carry out most human professions at least as well as a typical human”. The results were as follows. The groups surveyed are described below.\n\nGroup | Response rate | 10% | 50% | 90%\nPT-AI | 43% | 2023 | 2048 | 2080\nAGI | 65% | 2022 | 2040 | 2065\nEETN | 10% | 2020 | 2050 | 2093\nTOP100 | 29% | 2022 | 2040 | 2075\nCombined | 31% | 2022 | 2040 | 2075\nFigure 1: Median dates for different confidence levels for human-level AI, given by different groups of surveyed experts (from Bostrom, 2014).\nSurveyed groups:\nPT-AI: Participants at the 2011 Philosophy and Theory of AI conference (88 total). By the list of speakers, this appears to have contained a fairly even mixture of philosophers, computer scientists and others (e.g. cognitive scientists). 
According to the paper, they tend to be interested in theory, to not do technical AI work, and to be skeptical of AI progress being easy.\nAGI: Participants at the 2012 AGI-12 and AGI Impacts conferences (111 total). These people mostly do technical work.\nEETN: Members of the Greek Association for Artificial Intelligence, which only accepts published AI researchers (250 total).\nTOP100: The 100 top authors in artificial intelligence, by citation, in all years, according to Microsoft Academic Search in May 2013. These people mostly do technical AI work, and tend to be relatively old and based in the US.", "url": "https://aiimpacts.org/muller-and-bostrom-ai-progress-poll/", "title": "Müller and Bostrom AI Progress Poll", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2014-12-29T18:47:11+00:00", "paged_url": "https://aiimpacts.org/feed?paged=25", "authors": ["Katja Grace"], "id": "67c95675f20a6d4e7a2b35e2b87977ae", "summary": []} {"text": "Kruel AI Interviews\n\nAlexander Kruel interviewed 37 experts on areas related to AI, starting in 2011 and probably ending in 2012. Of those answering the question in a full quantitative way, median estimates for human-level AI (assuming business as usual) were 2025, 2035 and 2070 for 10%, 50% and 90% probabilities respectively. It appears that most respondents found human extinction as a result of human-level AI implausible.\nDetails\nAI timelines question\nKruel asked each interviewee something similar to “Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?” Twenty respondents gave full quantitative answers. For those, the median estimates were 2025, 2035 and 2070 for 10%, 50% and 90% respectively, according to this spreadsheet (belonging to Luke Muehlhauser).\nAI risk question\nAlexander asked each interviewee something like:\n‘What probability do you assign to the possibility of human extinction as a result of badly done AI?\nExplanatory remark to Q2:\nP(human extinction | badly done AI) = ?\n(Where ‘badly done’ = AGI capable of self-modification that is not provably non-dangerous.)\nAn arbitrary selection of (abridged) responses; parts that answer the question relatively directly are emboldened:\n\nBrandon Rohrer: <1%\nTim Finin: .001\nPat Hayes: Zero. The whole idea is ludicrous.\nPei Wang: I don’t think it makes much sense to talk about “probability” here, except to drop all of its mathematical meaning…\nJ. Storrs Hall: …unlikely but not inconcievable. If it happens…it will be because the AI was part of a doomsday device probably built by some military for “mutual assured destruction”, and some other military tried to call their bluff. …\nPaul Cohen: From where I sit today, near zero….\nWilliam Uther: …Personally, I don’t think ‘Terminator’ style machines run amok is a very likely scenario….\nKevin Korb: …we have every prospect of building an AI that behaves reasonably vis-a-vis humans, should we be able to build one at all…\nThe ability of humans to speed up their own extinction will, I expect, not be matched any time soon by machine, again not in my lifetime\nMichael G. 
Dyer: Loss of human dominance is a foregone conclusion (100% for loss of dominance)…As to extinction, we will only not go extinct if our robot masters decide to keep some of us around…\nPeter Gacs: …near 1%…\n\nInterviewees\nThe MIRI dataset (to be linked soon) contains all of the ‘full’ predictions mentioned above, and seven more from the Kruel interviews that had sufficient detail for its purposes. Of those 27 participants, we class 10 as AGI researchers, 13 as other AI researchers, 1 as a futurist, and 3 as none of the above.", "url": "https://aiimpacts.org/kruel-ai-survey/", "title": "Kruel AI Interviews", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2014-12-29T18:47:11+00:00", "paged_url": "https://aiimpacts.org/feed?paged=25", "authors": ["Katja Grace"], "id": "28bbe2d38682c88ba95f111c852625a3", "summary": []} {"text": "Klein AGI Survey\n\nFuturist Bruce Klein ran an informal online survey in 2007, asking ‘When will AI surpass human-level intelligence?”. He got 888 responses, from ‘friends’ of unspecified nature.\nDetails\nThe results are shown below, taken from Baum et al, p4. Roughly 50% of respondents gave answers before 2050.\n", "url": "https://aiimpacts.org/klein-agi-survey/", "title": "Klein AGI Survey", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2014-12-29T18:47:11+00:00", "paged_url": "https://aiimpacts.org/feed?paged=26", "authors": ["Katja Grace"], "id": "b331abd32dedff0ec375a04336b85a55", "summary": []} {"text": "Similarity Between Historical and Contemporary AI Predictions\n\nAI predictions from public statements made before and after 2000 form similar distributions. Such predictions from before 1980 appear to be more optimistic, though predictions from a larger early survey are not.\nDiscussion\nSimilarity of predictions over time\nMIRI dataset\nWe compared early and late predictions using the MIRI dataset. We find the correlation between the date of a prediction and number of years until AI is predicted from that time is 0.13. Most predictions are in the last decade or two however, so this does not tell us much about long run trends (see Figure 1).\nFigure 1: Years until AI is predicted (minPY) over time in MIRI AI predictions dataset.\nThe six predictions prior to 1980 were all below the median 30 years, which would have less than 2% chance if they were really drawn from the same distribution.\nThe predictions made before and after 2000 form very similar distributions however (see figure 2). The largest difference between the fraction of pre-2000 and since-2000 people who predict AI by any given distance in the future is about 15%. A difference this large is fairly likely by chance, according to our unpublished calculations. See the MIRI dataset page for further details.\nFigure 2: Cumulative probability of prediction falling less than X years from date of writing (minPY; from MIRI AI predictions dataset)\nSurvey data\nThe surveys we know of provide some evidence against early predictions being more optimistic. Michie’s survey is the only survey we know of made more than ten years ago. Ten surveys have been made since 2005 that give median predictions or median fiftieth percentile dates. The median date in Michie’s survey is the third furthest out in the set of eleven: fifty years, compared to common twenty to forty year medians now. Michie’s survey does not appear to have involved options between twenty and fifty years however, making this result less informative. 
However, it suggests the researchers in the survey were not substantially more optimistic than researchers in modern surveys. They were also apparently more pessimistic than the six early statements from the MIRI dataset discussed above, though some difference should be expected from comparing a survey with public statements. Michie’s survey had sixty-three respondents, compared to the MIRI dataset’s six, making this substantial evidence.\nFigure 3: Survey results, as shown in Michie’s paper.\nArmstrong and Sotala on failed past predictions\nArmstrong and Sotala compare failed predictions of the past to all predictions, in an older version of the same dataset. This is not the same as comparing historical to contemporary predictions, but related. In particular, they use a subset of past predictions to conclude that contemporary predictions are likely to be similarly error-prone.\nThey find that predictions which are old enough to be proved wrong form a similar distribution to the entire set of predictions (see Figures 5-6). They infer that recent predictions are likely to be flawed in similar ways to the predictions we know to be wrong.\nThis inference appears to us to be wrong, due to a selection bias. Figure 4 illustrates how such reasoning fails with a simple scenario where the failure is clear. In this scenario, for thirty years people have divided their predictions between ten, twenty and thirty years out. In 2010 (the fictional present), many of these predictions are known to have been wrong (shown in pink). In order for the total distribution of predictions to match that of the past failed predictions, the modern predictions have to form a very different distribution from the historical predictions (e.g. one such distribution is shown in the ‘2010’ row).\nFigure 4: a hypothetical distribution of predictions.\nIn general, failed predictions are disproportionately short early predictions, and also disproportionately short bad predictions. If the distributions of failed and total predictions look the same, this suggests that the distribution of early and late predictions is not the same – the later predictions must include fewer longer predictions, to make up for the longer unfalsified predictions inherited from the earlier dates, as well as the longer predictions effectively missing from the earlier dates.\nIf the characteristic lengths of the predictions were small relative to the time between different predictions, these biases would be small. In the example in figure 4, if the distance between the groups had been one hundred years, there would be no problem. However, in the MIRI dataset, both are around twenty years.\nIn sum, the fact that failed predictions look like all predictions suggests that historical predictions came from a different distribution to present predictions. That would seem to be good news about present predictions, if past predictions were bad. However, we have other evidence from comparing predictions from before and after 2000 directly that those are fairly similar (see above), so if earlier methods were unsuccessful, this is some mark against current methods. 
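The selection effect described above is easy to see in a toy version of the Figure 4 scenario. The sketch below uses made-up cohorts and horizons purely to illustrate the argument:

from collections import Counter

# Forecasters in 1980, 1990, 2000 and 2010 each predict AI 10, 20 or 30
# years out. By the fictional present (2010), a prediction has failed if
# its predicted date has already passed.
NOW = 2010
cohorts = [1980, 1990, 2000, 2010]
horizons = [10, 20, 30]

all_predictions = [(year, h) for year in cohorts for h in horizons]
failed = [(year, h) for (year, h) in all_predictions if year + h < NOW]

print("horizons of all predictions:   ", Counter(h for _, h in all_predictions))
print("horizons of failed predictions:", Counter(h for _, h in failed))
# The failed subset is dominated by short, early predictions. So if the
# failed and overall distributions nevertheless looked alike, the recent
# cohorts would have to contain unusually few long predictions, which is
# the selection effect described above.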
On the other hand, public predictions from before 1980 appear to be systematically more optimistic.\nFigure 5: Figures from Armstrong and Sotala\nFigure 6: Figure from Armstrong and Sotala\nImplications\nAccuracy of AI predictions: if people make fairly similar predictions over time, this is some evidence that they are not making their predictions based on information about their environment, which has changed over the decades (at a minimum, time has passed). For instance, some suspect that people make their predictions a fixed number of years into the future, to maximize their personal benefits from making exciting but hard to verify predictions. Evidence for this particular hypothesis seems weak to us, however the general point stands. This evidence against accuracy is not as strong as it may seem however, since there appear to be reasonable prior distributions over the date of AI which look the same after seeing time pass.\n ", "url": "https://aiimpacts.org/similarity-between-historical-and-contemporary-ai-predictions/", "title": "Similarity Between Historical and Contemporary AI Predictions", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2014-12-29T18:46:50+00:00", "paged_url": "https://aiimpacts.org/feed?paged=26", "authors": ["Katja Grace"], "id": "4c331cc9d7a43e70b0341281403d82f7", "summary": []} {"text": "Human-Level AI\n\nPublished 23 January 2014, last updated Aug 7 2022\n‘Human-level AI’ refers to AI which can reproduce everything a human can do, approximately. Several variants of this concept are worth distinguishing.\nDetails\nVariations in the meaning of ‘human-level AI’\nConsiderations in specifying ‘human-level AI’ more precisely:\n\nDo we mean to imply anything about running costs? Is an AI that reproduces human behavior for ten billion dollars per year ‘human-level’, or does it need to be human-priced? See ‘human-level at any cost vs. human-level at human cost’ below for more details.\nWhat characteristics of a human need to be reproduced? Usually we do not mean that the AI should be indistinguishable from a human. For instance, we usually do not care whether it looks like a human. A common requirement is that the AI have the economically valuable skills of a human. We sometimes also talk about AI being ‘human-level’ in a narrower set of relevant characteristics, such as in its ability to do further AI research.\nWhat does it mean to reproduce human behavior? If AI replaces all hairdressers in society, but uniformly produces a slightly worse haircut in some dimensions (but so cheaply!), does that count as ‘human-level’? If not, then all humans may be replaced even though AI is not ‘human-level’. On the other hand, if this does count, then where is the line? Can an AI ‘reproduce human behavior’ by merely producing anything a buyer would prefer to have than what a human produces? Many machines already do this, and this is not what we mean.\nHow much of human behavior needs to be reproduced? If AI cannot entirely compete with humans for the job of waiter, merely because some small population prefers human waiters, this will not make a large difference to anything, so a requirement that human-level AI replace humans in all economically useful skills is too high a bar for what we are intuitively interested in. There is a further question of what metric one might use when specifying the bar.\nWhat conditions is the AI available under? Does it matter if it has actually been built? 
Need it be available in some particular marketplace, or quantity, or price range?\n\nRelated definitions\nHigh Level Machine Intelligence (HLMI)\nThe 2016 and 2022 Expert Surveys on Progress in AI use the following definition:\nSay we have ‘high-level machine intelligence’ when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption.\nSuperhuman AI\nA ‘superhuman’ system is meaningfully more capable than a human-level system. In practice the first human-level system is likely to be superhuman.\nKey issues\nHuman-level at any cost vs. human-level at human cost\nIn common usage, ‘human-level’ AI can mean either AI which can reproduce a human at any cost and speed, or AI which can replace a human (i.e. is as cheap as a human, and can be used in the same situations). Both are relevant for different issues. For instance, the ‘at any cost’ meaning is important when considering how people will respond to human-level artificial intelligence, or whether a human-level artificial intelligence will use illicit means to acquire resources and cause destruction. Human-level at human cost is the relevant concept when thinking about AI replacing humans in the labor market, the economy growing very fast, or legitimate AI development ramping up into an intelligence explosion.\nToday few applications are more than an order of magnitude more expensive to run than a human, suggesting a short time before an AI project came down in price to the cost of a human. However some applications are more expensive, and even if an early AI project were only a few orders of magnitude more expensive than a human per time, it may be much slower. Thus it is hard to make useful inferences about the potential time delay between an arbitrarily expensive human-level AI and an AI which might replace a human, even if we assume hardware continues to fall in price regularly.\n‘Human-level’ is superhuman\nAs explained at the Superintelligence Reading Group:\nAnother thing to be aware of is the diversity of mental skills. If by ‘human-level’ we mean a machine that is at least as good as a human at each of these skills, then in practice the first ‘human-level’ machine will be much better than a human on many of those skills. It may not seem ‘human-level’ so much as ‘very super-human’.\nWe could instead think of human-level as closer to ‘competitive with a human’ – where the machine has some super-human talents and lacks some skills humans have. This is not usually used, I think because it is hard to define in a meaningful way. There are already machines for which a company is willing to pay more than a human: in this sense a microscope might be ‘super-human’. There is no reason for a machine which is equal in value to a human to have the traits we are interested in talking about here, such as agency, superior cognitive abilities or the tendency to drive humans out of work and shape the future. 
Thus we talk about AI which is at least as good as a human, but you should beware that the predictions made about such an entity may apply before the entity is technically ‘human-level’.\nExample of how the first ‘human-level’ AI may surpass humans in many ways.", "url": "https://aiimpacts.org/human-level-ai/", "title": "Human-Level AI", "source": "aiimpacts.org", "source_type": "blog", "date_published": "2014-01-23T23:36:27+00:00", "paged_url": "https://aiimpacts.org/feed?paged=26", "authors": ["Katja Grace"], "id": "3e166657846a1185e706501d7c4681b8", "summary": []}